
7101564-Daily-Work


Thanks,
Kumaravelu S
DBA

From: Raghu Nadupalle (Cognizant)
Sent: Wednesday, April 18, 2007 12:28 PM

Subject: Memory Notification: Library Cache Object Loaded Into SGA

Type: PROBLEM   Last Revision Date: 30-MAR-2007   Status: PUBLISHED

In this Document: Symptoms, Changes, Cause, Solution, References

Applies to: Oracle Server - Enterprise Edition. This problem can occur on any platform.

Symptoms

The following messages are reported in alert.log after 10g Release 2 is installed:

Memory Notification: Library Cache Object loaded into SGA
Heap size 2294K exceeds notification threshold (2048K)

Changes

Installed or upgraded to 10g Release 2.

Cause

These are warning messages that should not cause the program responsible for them to fail. They appear as a result of the new event messaging mechanism and memory manager in 10g Release 2.

The meaning is that the process is spending a lot of time finding free memory extents during an allocation, as the memory may be heavily fragmented. Fragmentation in memory is impossible to eliminate completely; however, continued messages about large allocations in memory indicate there are tuning opportunities in the application.

The messages do not imply that an ORA-4031 is about to happen

Solution

In 10g there is a new undocumented parameter that sets the KGL heap size warning threshold. This parameter was not present in 10gR1. Warnings are written if a heap size exceeds this threshold. Set _kgl_large_heap_warning_threshold to a reasonably high value, or to zero, to prevent these warning messages. The value needs to be set in bytes.

If you want to set this to 8 MB (8192 * 1024 = 8388608 bytes) and are using an spfile:

(logged in as sysdba)

SQL> alter system set "_kgl_large_heap_warning_threshold"=8388608 scope=spfile;

SQL> shutdown immediate
SQL> startup

SQL> show parameter _kgl_large_heap_warning_threshold

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
_kgl_large_heap_warning_threshold    integer     8388608

If using an old-style init parameter file, edit the init parameter file and add:

_kgl_large_heap_warning_threshold=8388608

NOTE: The default threshold in 10.2.0.1 is 2M, so these messages can show up frequently in some application environments.

In 10.2.0.2 the threshold was increased to 50MB after regression tests, so this should be a reasonable and recommended value. If you continue to see these warning messages in the alert log after applying 10.2.0.2 or higher, an SR may be in order to investigate whether you are encountering a bug in the shared pool.

DATABASE LINK

create database link CDOI3 connect to cdo identified by cdo using 'CDOI3.cts.com';

select * from cdo.t1@CDOI3;

10.237.51.54

User Name: oc4jadmin   Password: pass1234

https://metalink.oracle.com/metalink/plsql/f?p=110194410067257338331514NO

Oracle Server - Enterprise and Standard Edition: DBA Administration Technical Forum

Displayed below are the messages of the selected thread

Thread Status: Closed

From: Sara Dyer  18-Feb-05 14:57  Subject: ORA-04020 on startup of database

RDBMS Version: 9.2.0.4.0
Operating System and Version: HP-UX B.11.00
Error Number (if applicable): ORA-04020
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

ORA-04020 on startup of database

I'm attempting to set up multi-master replication. I ran catalog.sql as sys, as suggested in Note 122039.1. The below error occurred when running catalog.sql. I now cannot connect to the database using Enterprise Manager or a web application; I can only connect via SQL*Plus. I have restarted the database several times, and each time I start up the below error occurs. I have tried running utlrp.sql and receive the same error:

ERROR at line 15:
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_REPUTIL
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24

Also, the following error occurs when attempting to access the web application:

Fri, 18 Feb 2005 12:06:21 GMT
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_STANDARD
DAD name: devltimetrk
PROCEDURE: time_sheet.display
URL: http://144.101.26.144:1643/pls/devlTimeTrk/time_sheet.display

From: Otto Rodriguez  18-Feb-05 21:59  Subject: Re: ORA-04020 on startup of database

Try the following:
1. Set parameters in your updated initSID.ora (created from the spfile): AQ_TM_PROCESSES=0 and _SYSTEM_TRIG_ENABLED=FALSE

2. Rename the spfile, shut down, and STARTUP MIGRATE
3. Run catalog.sql again
4. Comment out the parameters added in step 1
5. Rename back your spfile
6. Shut down and STARTUP normal

From: Sara Dyer  22-Feb-05 16:34  Subject: Re: ORA-04020 on startup of database

That fixed my original problem. I put my pfile back the way it was, and now I am getting this:

ORACLE instance started.

Total System Global Area  488075536 bytes
Fixed Size                    737552 bytes
Variable Size              452984832 bytes
Database Buffers            33554432 bytes
Redo Buffers                  798720 bytes
Database mounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-04045: errors during recompilation/revalidation of XDB.DBMS_XDBZ0
ORA-04098: trigger 'SYS.T_ALTER_USER_B' is invalid and failed re-validation

I tried recompiling everything with utlrp.sql but received the "trigger is invalid" error, and I tried adding _system_trig_enabled=TRUE in my pfile; no help.

Thank you

Sara

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener: DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with multi-threaded (shared) server connections, you might need to increase the value of large_pool_size.
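A minimal sketch of the two workarounds mentioned above; the 64m value is illustrative, not from the original note:

```
# listener.ora -- restart the listener after adding this line
DIRECT_HANDOFF_TTC_LISTENER=OFF
```

```
-- only if using shared (MTS) server connections
SQL> alter system set large_pool_size=64m scope=spfile;
```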

START INFORMATICA REPOSITORY

su - informat
cd /informatica/repositoryserver
./pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN-SOLARIS 10G CONSOLE

smc &

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on;

HOW TO CREATE DATABASE MANUALLY

A) INIT.ORA PARAMETERS

spool off
$ ksh
$ set -o vi

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=(/oradata2/oracle9i/admin/DWDEV/control01.ctl, /oradata2/oracle9i/admin/DWDEV/control02.ctl)
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT

C)

SQL> create database DWDEV
  2  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  3  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
  4  group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  5  default temporary tablespace temp
  6  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  7  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog.sql & catproc.sql

NO. OF CPUs RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on

CREATE CONSTRAINT

create table ri_primary_key_1 (
  a number,
  b number,
  c number,
  constraint pk_name primary key (a, b)
);

alter table table_name add constraint some_name primary key (columnname1, columnname2);

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate;

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

insert into table_name select * from table_name;
create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, a few things need to be taken care of. First, all the database-related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

Depending on how you log in to the oracle account in Unix, you should have the environment set for the user oracle. To confirm that the environment variables are set, do a "env | grep ORACLE" and you will notice ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have ORACLE_SID and ORACLE_HOME set, do it now.

Also make sure that you set ORACLE_SID and ORACLE_HOME correctly, or else you will end up deleting another database. Next, you will have to query all the database-related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login with connect / as sysdba at svrmgrl
02. Startup the database if it's not already started. The database must be at least mounted.
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing show parameter ifile (alternatively, check the content of init.ora)
09. spool off
10. Delete at the OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database; if there is, remove it
14. DONE
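The dictionary queries in steps 03-09 can be collected in one pass with a small script; this is only a sketch, and it assumes ORACLE_SID/ORACLE_HOME are already set for the database you intend to drop:

```
#!/bin/ksh
# Spool the list of files belonging to the database to be dropped.
sqlplus -s "/ as sysdba" <<'EOF'
spool /tmp/deletelist.lst
select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;
archive log list
spool off
exit
EOF
```

The spooled /tmp/deletelist.lst then drives step 10 (the OS-level deletes).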

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any platform. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Log in as sys in SQL*Plus and run the following SQL scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

- Import always stores the rows according to the partitioning scheme of the target table.
- Partition-level Import inserts only the row data from the specified source partitions or subpartitions.
- If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.
- Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.
- Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.
- If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.
- The partition or subpartition name in the parameter refers only to the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.
- If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.
- If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.
- If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.
- If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.
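As a hedged illustration of the table:partition syntax the guidelines above describe (the user, dump file, table, and partition names here are hypothetical):

```
imp scott/tiger FILE=exp.dmp TABLES=(sales:p_q1) IGNORE=y ROWS=y
```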

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                   PROPERTY_VALUE
------------------------------  ------------------------------
DICT.BASE                       2
DEFAULT_TEMP_TABLESPACE         TEMP
DBTIMEZONE                      +01:00
NLS_NCHAR_CHARACTERSET          AL16UTF16
GLOBAL_DB_NAME                  ARONGENERALICH
EXPORT_VIEWS_VERSION            8
NLS_LANGUAGE                    AMERICAN
NLS_TERRITORY                   AMERICA
NLS_CURRENCY                    $
NLS_ISO_CURRENCY                AMERICA
NLS_NUMERIC_CHARACTERS          .,
NLS_CHARACTERSET                WE8ISO8859P1
NLS_CALENDAR                    GREGORIAN
NLS_DATE_FORMAT                 DD-MON-RR
NLS_DATE_LANGUAGE               AMERICAN
NLS_SORT                        BINARY
NLS_TIME_FORMAT                 HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT            DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT              HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT         DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY               $
NLS_COMP                        BINARY
NLS_LENGTH_SEMANTICS            BYTE
NLS_NCHAR_CONV_EXCP             FALSE
NLS_RDBMS_VERSION               9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check the default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check whether every user's TEMPORARY_TABLESPACE is set correctly:

USERNAME    TEMPORARY_TABLESPACE  ACCOUNT_STATUS
----------  --------------------  ----------------
SYS         TEMPRY                OPEN
SYSTEM      TEMP                  OPEN
OUTLN       TEMP                  OPEN
DBSNMP      TEMP                  OPEN
DBMONITOR   TEMP                  OPEN
TEST        TEMP                  OPEN
WMSYS       TEMP                  EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter the user (for example sys) to the correct tablespace name with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively, recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database:

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them

It's two very large files (150-160 MB each):
/920/assistants/dbca/templates/Data_Warehouse.dfj
/920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan   Messages: 86   Registered: June 2005

Member

Hi. Files that have a .DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those DB creation templates in the future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.10.137:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

Emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837      1  0  May 24  ?      11:47  ora_pmon_poi
ora9i    2305      1  0  Mar 29  ?      23:59  ora_pmon_portal
ora9i    2321      1  0  Mar 29  ?      24:17  ora_pmon_EDMS
ora10g  17394      1  0  Apr 02  ?      1:28:57  ora_pmon_POI2
orainst 14743  14365  0  11:02:43  pts/3  0:00  grep pmon

CREATE DIRECTORY

create directory utl_dir as 'path';
grant all on directory utl_dir

Modify the given parameter

utl_file_dir

If any timeout request

SQLNET.INBOUND_CONNECT_TIMEOUT

Any privilege for DBMS package

grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load a dump into the Sybase database:

load database database_name from

load database database_name from "compress::path"

load database database_name from "compress::path01"
stripe on "compress::path02"

dump database database_name to 'path'

Those scripts should be run to install the JVM:

/javavm/install/initjvm.sql
/opt/oracle10g/xdk/admin/initxml.sql
/opt/oracle10g/xdk/admin/xmlja.sql
/opt/oracle10g/rdbms/admin/catjava.sql
/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql
rdbms/admin/rmaqjms.sql
rdbms/admin/rmcdc.sql
xdk/admin/rmxml.sql
javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137 - Backup Report

SYBASE - Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm (database name)
6. sp_who
7. go
8. shutdown with nowait
9. /sybase/syb125/ASE-12_5/install
10. startserver -f RUN_gsms
    online database gem_curr
11. sp_helpdb
12. sp_configure
13. sp_configure 'parameter', new_value

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics, including the number of rows in each table:

exec dbms_stats.gather_database_stats;

create or replace procedure sess1.kill_session
  (v_sid number, v_serial number) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least the 9.2.0.3 version. Anyway, you can try the fix below.

Change the listener and database services' "Log On" user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double oracle_home? '/var/opt/oracle/--Install.loc'

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
       b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i
  export ORACLE_SID
  sqlplus "/ as sysdba" <<!
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
!
done

optinfoallinfo

For HP-UX file extend:

fuser -c /oradata2
umount /oradata2
lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2
mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS       - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE  - All object grants made by the user or on user-owned objects
ALL_TAB_PRIVS_RECD  - All object grants to the user or public
DBA_SYS_PRIVS       - System privileges granted to users and roles
DBA_ROLES           - List of all roles in the database
DBA_ROLE_PRIVS      - Roles granted to users and to other roles
ROLE_ROLE_PRIVS     - Roles granted to other roles
ROLE_SYS_PRIVS      - System privileges granted to roles
ROLE_TAB_PRIVS      - Table privileges granted to roles
SESSION_PRIVS       - All privileges currently available to the user
SESSION_ROLES       - All roles currently available to the user
USER_SYS_PRIVS      - System privileges granted to the current user
USER_TAB_PRIV       - Grants on objects where the current user is grantee, grantor, or owner

DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v  (the output shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_ot

Script for starting and stopping the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE GMACDEV RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
  MAXLOGFILES 16
  MAXLOGMEMBERS 2
  MAXDATAFILES 30
  MAXINSTANCES 1
  MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
  2  where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file 'oracle' has the following permissions (cd $ORACLE_HOME/bin): 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using ls -l oracle; they should be: -rwsr-s--x
6. Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 minutes for it to complete.

AM oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
AM oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
AM oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ','.
1> select name from sysconfigures where name like "device"
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go
name
--------------------------------------------------------------------------------
number of devices
suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
------------------  -------  -----------  ------------  ---------  ------  -------
number of devices   10       36           60            60         number  dynamic

(1 row affected) (return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
------------------  -------  -----------  ------------  ---------  ------  -------
number of devices   10       44           70            70         number  dynamic

(1 row affected) Configuration option changed. The SQL Server need not be rebooted since the option is dynamic. Changing the value of 'number of devices' to 70 increases the amount of memory ASE uses by 12 K. (return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run the setup.exe
2. Shut down the database
3. startup migrate
4. Run the below scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and its SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
  and c.sid = b.session_id
  and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
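Putting the five time fields above together with a command (the sixth field), a sample entry might look like the following; the script and log paths are purely illustrative:

```
# min  hour  day-of-month  month  day-of-week  command
30 2 * * 0 /home/oracle/scripts/cold_backup.sh > /tmp/cold_backup.log 2>&1
```

This would run the (hypothetical) cold_backup.sh at 02:30 every Sunday.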

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a facility that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor.sql

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME      MON USED
----------------------- --------------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER        YES NO
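Once an index has shown USED = NO over a representative workload period, monitoring can be stopped and the index dropped; a sketch using the illustrative index name from the output above:

```sql
-- stop monitoring, then drop the unused index (name is illustrative)
ALTER INDEX customer_last_name_idx NOMONITORING USAGE;
DROP INDEX customer_last_name_idx;
```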

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  Strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset, you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. touch the user's crontab file
2. check the cron.deny file also

How to calculate the database size

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>') AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>') AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
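That sum can be computed in a single query; a minimal sketch, assuming a 10g-style v$controlfile that exposes the BLOCK_SIZE and FILE_SIZE_BLKS columns:

```sql
-- datafiles + control files + redo logs, reported in megabytes
SELECT ( (SELECT SUM(bytes) FROM dba_data_files)
       + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
       + (SELECT SUM(bytes * members) FROM v$log)
       ) / 1024 / 1024 AS total_mb
FROM dual;
```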

Regards, Taj (http://dbataj.blogspot.com). Jun 1 (13 hours ago): Babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online
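To check which tablespace is currently the database default temporary tablespace, DATABASE_PROPERTIES can be queried:

```sql
-- shows the current default temporary tablespace for the database
SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';
```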

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option, by James F. Koopmann (Database Expert). Posted 1/12/2006.

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks besides the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple Include the statement at the end of the CREATE TABLESPACE statement Here is an example

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
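For the periodic maintenance mentioned above, one common approach is to archive and then trim the trail; a sketch run as a DBA (the archive table name is hypothetical, and a backup should be taken first):

```sql
-- archive the current audit records, then trim the audit trail
CREATE TABLE aud_archive AS SELECT * FROM sys.aud$;
DELETE FROM sys.aud$;
COMMIT;
```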

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');
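To verify that the schema-wide recompile left nothing broken, a quick check for invalid objects (same schema name as above):

```sql
-- count objects still INVALID after the schema-wide recompile
SELECT object_type, COUNT(*)
FROM   dba_objects
WHERE  owner = 'ATT' AND status = 'INVALID'
GROUP  BY object_type;
```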

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1   Type: DIAGNOSTIC TOOLS

Last Revision Date: 30-MAY-2007

Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8, or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name, Product Version: RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from
the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from
any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQLPlus

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic# and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own, they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ (QMN)
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g., a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1   Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the

Oracle HTTP Server error_log file

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination homes service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination homes service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
5.x and 6.x

- The client machines will have a wininet.dll with a version number of
6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
-> Right click on the file
-> Select Properties
-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.
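The manual edit in step 2 can be scripted. A minimal sketch with sed, using an illustrative file under /tmp rather than a real Oracle home (GNU sed assumed):

```shell
# Create a sample httpd.conf fragment (illustrative path, not a real Oracle home)
cat > /tmp/httpd.conf <<'EOF'
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
EOF

# Comment out the existing directive and append the override,
# mirroring the manual edit described in the note (GNU sed \n in replacement)
sed -i 's/^KeepAlive On$/#KeepAlive On\nKeepAlive Off/' /tmp/httpd.conf

grep -c '^KeepAlive Off$' /tmp/httpd.conf   # prints 1
```

After a change like this, the HTTP Server component still has to be restarted (and dcmctl updateConfig run) as described above.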

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is: exporting operators, exporting referential integrity constraints, exporting triggers,

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 INSTANCE STOP/START

1. Log in as the db2 user: su - db2inst1 (bash)

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no. of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

Reply

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning

you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus
stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only an additional reference to the file, not a copy of the file; the data is removed only when the last remaining link to it is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same (note that most filesystems do not actually permit hard links to directories). To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot
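The behavior described above can be verified with a few throwaway files; a minimal sketch (the file names here are illustrative):

```shell
cd /tmp
rm -f original.txt sym.txt hard.txt

echo "hello" > original.txt
ln -s /tmp/original.txt sym.txt   # symbolic link: a pointer to the name
ln /tmp/original.txt hard.txt     # hard link: another name for the same data

# Both read the same content while the original name exists
cat sym.txt    # hello
cat hard.txt   # hello

# Delete the original name: the hard link still reaches the data,
# while the symbolic link is left dangling
rm original.txt
cat hard.txt                                  # hello
cat sym.txt 2>/dev/null || echo "dangling"    # dangling
```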


If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.
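The same generate-then-execute pattern can be sketched outside the database. Here a plain shell loop stands in for the spooled query, working from a hypothetical list of segment type/name pairs (the file and segment names are illustrative):

```shell
# Hypothetical stand-in for rows returned from dba_segments
cat > /tmp/segments.txt <<'EOF'
TABLE EMP
TABLE DEPT
INDEX EMP_PK
EOF

# Build one statement per segment, as the spooled query above would
while read -r type name; do
  printf "alter %s %s move tablespace xyz;\n" "$type" "$name"
done < /tmp/segments.txt > /tmp/objects_move.sql

cat /tmp/objects_move.sql
# alter TABLE EMP move tablespace xyz;
# alter TABLE DEPT move tablespace xyz;
# alter INDEX EMP_PK move tablespace xyz;
```

(As the text notes, indexes still need rebuilding afterwards; for an index, ALTER INDEX ... REBUILD TABLESPACE is the usual form.)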

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL, FALSE);

Using orapwd to Connect Remotely as SYSDBA. August 5, 2003. Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR ADV     BYTES PROCESSED BYTES RW      CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912    .5   ON   17250304   0   100.00   3

18874368    .75  ON   17250304   0   100.00   3

25165824   1     ON   17250304   0   100.00   0

30198784   1.2   ON   17250304   0   100.00   0

35231744   1.4   ON   17250304   0   100.00   0

40264704   1.6   ON   17250304   0   100.00   0

45297664   1.8   ON   17250304   0   100.00   0

50331648   2     ON   17250304   0   100.00   0

75497472   3     ON   17250304   0   100.00   0

100663296  4     ON   17250304   0   100.00   0

150994944  6     ON   17250304   0   100.00   0

201326592  8     ON   17250304   0   100.00   0

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
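The selection rule used above — pick the smallest target whose estimated over-allocation count is zero — can be mechanized over a saved copy of the advice rows. A sketch, assuming two columns (pga_target_for_estimate, estd_overalloc_count) already sorted ascending as in the query above:

```shell
# Saved advice rows: <pga_target_for_estimate> <estd_overalloc_count>
cat > /tmp/pga_advice.txt <<'EOF'
12582912 3
18874368 3
25165824 0
30198784 0
EOF

# Smallest candidate target with zero estimated over-allocations
awk '$2 == 0 { print $1; exit }' /tmp/pga_advice.txt   # prints 25165824
```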

There are other views that are also useful for PGA memory management

v$process

select max(pga_used_mem)  max_pga_used_mem,
       max(pga_alloc_mem) max_pga_alloc_mem,
       max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select sum(pga_used_mem)  sum_pga_used_mem,
       sum(pga_alloc_mem) sum_pga_alloc_mem,
       sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issues a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail  --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3. Else:

(a) shutdown immediate  [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;  --> to remove any audit trail data residing in the table

4. SQL> audit table;  --> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  --> this query gives you the username, along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on, until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup, until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
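The two flavors map directly onto RMAN syntax. A sketch of the commands, run from an RMAN session connected to the target database:

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   # base backup for all incrementals

# Differential level 1 (the default): blocks changed since the most
# recent level 1 or level 0 backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

# Cumulative level 1: blocks changed since the most recent level 0 backup
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```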

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things to you in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
>> ORA-27300: OS system dependent operation:semget failed with status: 28
>> ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


The messages do not imply that an ORA-4031 is about to happen.

Solution

In 10g, we have a new undocumented parameter that sets the KGL heap size warning threshold. This parameter was not present in 10gR1. Warnings are written if a heap size exceeds this threshold. Set _kgl_large_heap_warning_threshold to a reasonably high value, or to zero, to prevent these warning messages. The value needs to be set in bytes.

If you want to set this to 8MB (8192 * 1024 bytes) and are using an spfile:

(logged in as sysdba)

SQL> alter system set "_kgl_large_heap_warning_threshold"=8388608 scope=spfile;

SQL> shutdown immediate
SQL> startup

SQL> show parameter _kgl_large_heap_warning_threshold

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
_kgl_large_heap_warning_threshold    integer     8388608

If using an old-style init parameter:

Edit the init parameter file and add:

_kgl_large_heap_warning_threshold=8388608

NOTE: The default threshold in 10.2.0.1 is 2M, so these messages could show up frequently in some application environments.

In 10.2.0.2, the threshold was increased to 50MB after regression tests, so this should be a reasonable and recommended value. If you continue to see these warning messages in the alert log after applying 10.2.0.2 or higher, an SR may be in order to investigate whether you are encountering a bug in the Shared Pool.

DATABASE LINK

create database link CDOI3 connect to cdo identified by cdo using 'CDOI3.cts.com';

select * from cdo.t1@CDOI3;

10.237.51.54

User Name: oc4jadmin
Password: pass1234

httpsmetalinkoraclecommetalinkplsqlfp=110194410067257338331514NO

Oracle Server - Enterprise and Standard Edition: DBA Administration Technical Forum

Displayed below are the messages of the selected thread.

Thread Status: Closed

From: Sara Dyer 18-Feb-05 14:57
Subject: ORA-04020 on startup of database

RDBMS Version: 9.2.0.4.0
Operating System and Version: HP-UX B.11.00
Error Number (if applicable): ORA-04020
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

ORA-04020 on startup of database

I'm attempting to set up multi-master replication. I ran catalog.sql as sys, as suggested in Note 122039.1. The below error occurred when running catalog.sql. I now cannot connect to the database using Enterprise Manager or a web application; I can only connect via SQL*Plus. I have restarted the database several times, and each time I startup the below error occurs. I have tried running utlrp.sql and receive the same error:

ERROR at line 15:
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_REPUTIL
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24

Also, the following error occurs when attempting to access the web application:

Fri, 18 Feb 2005 12:06:21 GMT
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_STANDARD
DAD name: devltimetrk
PROCEDURE: time_sheet.display
URL: http://144101261441643/pls/devlTimeTrk/time_sheet.display

From: Otto Rodriguez 18-Feb-05 21:59

Subject: Re: ORA-04020 on startup of database

Try the following:

1. Set parameters in your updated init<SID>.ora (create from spfile):
AQ_TM_PROCESSES=0
_SYSTEM_TRIG_ENABLED=FALSE

2. Rename the spfile, shutdown, and STARTUP MIGRATE.
3. Run catalog.sql again.
4. Comment out the parameters added in step 1.
5. Rename back your spfile.
6. Shutdown and STARTUP normal.

From: Sara Dyer 22-Feb-05 16:34
Subject: Re: ORA-04020 on startup of database

That fixed my original problem. I put my pfile back the way it was, and now I am getting this:

ORACLE instance started.

Total System Global Area  488075536 bytes
Fixed Size                   737552 bytes
Variable Size             452984832 bytes
Database Buffers           33554432 bytes
Redo Buffers                 798720 bytes
Database mounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-04045: errors during recompilation/revalidation of XDB.DBMS_XDBZ0
ORA-04098: trigger SYS.T_ALTER_USER_B is invalid and failed re-validation

I tried recompiling everything with utlrp.sql but received the "trigger is invalid" error, and I tried adding _system_trig_enabled and setting it to _system_trig_enabled=TRUE in my pfile; no help.

Thank you

Sara

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener:

DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi-Threaded Server connections, you might need to increase the value of large_pool_size.

START INFORMATICA REPOSITORY

su - informat
cd informatica/repositoryserver
pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN SOLARIS 10G CONSOLE

smc &

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on;

HOW TO CREATE DATABASE MANUALLY

A) INIT.ORA PARAMETERS

$ ksh
$ set -o vi

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=(/oradata2/oracle9i/admin/DWDEV/control01.ctl, /oradata2/oracle9i/admin/DWDEV/control02.ctl)
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT

C)

SQL> create database DWDEV
  2  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  3  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
  4  group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  5  default temporary tablespace temp
  6  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  7  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog & catproc

NO. OF CPUs RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on

CREATE CONSTRAINT

create table ri_primary_key_1 (

a number,

b number,

c number,

constraint pk_name primary key (a, b)

);

Alter table table_name add constraint some_name primary key (columnname1, columnname2);

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate;

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

Insert into table_name select * from table_name;
Create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

It depends how you login to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do a env | grep ORACLE and you will notice that your ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have the ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set the ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login as "connect / as sysdba" at svrmgrl
02. Startup the database if it's not already started; the database must be at least mounted
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing "show parameter ifile" (alternatively check the content of the init.ora)
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database; if yes, remove it
14. DONE
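Step 10 above (deleting, at OS level, the files spooled to /tmp/deletelist.lst) can be sketched as a small shell helper. purge_listed_files is a hypothetical name; it skips non-path lines that sqlplus spools tend to contain, and it defaults to a dry run since this permanently removes datafiles:

```shell
# Sketch of step 10: remove the files named in a spooled list.
# Defaults to a dry run; pass 0 as the second argument to really delete.
purge_listed_files() {   # usage: purge_listed_files <list-file> [dryrun]
  list=$1; dryrun=${2:-1}
  [ -f "$list" ] || return 0
  while IFS= read -r f; do
    case $f in
      /*) # keep only absolute paths; spool noise (prompts, blank lines) is skipped
          if [ "$dryrun" = "1" ]; then
            echo "would remove: $f"
          else
            rm -f -- "$f"
          fi ;;
    esac
  done < "$list"
}
```

Example: `purge_listed_files /tmp/deletelist.lst` first to review, then `purge_listed_files /tmp/deletelist.lst 0` once the list is confirmed.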

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. Create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any platform. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Login as sys in SQL*Plus and run the following SQL scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

Select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode It lets you selectively load data from specified partitions or subpartitions in an export file Keep the following guidelines in mind when using partition-level import

- Import always stores the rows according to the partitioning scheme of the target table.

- Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

- If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

- Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

- Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

- If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

- The partition or subpartition name in the parameter refers to only the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

- If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

- If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

- If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

- If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check the default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check that every user's TEMPORARY_TABLESPACE is set correctly:

USERNAME    TEMPORARY_TABLESPACE  ACCOUNT_STATUS
----------- --------------------- -----------------
SYS         TEMPRY                OPEN
SYSTEM      TEMP                  OPEN
OUTLN       TEMP                  OPEN
DBSNMP      TEMP                  OPEN
DBMONITOR   TEMP                  OPEN
TEST        TEMP                  OPEN
WMSYS       TEMP                  EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter the user (for example sys) to the correct tablespace name with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES?

And what can happen if I delete them

It's two very large files (150-160 MB each): /920/assistants/dbca/templates/Data_Warehouse.dfj and /920/assistants/dbca/templates/Transaction_Processing.dfj


Hi. Files that have a DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those DB creation templates in the future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.10.137:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837     1  0   May 24        11:47 ora_pmon_poi
ora9i    2305     1  0   Mar 29        23:59 ora_pmon_portal
ora9i    2321     1  0   Mar 29        24:17 ora_pmon_EDMS
ora10g  17394     1  0   Apr 02      1:28:57 ora_pmon_POI2
orainst 14743 14365  0 11:02:43 pts/3  0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl_dir ...

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database

Load database database_name from ...

Load database database_name from "compress::path"

Load database database_name from "compress::path01"
stripe on "compress::path02"

Dump database database_name to 'path'

Those scripts should run for install JVM

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should run for Uninstall JVM

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137 - Backup Report

SYBASE - Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm (database name)
6. sp_who
7. go
8. shutdown with nowait
9. /sybase/syb125/ASE-12_5/install
10. startserver -f RUN_gsms
    online database gem_curr
11. sp_helpdb
12. sp_configure
13. sp_configure "parameter", new_value

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracleHome2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least the 9.2.0.3 version. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double ORACLE_HOME ('/var/opt/oracle/...Install.loc').

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name = ... and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i
  export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
EOF
done

optinfoallinfo

For HP-UX filesystem extension:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls  2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v   (output shows whether the OS is 32-bit or 64-bit)

10.237.209.11
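The date/dd/date pattern above brackets a copy with timestamps so the elapsed time can be read off. A runnable instance over a small scratch file (paths are illustrative):

```shell
# Bracket a dd copy with timestamps to see how long it takes.
in=/tmp/dd_in.$$; out=/tmp/dd_out.$$
dd if=/dev/zero of="$in" bs=1024 count=16 2>/dev/null   # make a 16 KB input file

date                                 # timestamp before the copy
dd if="$in" of="$out" bs=1024 2>/dev/null
date                                 # timestamp after; the difference is the copy time
```

With a large input (a datafile, a raw device), the two timestamps give a quick throughput estimate without any extra tooling.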

isql -Udba -Scso_ot

Script for starting and stopping the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the file oracle ($ORACLE_HOME/bin/oracle) has the permissions 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute: chmod 6751 oracle
5. Check the file permissions on oracle using ls -l oracle; they should be -rwsr-s--x

Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"
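The format string above lost its % conversions in copying; a plausible single-line reconstruction printing date and time is:

```shell
# Assumed reconstruction of the garbled format string:
# month/day/year plus hour:minute:second.
stamp=$(date +"DATE: %m/%d/%y TIME: %H:%M:%S")
echo "$stamp"
```

All the conversions used here (%m, %d, %y, %H, %M, %S) are standard strftime specifiers, so this works with both GNU and BSD date.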

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0
Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 Minutes to complete

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near '...'.
1> select name from sysconfigures where name like device
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near '...'.
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------ -------- ------------ ------------- ---------- ------- --------
 number of devices  10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------ -------- ------------ ------------- ---------- ------- --------
 number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run the setup.exe
2. Shutdown the database
3. startup migrate
4. Run the below scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and its SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
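Putting the five time fields in front of a command gives a complete crontab line. The entry below (the script path is hypothetical) would fire at minute 30, hour 2, any day of month, any month, on Sunday (0); splitting it shows the field order:

```shell
# An illustrative crontab entry: 02:30 every Sunday.
# We only build and inspect the line here; installing it would be
# done with 'crontab <file>'.
set -f                       # disable globbing so the '*' fields stay literal
entry='30 2 * * 0 /u01/scripts/full_backup.sh'
set -- $entry                # split into the six fields
echo "minute=$1 hour=$2 dom=$3 month=$4 dow=$5 cmd=$6"
set +f
```

The same split is what cron itself performs: five time patterns, then everything else is the command.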

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify the indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple Oracle9i has a tool that allows you to monitor index usage with an alter index command You can then query and find those indexes that are unused and drop them from the database

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor
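The spool above just generates one ALTER INDEX ... MONITORING USAGE line per index. The same generation step can be mimicked in shell from a list of owner/index pairs (the names here are made up):

```shell
# Generate the monitoring commands from "owner index_name" pairs,
# mirroring what the spooled SELECT produces. SCOTT/HR names are
# illustrative, not from the article.
printf '%s\n' 'SCOTT EMP_IDX' 'HR LOC_IDX' |
awk '{ printf "alter index %s.%s monitoring usage;\n", $1, $2 }' > run_monitor.sql
cat run_monitor.sql
```

Either way, the resulting run_monitor.sql is then executed once, and from that point v$object_usage starts recording whether each index is touched.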

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  Strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch user
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>') AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>') AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj (http://dbataj.blogspot.com)

Jun 1: babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.
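Taj's formula is a straight sum of file sizes. As a sketch, summing arbitrary files' sizes in shell (db_size_bytes is an illustrative helper; point it at the real datafiles, controlfiles, and redo logs):

```shell
# Sum the byte sizes of the given files; the caller supplies the
# database's datafiles, controlfiles, and redo logs.
db_size_bytes() {            # usage: db_size_bytes file...
  total=0
  for f in "$@"; do
    sz=$(wc -c < "$f")       # portable byte count (works on GNU and BSD)
    total=$((total + sz))
  done
  echo "$total"
}
```

For example, `db_size_bytes /oradata/*.dbf /oradata/*.ctl /oradata/*.log` would print the total in bytes; divide by 1024*1024 for MB.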

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where u is a unique 8-digit code, g is the logfile group number, and t is the tablespace name.

File Type            Format
-------------------  -------------
Controlfiles         ora_u.ctl
Redo Log Files       ora_g_u.log
Datafiles            ora_t_u.dbf
Temporary Datafiles  ora_t_u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online
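The current setting can be verified in the DATABASE_PROPERTIES dictionary view:

```sql
SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';
```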

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this generate no redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries, no wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for, with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist or off the freelist" scenario.
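For contrast, the manual dictionary-era approach described above would have looked something like this hypothetical table definition (the table name and values are illustrative, not recommendations):

```sql
CREATE TABLE orders_hist (
  id     NUMBER,
  filler VARCHAR2(200)
)
PCTFREE 10             -- space reserved in each block for row updates
PCTUSED 40             -- block rejoins the freelist below 40% usage
STORAGE (
  FREELISTS       4    -- one freelist per concurrent inserting process
  FREELIST GROUPS 2    -- mainly for multi-instance (OPS/RAC) access
);
```

Under automatic segment space management, all three of these settings are simply ignored.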

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the SEGMENT SPACE MANAGEMENT AUTO clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.
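For reference, the DBMS_SPACE.SPACE_USAGE procedure mentioned above (valid only for ASSM segments) can be called along these lines; the owner and table name are placeholders:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_unf_blks  NUMBER; l_unf_bytes  NUMBER;
  l_fs1_blks  NUMBER; l_fs1_bytes  NUMBER;  -- 0-25% free
  l_fs2_blks  NUMBER; l_fs2_bytes  NUMBER;  -- 25-50% free
  l_fs3_blks  NUMBER; l_fs3_bytes  NUMBER;  -- 50-75% free
  l_fs4_blks  NUMBER; l_fs4_bytes  NUMBER;  -- 75-100% free
  l_full_blks NUMBER; l_full_bytes NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
    l_unf_blks, l_unf_bytes,
    l_fs1_blks, l_fs1_bytes, l_fs2_blks, l_fs2_bytes,
    l_fs3_blks, l_fs3_bytes, l_fs4_blks, l_fs4_bytes,
    l_full_blks, l_full_bytes);
  DBMS_OUTPUT.PUT_LINE('Full blocks:          ' || l_full_blks);
  DBMS_OUTPUT.PUT_LINE('Blocks 75-100% free:  ' || l_fs4_blks);
END;
/
```

The OUT parameters report block counts per freeness bucket, which is exactly the granular "more than on/off the freelist" state the article describes.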

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL and DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
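A simple query pulling those columns from DBA_AUDIT_TRAIL might look like this (the column list follows the view definition; the ordering is a choice, not a requirement):

```sql
SELECT username, terminal, timestamp,
       owner, obj_name, action_name
FROM   dba_audit_trail
ORDER  BY timestamp;
```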

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

sqlplus '/ as sysdba' (HP-UX, AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher. Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: Platform independent
Date Created: Version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL Trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL directory to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus. The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box. Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K (AIX: report kernel bitness)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
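As a sketch, the parameter would be set in the init.ora; the value 2 here is illustrative, and should stay at or below the CPU count as discussed above:

```sql
-- init.ora fragment: use multiple DB writers OR I/O slaves, not both.
db_writer_processes = 2
-- dbwr_io_slaves = 4    -- the alternative approach, for single-writer setups
```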

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:
<Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM
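To see which job is running when CPU rises, a query along these lines joins DBA_JOBS_RUNNING to V$SESSION and DBA_JOBS (a sketch):

```sql
SELECT r.job, r.sid, s.serial#, j.what
FROM   dba_jobs_running r,
       dba_jobs         j,
       v$session        s
WHERE  r.job = j.job
AND    r.sid = s.sid;
```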

Advanced Queuing, also known as AQ (QMN)
======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------
Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this, use Windows Explorer to locate the file at WINNT\system32\wininet.dll -> right click on the file -> select Properties -> click on the Version tab.
  (See http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details.)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007

Expert Answer Center > Expert Knowledgebase

I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager, even though you can do this as the SYS user; however, connecting to the database as the SYS user is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance: $ db2stop
4. Start the instance (as an instance owner on the host running db2, issue the following command): $ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner
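To see how close sessions actually come to the OPEN_CURSORS limit, a query like the following against V$SESSTAT/V$STATNAME can help:

```sql
SELECT ss.sid, ss.value AS current_open_cursors
FROM   v$sesstat ss, v$statname sn
WHERE  sn.name = 'opened cursors current'
AND    ss.statistic# = sn.statistic#
ORDER  BY ss.value DESC;
```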

Billy Verreynne (Posts: 4,016; Registered: 5/27/99)

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors. I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors. I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, V$SYSTEM_EVENT & V$SESSION_EVENT.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). This file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). This file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
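The inode behaviour described above is easy to verify on any Unix system. A minimal sketch (all file names hypothetical): a hard link shares the original's inode and survives deletion of the original, while a symlink points by name and dangles once the name is gone.

```shell
# Demonstrate hard link vs symbolic link (hypothetical file names).
workdir=$(mktemp -d)
cd "$workdir"
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: shares the inode of original.txt
ln -s original.txt soft.txt     # symbolic link: its own inode, points by name
inode_orig=$(ls -i original.txt | awk '{print $1}')
inode_hard=$(ls -i hard.txt | awk '{print $1}')
# Deleting the original leaves the hard link's data intact,
# but the symlink now points at a missing name.
rm original.txt
cat hard.txt                    # still prints: hello
```

Running `cat soft.txt` afterwards fails, because the symlink's target name no longer exists.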

The syntax for creating a hard link of a directory is the same, though note that most Unix filesystems do not actually permit hard links to directories. To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

>spool <your_path>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query is stored in the spool file objects_move.log.

>@<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in sqlplus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error: SQL> grant sysdba to scott; ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR      ADV BYTES PROCESSED  BYTES RW       CACHE HIT  ALLOC COUNT
----------- ---------- ---- ---------------- -------------- ---------- --------------
12582912    0.5        ON   17250304         0              100.00     3
18874368    0.75       ON   17250304         0              100.00     3
25165824    1.0        ON   17250304         0              100.00     0
30198784    1.2        ON   17250304         0              100.00     0
35231744    1.4        ON   17250304         0              100.00     0
40264704    1.6        ON   17250304         0              100.00     0
45297664    1.8        ON   17250304         0              100.00     0
50331648    2.0        ON   17250304         0              100.00     0
75497472    3.0        ON   17250304         0              100.00     0
100663296   4.0        ON   17250304         0              100.00     0
150994944   6.0        ON   17250304         0              100.00     0
201326592   8.0        ON   17250304         0              100.00     0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA, this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
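The selection rule implied above (pick the smallest target whose estimated over-allocation count is zero) can be scripted against a spooled copy of the view. A hedged sketch, assuming a hypothetical two-column CSV export (target,overalloc) sorted by target:

```shell
# Pick the smallest pga_target_for_estimate with zero over-allocation,
# from a hypothetical CSV spooled out of v$pga_target_advice.
cat > advice.csv <<'EOF'
12582912,3
18874368,3
25165824,0
30198784,0
EOF
# Rows are sorted ascending, so the first zero-overalloc row is the smallest.
best=$(awk -F, '$2 == 0 { print $1; exit }' advice.csv)
echo "suggested pga_aggregate_target: $best"
```

With the sample data this picks 25165824 (24M), matching the 25M conclusion drawn from the query output above.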

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of current PGA usage across all processes.

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1. Login into the db as sysdba.

2. SQL> show parameter audit_trail      -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:
2(a) shutdown immediate              -- to enable the audit trail
2(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
2(c) create spfile from pfile
2(d) startup

3. SQL> truncate table aud$;          -- removes any audit trail data residing in the table
4. SQL> audit table;                  -- starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
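A file-level analogue of the two strategies can be sketched with plain `find -newer`, using marker files to stand in for backup times (all names hypothetical; RMAN itself tracks changes at the block level, not the file level):

```shell
# Simulate backup-selection logic with marker files (hypothetical names).
workdir=$(mktemp -d)
cd "$workdir"
touch -t 202001010000 file_a          # unchanged since before the full backup
touch -t 202001020000 marker_full     # time of the level 0 (full) backup
touch -t 202001030000 file_b          # changed after the full backup
touch -t 202001040000 marker_incr     # time of a level 1 backup
touch -t 202001050000 file_c          # changed after the level 1 backup
# Cumulative: everything changed since the level 0 backup.
cumulative=$(find . -type f -newer marker_full ! -name 'marker*' | sort)
# Differential: only what changed since the most recent backup of any level.
differential=$(find . -type f -newer marker_incr ! -name 'marker*' | sort)
echo "cumulative: $cumulative"
echo "differential: $differential"
```

Here the cumulative set picks up both file_b and file_c, while the differential set contains only file_c, mirroring the space/restore-time trade-off described above.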

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


select * from cdot1.CDOI3

102375154

User Name: oc4jadmin / Password: pass1234

httpsmetalinkoraclecommetalinkplsqlfp=110194410067257338331514NO

Oracle Server - Enterprise and Standard Edition: DBA Administration Technical Forum

Displayed below are the messages of the selected thread

Thread Status Closed

From: Sara Dyer, 18-Feb-05 14:57. Subject: ORA-04020 on startup of database

RDBMS Version: 9.2.0.4.0. Operating System and Version: HP-UX B.11.00. Error Number (if applicable): ORA-04020. Product (i.e. SQL*Loader, Import, etc.): Product Version:

ORA-04020 on startup of database

I'm attempting to set up multi-master replication. I ran catalog.sql as sys, as suggested in Note 122039.1. The below error occurred when running catalog.sql. I now cannot connect to the database using Enterprise Manager or a web application; I can only connect via sqlplus. I have restarted the database several times, and each time I start up, the below error occurs. I have tried running utlrp.sql and receive the same error. ERROR at line 15: ORA-04020: deadlock detected while trying to lock object SYS.DBMS_REPUTIL ORA-06508: PL/SQL: could not find program unit being called ORA-06512: at line 24 ORA-06508: PL/SQL: could not find program unit being called ORA-06512: at line 24

Also the following error occurs when attempting to access the web application

Fri, 18 Feb 2005 12:06:21 GMT: ORA-04020: deadlock detected while trying to lock object SYS.DBMS_STANDARD. DAD name: devltimetrk. PROCEDURE: time_sheet.display. URL: http144101261441643plsdevlTimeTrktime_sheetdisplay

From Otto Rodriguez 18-Feb-05 2159

Subject Re ORA-04020 on startup of database

Try the following:

1. Set parameters in your updated init<SID>.ora (create from spfile):
   AQ_TM_PROCESSES=0
   _SYSTEM_TRIG_ENABLED=FALSE
2. Rename the spfile, shutdown, and STARTUP MIGRATE
3. Run catalog.sql again
4. Comment out the parameters added in step 1
5. Rename back your spfile
6. Shutdown and STARTUP normal

From: Sara Dyer, 22-Feb-05 16:34. Subject: Re: ORA-04020 on startup of database

That fixed my original problem. I put my pfile back the way it was, and now I am getting this:

ORACLE instance started.

Total System Global Area  488075536 bytes
Fixed Size                   737552 bytes
Variable Size             452984832 bytes
Database Buffers           33554432 bytes
Redo Buffers                 798720 bytes
Database mounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-04045: errors during recompilation/revalidation of XDB.DBMS_XDBZ0
ORA-04098: trigger 'SYS.T_ALTER_USER_B' is invalid and failed re-validation

I tried recompiling everything with utlrp.sql but received the "trigger is invalid" error, and I tried adding _system_trig_enabled=TRUE in my pfile; no help.

Thank you

Sara

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener: DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi-Threaded Server connections, you might need to increase the value of large_pool_size.

START INFORMATICA REPOSITORY

su - informat
cd informatica/repositoryserver
pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN-SOLARIS 10G CONSOLE

smc&

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on;

HOW TO CREATE DATABASE MANUALLY

A)INITORA PARAMETER

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=(/oradata2/oracle9i/admin/DWDEV/control01.ctl, /oradata2/oracle9i/admin/DWDEV/control02.ctl)
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT

C)

SQLgt create database DWDEV

  2  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  3  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
  4  group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  5  default temporary tablespace temp
  6  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  7  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog & catproc

NO OF CPU RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on smdest_data;

CREATE CONSTRAINT

create table ri_primary_key_1 (
  a number,
  b number,
  c number,
  constraint pk_name primary key (a, b)
);

Alter table table_name add constraint some_name primary key (columnname1, columnname2);

ENABLE NO VALIDATE amp DROP CONSTAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate;

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

Insert into table_name select * from table_name;
Create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database related files (e.g. *.dbf, *.ctl, *.rdo, *.arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all database links pointing to it need to be removed, since they will be invalid anyway.

It depends how you login to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do env | grep ORACLE and you will notice ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login as connect as sysdba at svrmgrl
02. Startup the database if it is not already started; the database must be at least mounted
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing show parameter ifile (alternatively, check the content of init.ora)
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database; if yes, remove it
14. DONE
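Step 10 is the destructive one, so it pays to dry-run it. A hedged sketch (hypothetical file names and paths) of removing exactly the files named in the spooled list, checking first that every listed file actually exists:

```shell
# Delete only the files named in a spooled list (hypothetical names/paths).
workdir=$(mktemp -d)
printf 'data\n' > "$workdir/system01.dbf"
printf 'data\n' > "$workdir/redo01.log"
printf '%s\n' "$workdir/system01.dbf" "$workdir/redo01.log" > "$workdir/deletelist.lst"
# Dry run: report any listed file that does not exist before removing anything.
while read -r f; do [ -f "$f" ] || echo "missing: $f"; done < "$workdir/deletelist.lst"
# Real run: remove the listed files one by one.
while read -r f; do rm -f "$f"; done < "$workdir/deletelist.lst"
```

Reviewing the dry-run output before the real run guards against a spool file that accidentally lists files from the wrong ORACLE_SID.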

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9idata/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9idata/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

Locations of pupbld:

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. Create tablespace: create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the script at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any platform of Oracle database. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Login as sys in SQL*Plus and run the following SQL scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

Select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers only to the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If default temporary tablespace is wrong the alter it with the following command

SQL> alter database default temporary tablespace temp;

To check default temporary tablespace for all users of the database

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check if all users' TEMPORARY_TABLESPACE is set to correct settings:

USERNAME                       TEMPORARY_TABLESPACE ACCOUNT_STATUS
------------------------------ -------------------- ----------------
SYS                            TEMPRY               OPEN
SYSTEM                         TEMP                 OPEN
OUTLN                          TEMP                 OPEN
DBSNMP                         TEMP                 OPEN
DBMONITOR                      TEMP                 OPEN
TEST                           TEMP                 OPEN
WMSYS                          TEMP                 EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter it with the correct tablespace name for that user (for example sys) with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE CONSTAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF U DELETE THOSE FILES

And what can happen if I delete them?

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re What are these files GOOD for [message 126248 is a reply to message 126216 ]

Sat 02 July 2005 0009

Achchan. Messages: 86. Registered: June 2005

Member

Hi. Files that have a DFJ extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation, we have to run this script:

javavm/install/initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.101.37:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

db_2k_cache_size = 10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java14/bin:$PATH
export JAVA_HOME=/opt/java14/jre

ora9i    8837      1  0   May 24  11:47 ora_pmon_poi
ora9i    2305      1  0   Mar 29  23:59 ora_pmon_portal
ora9i    2321      1  0   Mar 29  24:17 ora_pmon_EDMS
ora10g  17394      1  0   Apr 02  1:28:57 ora_pmon_POI2
orainst 14743  14365  0  11:02:43 pts/3  0:00 grep pmon

CREATE DIRECTORY:

create directory utl_dir as 'path';
grant all on directory utl_dir to <user>;

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database

Load database database_name from ...

Load database database_name from "compress::path"

Load database database_name from "compress::path01"
  stripe on "compress::path02"

Dump database database_name to 'path'

Those scripts should run for install JVM

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted resolve any invalid objects by

running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should run for Uninstall JVM

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE - Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm   (database name)
6. sp_who
7. go
8. shutdown with nowait
9. /Sybase/syb125/ASE-12_5/install
10. startserver -f RUN_gsms
    online database gem_curr
11. sp_helpdb
12. sp_configure
13. sp_configure "parameter", new_value

vgdisplay -v vg02 | grep "LV Name" | more

For Truncate the table

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command for gather statistics of Number of rows in each table

exec dbms_stats.gather_database_stats;

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double oracle_home? ('/var/opt/oracle' -- Install.loc)

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
       b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name = '<object_name>' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i; export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
EOF
done

optinfoallinfo

For Hp-ux File Extend

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls   2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespaces user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v   -- reports whether the OS is 32-bit or 64-bit

1023720911

isql -Udba -Scso_otpwSQL

script for starting and stopping the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

1023720469

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
  MAXLOGFILES 16
  MAXLOGMEMBERS 2
  MAXDATAFILES 30
  MAXINSTANCES 1
  MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
 where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem
=======================
Make sure the file 'oracle' under $ORACLE_HOME/bin has permissions 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 Minutes to complete

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1: Server 'ddm', Line 1: Incorrect syntax near '%'.
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------  -------  -----------  ------------  ---------  ------  -------
 number of devices   10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------  -------  -----------  ------------  ---------  ------  -------
 number of devices   10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
  name = "gem_hist_data7",
  physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
  size = "1600M"
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
  and c.sid = b.session_id
  and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
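For example, a crontab line using these five fields plus a command; the script path here is hypothetical:

```
# min  hour  day-of-month  month  day-of-week  command
30     23    *             *      *            /home/oracle/scripts/nightly_backup.sh
```

This runs the script every day at 23:30; install it with `crontab <file>` as the owning user.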

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes All SQL inserts updates and delete will run slower if they have to update a large number of indexes each time a row in a table is changed

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i there was no way to identify those indexes that were not being used by SQL queries This tip describes the Oracle9i method that allows the DBA to locate and delete un-used indexes

The approach is quite simple Oracle9i has a tool that allows you to monitor index usage with an alter index command You can then query and find those indexes that are unused and drop them from the database

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON  USED
----------------------  ----------  ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO
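Once an index shows USED = NO over a representative workload, monitoring can be switched off and the index dropped. A sketch using the index from the sample output above (the SCOTT owner is hypothetical):

```sql
-- stop usage monitoring, then drop the unused index
alter index SCOTT.CUSTOMER_LAST_NAME_IDX nomonitoring usage;
drop index SCOTT.CUSTOMER_LAST_NAME_IDX;
```

Make sure the monitoring window covered all periodic workloads (month-end reports, batch jobs) before dropping anything.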

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations
CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired
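A sketch of both statements together; the character set names here are illustrative, so verify strict-superset compatibility for your own database first:

```sql
-- issued inside the same restricted-session window described above
ALTER DATABASE CHARACTER SET WE8ISO8859P1;
ALTER DATABASE NATIONAL CHARACTER SET AL16UTF16;
```

Take a full backup before either statement, since neither can be rolled back.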

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. touch the user's crontab file
2. check the cron.deny file also

HOW TO CALCULATE THE DATABASE SIZE

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
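That sum can be computed in one query. A sketch; the block_size and file_size_blks columns of v$controlfile are an assumption that holds on 10g and later releases:

```sql
-- datafiles + controlfiles + redo logs (all members), in MB
select ( (select sum(bytes) from dba_data_files)
       + (select sum(block_size * file_size_blks) from v$controlfile)
       + (select sum(bytes * members) from v$log) ) / 1024 / 1024 as total_mb
from dual;
```

Add dba_temp_files as well if you count temporary tablespaces as part of the database size.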

Regards, Taj  http://dbataj.blogspot.com  Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where u is a unique 8 digit code, g is the logfile group number, and t is the tablespace name:

File Type            Format
Controlfiles         ora_u.ctl
Redo Log Files       ora_g_u.log
Datafiles            ora_t_u.dbf
Temporary Datafiles  ora_t_u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option

James F. Koopmann (Database Expert)  Posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite awhile that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this generate no extra redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist as available for inserts. The issue with setting a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out of the box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.
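For contrast, under the old manual approach these parameters were set explicitly in each object's storage clause. A sketch (the table and values are hypothetical, and it applies only in a tablespace with manual segment space management):

```sql
-- manual settings: 4 freelists for concurrent inserters,
-- blocks return to the freelist when usage drops below 40%
create table emp_history (
  empno  number,
  notes  varchar2(2000)
)
pctfree 10
pctused 40
storage (freelists 4 freelist groups 1);
```

With automatic segment space management, all three of these settings are simply ignored.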

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.
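The DBMS_SPACE.SPACE_USAGE procedure mentioned above can be invoked like this. A sketch, where SCOTT.EMP is a placeholder for a segment in an ASSM tablespace:

```sql
set serveroutput on
declare
  -- one block/byte pair per bitmap freeness state
  unf number; unfb number;
  fs1 number; fs1b number;  -- 0-25% free
  fs2 number; fs2b number;  -- 25-50% free
  fs3 number; fs3b number;  -- 50-75% free
  fs4 number; fs4b number;  -- 75-100% free
  full number; fullb number;
begin
  dbms_space.space_usage('SCOTT', 'EMP', 'TABLE',
    unf, unfb, fs1, fs1b, fs2, fs2b, fs3, fs3b, fs4, fs4b, full, fullb);
  dbms_output.put_line('full blocks:        ' || full);
  dbms_output.put_line('75-100% free blocks: ' || fs4);
end;
/
```

The procedure raises an error if the segment lives in a tablespace that is not using automatic segment space management.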

SELECT DBTIMEZONE FROM DUAL; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: Machine that the user performed the action from
- Timestamp: When the action occurred
- Object Owner: The owner of the object that was interacted with
- Object Name: The name of the object that was interacted with
- Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
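A minimal archive-then-purge sketch; the 90-day retention and the TIMESTAMP# column (the audit date column in releases up to 9i) are assumptions, so adjust for your release and policy:

```sql
-- copy audit rows older than 90 days to an archive table, then remove them
create table aud_archive as
  select * from sys.aud$ where timestamp# < sysdate - 90;

delete from sys.aud$ where timestamp# < sysdate - 90;
commit;
```

Run this from a scheduled job as SYS, and export the archive table before truncating it if long-term retention is required.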

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba'   (HP-UX/AIX)

EXEC DBMS_UTILITYcompile_schema(ATT)

EXEC DBMS_UTILITYanalyze_schema(ATTCOMPUTE)

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent
Date Created: version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer(<raw trace filename on udump directory>);

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at \oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where hash_value = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like '%session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io = false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

It is best to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size:

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
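To see how close sessions actually come to the limit before raising it, a query along these lines can help (the 'opened cursors current' statistic tracks per-session open cursors):

```sql
select s.sid,
       s.value open_cursors,
       p.value "OPEN_CURSORS limit"
from v$sesstat s,
     v$statname n,
     (select value from v$parameter where name = 'open_cursors') p
where n.name = 'opened cursors current'
  and s.statistic# = n.statistic#
order by s.value desc;
```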

Werner

Billy Verreynne - Re: no. of open cursors (posted Aug 26, 2007, in response to 174313):

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

To move all of a tablespace's objects to another tablespace (xyz), spool the generated statements:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Rebuild the indexes

and gather statistics for those objects.


How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace:

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace:

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003, Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.
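A sketch of that per-session override (the IMP_USER account name and the 100 MB sort area are illustrative, not from the original note):

```sql
-- Manual PGA management for the current session only
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;  -- 100 MB, illustrative

-- Or set it automatically for a hypothetical import account via a logon trigger
CREATE OR REPLACE TRIGGER imp_user_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMP_USER' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';
  END IF;
END;
/
```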

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE        UNIT
---------------------------------------- ------------ ------------
aggregate PGA auto target                829440000    bytes
aggregate PGA target parameter           2516582400   bytes
bytes processed                          2492928000   bytes
cache hit percentage                     86.31        percent
extra bytes read/written                 395366400    bytes
global memory bound                      125747200    bytes
maximum PGA allocated                    2666188800   bytes
maximum PGA used for auto workareas      17203200     bytes
maximum PGA used for manual workareas    52531200     bytes
over allocation count                    0
PGA memory freed back to OS              675020800    bytes
total freeable PGA memory                6553600      bytes
total PGA allocated                      2395750400   bytes
total PGA inuse                          1528320000   bytes
total PGA used for auto workareas        0            bytes
total PGA used for manual workareas      0            bytes

16 rows selected.

The statistic 'maximum PGA allocated' will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED        ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR     ADV BYTES PROCESSED  BYTES RW        CACHE HIT %   ALLOC COUNT
---------- ---------- --- ---------------- --------------- ------------- --------------
  12582912        .50 ON          17250304               0        100.00              3
  18874368        .75 ON          17250304               0        100.00              3
  25165824       1.00 ON          17250304               0        100.00              0
  30198784       1.20 ON          17250304               0        100.00              0
  35231744       1.40 ON          17250304               0        100.00              0
  40264704       1.60 ON          17250304               0        100.00              0
  45297664       1.80 ON          17250304               0        100.00              0
  50331648       2.00 ON          17250304               0        100.00              0
  75497472       3.00 ON          17250304               0        100.00              0
 100663296       4.00 ON          17250304               0        100.00              0
 150994944       6.00 ON          17250304               0        100.00              0
 201326592       8.00 ON          17250304               0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions; with a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
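The selection rule used above (take the smallest pga_target_for_estimate whose estimated over-allocation count is zero) can be sketched outside the database, for example with awk over a spooled copy of the advice rows. The file name and two-column layout are illustrative; the values are the ones from the query output above.

```shell
# Two columns per line: pga_target_for_estimate and estd_overalloc_count,
# as if spooled from v$pga_target_advice (file name and layout are illustrative).
cat > /tmp/pga_advice.txt <<'EOF'
12582912 3
18874368 3
25165824 0
30198784 0
EOF

# First row with a zero over-allocation count = smallest safe target, in bytes
awk '$2 == 0 { print $1; exit }' /tmp/pga_advice.txt
```

With the sample rows this prints the 25165824-byte row, matching the 25M conclusion above.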

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of current PGA usage across all processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail   --> checks whether the audit trail is turned on.

If the output is:

NAME        TYPE    VALUE
----------- ------- ------
audit_trail string  DB

then go to step 3; else:
2(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$   --> removes any audit trail data residing in the table.
   SQL> audit table   --> this starts auditing events pertaining to tables.

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';   --> this query gives you the username along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
  iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
  message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
  temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
  iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

So if you do an incremental backup on Tuesday, you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.
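The two change-tracking rules discussed in this thread can be mimicked at the file level with backup "marker" timestamps and find -newer. This is only a toy sketch (RMAN tracks changed blocks, not whole files), and all paths are throwaway:

```shell
# Simulate: level 0 backup Sunday, incremental Monday, new change Tuesday.
rm -rf /tmp/bkdemo && mkdir -p /tmp/bkdemo
touch /tmp/bkdemo/full.marker     # Sunday: level 0 (full) backup taken
sleep 1
touch /tmp/bkdemo/data1           # Monday's change...
touch /tmp/bkdemo/incr.marker     # ...captured by Monday's incremental backup
sleep 1
touch /tmp/bkdemo/data2           # Tuesday: a new change

# Cumulative rule: everything changed since the last level 0 backup
find /tmp/bkdemo -newer /tmp/bkdemo/full.marker -name 'data*'

# Differential-incremental rule: only what changed since the most recent backup
find /tmp/bkdemo -newer /tmp/bkdemo/incr.marker -name 'data*'
```

The first find reports both data files; the second reports only Tuesday's change, which is why each incremental is smaller than the corresponding cumulative backup.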

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

• A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

• A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
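The behaviour described in both bullets can be verified in a scratch directory (all paths here are throwaway):

```shell
rm -rf /tmp/linkdemo && mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: a second name for the same inode
ln -s original.txt soft.txt     # symbolic link: a pointer to the name
ls -l soft.txt                  # long listing shows "soft.txt -> original.txt"

rm original.txt                 # delete the original name
cat hard.txt                    # data still reachable through the hard link
cat soft.txt 2>/dev/null || echo "dangling symlink"   # symlink now points nowhere
```

After the rm, the hard link still serves the file's contents, while the symbolic link dangles: exactly the "information will be lost" distinction above, seen from the surviving link's side.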

Technical Forum

Displayed below are the messages of the selected thread

Thread Status Closed

From: Sara Dyer 18-Feb-05 14:57
Subject: ORA-04020 on startup of database

RDBMS Version: 9.2.0.4.0
Operating System and Version: HP-UX B.11.00
Error Number (if applicable): ORA-04020
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

ORA-04020 on startup of database

I'm attempting to set up multi-master replication. I ran catalog.sql as sys, as suggested in Note:122039.1. The below error occurred when running catalog.sql. I now cannot connect to the database using Enterprise Manager or a web application; I can only connect via sqlplus. I have restarted the database several times, and each time I startup the below error occurs. I have tried running utlrp.sql and receive the same error:

ERROR at line 15:
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_REPUTIL
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24

Also the following error occurs when attempting to access the web application

Fri, 18 Feb 2005 12:06:21 GMT
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_STANDARD
DAD name: devltimetrk
PROCEDURE: time_sheet.display
URL: http://144.101.26.144:1643/pls/devlTimeTrk/time_sheet.display

From: Otto Rodriguez 18-Feb-05 21:59

Subject: Re: ORA-04020 on startup of database

Try the following:
1. Set parameters in your updated init<SID>.ora (create from spfile): AQ_TM_PROCESSES=0 and _SYSTEM_TRIG_ENABLED=FALSE
2. Rename the spfile, shutdown, and STARTUP MIGRATE
3. Run catalog.sql again
4. Comment out the parameters added in step 1
5. Rename back your spfile
6. Shutdown and STARTUP normal

From: Sara Dyer 22-Feb-05 16:34
Subject: Re: ORA-04020 on startup of database

That fixed my original problem. I put my pfile back the way it was, and now I am getting this:

ORACLE instance started.

Total System Global Area 488075536 bytes
Fixed Size 737552 bytes
Variable Size 452984832 bytes
Database Buffers 33554432 bytes
Redo Buffers 798720 bytes
Database mounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-04045: errors during recompilation/revalidation of XDB.DBMS_XDBZ0
ORA-04098: trigger 'SYS.T_ALTER_USER_B' is invalid and failed re-validation

I tried recompiling everything with utlrp.sql but received the "trigger is invalid" error, and I tried adding _system_trig_enabled=TRUE in my pfile; no help.

Thank you

Sara

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener: DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi-Threaded Server connections, you might need to increase the value of large_pool_size.

START INFORMATICA REPOSITORY

su - informat
cd informatica/repositoryserver
pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN-SOLARIS 10G CONSOLE

smc &

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on;

HOW TO CREATE DATABASE MANUALLY

A) INIT.ORA PARAMETERS

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=(/oradata2/oracle9i/admin/DWDEV/control01.ctl, /oradata2/oracle9i/admin/DWDEV/control02.ctl)
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT

C)

SQL> create database DWDEV
  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
          group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  default temporary tablespace temp
  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog.sql & catproc.sql

NO OF CPU RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on

CREATE CONSTRAINT

create table ri_primary_key_1 (
  a number,
  b number,
  c number,
  constraint pk_name primary key (a, b)
);

Alter table table_name add constraint some_name primary key (column_name1, column_name2);

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate;

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

Insert into table_name select * from table_name;
Create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database-related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

It depends how you login to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do a env | grep ORACLE and you will notice that ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database-related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login as connect as sysdba at svrmgrl
02. Startup the database if it's not already started; the database must be at least mounted
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing show parameter ifile (alternatively check the content of init.ora)
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database; if there is, remove it
14. DONE
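Step 10 can be scripted straight from the spooled list. A minimal sketch with dummy file names (a real run would feed it the /tmp/deletelist.lst produced in steps 03-09, after stripping SQL*Plus headers):

```shell
# Set up two dummy "database files" and a list naming them (illustrative paths).
rm -rf /tmp/dropdemo && mkdir -p /tmp/dropdemo
touch /tmp/dropdemo/system01.dbf /tmp/dropdemo/redo01.log
cat > /tmp/deletelist.lst <<'EOF'
/tmp/dropdemo/system01.dbf
/tmp/dropdemo/redo01.log
EOF

# Remove every file named in the list, reporting each deletion.
while read -r f; do
  [ -f "$f" ] && rm -f "$f" && echo "removed $f"
done < /tmp/deletelist.lst
```

The [ -f "$f" ] guard skips blank or stale lines, which is also a cheap safety net against a mis-spooled list.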

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. Create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too; actually, the problem can occur on any platform of the Oracle database. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Login as sys in SQL*Plus and run the following SQLs:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

Select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers only to the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME              PROPERTY_VALUE
-------------------------- ----------------------
DICT.BASE                  2
DEFAULT_TEMP_TABLESPACE    TEMP
DBTIMEZONE                 +01:00
NLS_NCHAR_CHARACTERSET     AL16UTF16
GLOBAL_DB_NAME             ARON.GENERALI.CH
EXPORT_VIEWS_VERSION       8
NLS_LANGUAGE               AMERICAN
NLS_TERRITORY              AMERICA
NLS_CURRENCY               $
NLS_ISO_CURRENCY           AMERICA
NLS_NUMERIC_CHARACTERS     .,
NLS_CHARACTERSET           WE8ISO8859P1
NLS_CALENDAR               GREGORIAN
NLS_DATE_FORMAT            DD-MON-RR
NLS_DATE_LANGUAGE          AMERICAN
NLS_SORT                   BINARY
NLS_TIME_FORMAT            HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT       DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT         HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT    DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY          $
NLS_COMP                   BINARY
NLS_LENGTH_SEMANTICS       BYTE
NLS_NCHAR_CONV_EXCP        FALSE
NLS_RDBMS_VERSION          9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check default temporary tablespace for all users of the database

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check that every user's TEMPORARY_TABLESPACE is set to the correct setting:

USERNAME   TEMPORARY_TABLESPACE ACCOUNT_STATUS
---------- -------------------- ----------------
SYS        TEMPRY               OPEN
SYSTEM     TEMP                 OPEN
OUTLN      TEMP                 OPEN
DBSNMP     TEMP                 OPEN
DBMONITOR  TEMP                 OPEN
TEST       TEMP                 OPEN
WMSYS      TEMP                 EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter it for that user (for example sys) with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them?

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message #126248 is a reply to message #126216]

Sat, 02 July 2005 00:09

Achchan
Messages: 86
Registered: June 2005

Member

Hi,
Files that have a .DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.10.137:/unixbkp /backup
508 cd /backup
509 df -k
510 cd /backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history
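The cp -rpf in step 521 recurses (-r), preserves modes, ownership and timestamps (-p), and overwrites without prompting (-f), which is what makes it usable for a backup copy. A sketch with throwaway paths showing that the mode survives the copy:

```shell
# Build a small source tree with a restrictive file mode (illustrative paths).
rm -rf /tmp/cpdemo_src /tmp/cpdemo_dst
mkdir -p /tmp/cpdemo_src/ccsystst /tmp/cpdemo_dst
echo "data" > /tmp/cpdemo_src/ccsystst/file.dat
chmod 640 /tmp/cpdemo_src/ccsystst/file.dat

# Recursive, permission-preserving, forced copy, as in step 521.
cp -rpf /tmp/cpdemo_src/ccsystst /tmp/cpdemo_dst/

ls -l /tmp/cpdemo_dst/ccsystst/file.dat   # mode and mtime match the source
```

Without -p the copy would take on the current umask instead, which matters when restoring database files that must stay readable only by the oracle owner and group.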

NOOF CPU

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837     1  0 May 24    ?     11:47 ora_pmon_poi
ora9i    2305     1  0 Mar 29    ?     23:59 ora_pmon_portal
ora9i    2321     1  0 Mar 29    ?     24:17 ora_pmon_EDMS
ora10g  17394     1  0 Apr 02    ?  1:28:57 ora_pmon_POI2
orainst 14743 14365  0 11:02:43 pts/3  0:00 grep pmon

CREATE DIRECTORY

create directory utl_dir as 'path';
grant all on directory utl

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.paddr;

Load dump to the Sybase database

Load database database_name from 'path'

Load database database_name from "compress::path"

Load database database_name from "compress::path01"
stripe on "compress::path02"

Dump database database_name to 'path'

Those scripts should run for install JVM

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should run for Uninstall JVM

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137 - Backup Report

SYBASE ndashDatabase

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure 'parameter', new_value

vgdisplay -v vg02 | grep "LV Name" | more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracleHome2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least the 9.2.0.3 version. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the groups Domain Admins and ORA_DBA. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double oracle_home? '/var/opt/oracle/oraInst.loc'

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested, b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name= and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i
export ORACLE_SID
sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

optinfoallinfo

For Hp-ux File Extend

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE - All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD - All object grants to user or public
DBA_SYS_PRIVS - System privileges granted to users and roles
DBA_ROLES - List of all roles in the database
DBA_ROLE_PRIVS - Roles granted to users and to other roles
ROLE_ROLE_PRIVS - Roles granted to other roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_PRIVS - All privileges currently available to user
SESSION_ROLES - All roles currently available to user
USER_SYS_PRIVS - System privileges granted to current user
USER_TAB_PRIV - Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
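Bracketing the copy with date gives a crude elapsed-time measurement for the transfer. A sketch with a small /dev/zero source (file name and sizes are illustrative):

```shell
date
# 2048 blocks of 1024 bytes = a 2 MB test file; stats suppressed for clarity
dd if=/dev/zero of=/tmp/dd_demo.img bs=1024 count=2048 2>/dev/null
date

ls -l /tmp/dd_demo.img   # confirm the copied size
```

Comparing the two date lines gives the wall-clock duration; for large database file copies a bigger bs (e.g. 1024k) usually speeds things up considerably.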

isainfo -v (the output shows whether the OS is 32-bit or 64-bit)

1023720911

isql -Udba -Scso_otpwSQL

Script for starting and stopping the database: /sybdata1/syb126IQ/cso_ot

Recover database;
Alter database open;

1023720469

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.paddr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the ipc protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file 'oracle' has the following permissions in $ORACLE_HOME/bin: 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using ls -l oracle; they should be -rwsr-s--x

Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete:

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near
1> select name from sysconfigures where name like 'device'
2> go

Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go
 name
--------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------ -------- ------------ ------------- ---------- ------- --------
 number of devices  10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------ -------- ------------ ------------- ---------- ------- --------
 number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the locked object name and the session holding the lock:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. STARTUP MIGRATE
4. Run the following scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. SHUTDOWN IMMEDIATE

Find the locked object and the SQL being executed:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX crontab

NAME
crontab - user crontab file

SYNOPSIS
crontab [file]
crontab -r
crontab -l

DESCRIPTION
crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)
hour (0-23)
day of the month (1-31)
month of the year (1-12)
day of the week (0-6, with 0=Sunday)
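For example, a hypothetical entry using those five fields plus the command field (the script path is an illustration, not from the man page):

```
# min  hour  day  month  weekday  command
  30   2     *    *      0        /home/oracle/scripts/weekly_backup.sh
```

This would run the script at 02:30 every Sunday.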

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME      MON USED
----------------------- --------------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER        YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges:

- Perform STARTUP and SHUTDOWN operations; CREATE SPFILE
- ALTER DATABASE OPEN/MOUNT/BACKUP
- ALTER DATABASE ARCHIVELOG
- ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
- Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8.

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.
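The DATAFILE + CONTROL FILE + REDO LOG sum can be sketched as a single query. This is only a sketch: it assumes 10g-style dictionary views (v$controlfile's block_size and file_size_blks columns do not exist in older releases), and it ignores tempfiles unless you keep the second branch:

```sql
-- Approximate total database size in MB:
-- datafiles + tempfiles + redo logs (all members) + controlfiles
SELECT ROUND(SUM(bytes)/1024/1024) AS total_mb
FROM (SELECT bytes FROM v$datafile
      UNION ALL
      SELECT bytes FROM v$tempfile
      UNION ALL
      SELECT bytes * members FROM v$log
      UNION ALL
      SELECT block_size * file_size_blks FROM v$controlfile);
```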

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation, or assigned afterwards:

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online
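To see which tablespace currently serves as the default temporary tablespace, one quick check is the DATABASE_PROPERTIES view (available from 9i onwards):

```sql
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```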

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist and inserting into it until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist and become available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief from segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something here that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
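As a sketch, those fields map onto DBA_AUDIT_TRAIL columns roughly like this (FIREID is the example user from above):

```sql
-- Recent audited actions for the FIREID user, newest first
SELECT username, terminal, timestamp, owner, obj_name, action_name
FROM dba_audit_trail
WHERE username = 'FIREID'
ORDER BY timestamp DESC;
```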

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');
EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046.

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12), and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name, Product Version: RDBMS 9i (9.2), 10g or higher
Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: platform independent
Date Created: Version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment:
Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:
To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box. Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT sofar, totalwork, units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation; slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
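As an illustration only (the value is an assumption for a 4-CPU server, not a recommendation from the note), the guidance above might translate into a parameter-file fragment like this:

```
# init.ora fragment: at most one DB writer process per CPU
db_writer_processes = 4
```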

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs (also known as SNPn)
=========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:
<Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing (also known as AQ, QMN)
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An oracle (user) process
------------------------
Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The CPU used by this session statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from CPU used by this session (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check whether your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1    Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet

Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer

browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the

Oracle HTTP Server error_log file:

Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination: home's service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination: home's service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer

5.x and 6.x.

- The client machines will have a wininet.dll with a version number of

6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at \WINNT\system32\wininet.dll

-> Right click on the file

-> Select 'Properties'

-> Click on the 'Version' tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are

resubmitted, which can occur when the HTTP server terminates the browser

client's open connections that exceeded their allowed HTTP 1.1 KeepAlive

idle time. In these cases the requests are resubmitted by the browser without

the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to

the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive

timeout by restarting the HTTP Server component after making the following

configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

#vvv Oracle Note 269980.1 vvvvvvv

#KeepAlive On

KeepAlive Off

#^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to

propagate these changes into the central configuration repository:

Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First verify that this package exists with the following query

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

Tablespace OFFLINE options:

NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance restart:

1. Login as the db2 user: su - db2inst1

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As an instance owner on the host running db2 issue the following command

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne Posts 4016 Registered 52799

Re: no of open cursor - Posted: Aug 26, 2007 10:33 PM - in response to: 174313


> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
  from v$open_cursor c
 group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
 order by 3 DESC;

Once the application has been identified (using V$SESSION), you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning

you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
  from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sys.sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
  from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file; the file's data remains on disk until every hard link to it is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
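The behavioral difference between the two link types can be demonstrated with a small self-contained script (hypothetical temp-file names, not the paths above):

```python
import os
import tempfile

# Demo of hard vs symbolic links using throwaway temp files.
d = tempfile.mkdtemp()
orig = os.path.join(d, "original.txt")
hard = os.path.join(d, "hard.txt")
soft = os.path.join(d, "soft.txt")

with open(orig, "w") as f:
    f.write("hello")

os.link(orig, hard)     # hard link: a second directory entry for the same inode
os.symlink(orig, soft)  # symbolic link: a pointer to the original name

os.remove(orig)  # delete the original name

print(open(hard).read())     # hello  (the data survives via the hard link)
print(os.path.exists(soft))  # False  (the symlink now dangles)
```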

The syntax for creating a hard link of a directory is the same, but note that most Unix filesystems do not actually permit hard links to directories (ln will refuse; use a symbolic link for directories instead). To attempt a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

To move objects to another tablespace (here xyz), generate the DDL into a spool file:

SQL> spool <urpath>objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.


How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3 Grant SYSDBA or SYSOPER to users When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4 Confirm that the user is listed in the password file

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
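As an illustration of that rule of thumb (my arithmetic only, using the 2516582400-byte pga_aggregate_target that appears in the v$pgastat output later in this section):

```python
# Illustrative arithmetic for the approximate 5% per-session PGA cap.
# The target value matches the "aggregate PGA target parameter" row of
# the v$pgastat output shown in this section; the 5% figure is the rule
# of thumb described above, not a documented formula.
pga_aggregate_target = 2_516_582_400  # bytes (2400 MB)

# Integer math avoids float rounding on the 5% multiply.
per_session_cap = pga_aggregate_target * 5 // 100

print(per_session_cap)          # 125829120 bytes
print(per_session_cap // 1024)  # 122880 (KB, the unit _smm_max_size uses)
```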

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
  from v$pgastat
 order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

        12582912        .50 ON         17250304                0        100.00              3

        18874368        .75 ON         17250304                0        100.00              3

        25165824       1.00 ON         17250304                0        100.00              0

        30198784       1.20 ON         17250304                0        100.00              0

        35231744       1.40 ON         17250304                0        100.00              0

        40264704       1.60 ON         17250304                0        100.00              0

        45297664       1.80 ON         17250304                0        100.00              0

        50331648       2.00 ON         17250304                0        100.00              0

        75497472       3.00 ON         17250304                0        100.00              0

       100663296       4.00 ON         17250304                0        100.00              0

       150994944       6.00 ON         17250304                0        100.00              0

       201326592       8.00 ON         17250304                0        100.00              0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem,

max(pga_alloc_mem) max_pga_alloc_mem,

max(pga_max_mem) max_pga_max_mem

from v$process;

This will show the maximum PGA usage per process


The following displays the sum of all current PGA usage per process:

select

sum(pga_used_mem) sum_pga_used_mem,

sum(pga_alloc_mem) sum_pga_alloc_mem,

sum(pga_max_mem) sum_pga_max_mem

from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Login into the db as sysdba

2. SQL> show parameter audit_trail  --> checks if the audit trail is turned on

if the output is

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else (to enable the audit trail):

(a) shutdown immediate
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile;
(d) startup

3. truncate table aud$;  --> to remove any audit trail data residing in the table; then SQL> audit table;  --> this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  --> this query gives you the username along with the userhost from where the user is connected.

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB

iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
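The two policies can be contrasted with a toy model (made-up days and block numbers; this illustrates the bookkeeping described above, not RMAN itself):

```python
# Toy model contrasting differential vs cumulative incremental backups.
# "changed" holds the (made-up) set of blocks modified on each day after
# a hypothetical level 0 (full) backup on Sunday.
days = ["Mon", "Tue", "Wed"]
changed = {"Mon": {1, 2}, "Tue": {3}, "Wed": {2, 4}}

# Differential incremental: blocks changed since the most recent backup
# of any kind (i.e. just that day's changes).
differential = {day: set(changed[day]) for day in days}

# Cumulative incremental: blocks changed since the last level 0 backup,
# so each day re-copies everything modified so far that week.
cumulative = {}
since_full = set()
for day in days:
    since_full |= changed[day]
    cumulative[day] = set(since_full)

print(sorted(differential["Wed"]))  # [2, 4]
print(sorted(cumulative["Wed"]))    # [1, 2, 3, 4]
```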

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> 'No space left on device' sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Subject: Re: ORA-04020 on startup of database

Try the following:

1. Set parameters in your updated init<SID>.ora (create from spfile): AQ_TM_PROCESSES=0, _SYSTEM_TRIG_ENABLED=FALSE

2. Rename spfile, shutdown and STARTUP MIGRATE
3. Run catalog.sql again
4. Comment out the parameters added in step 1
5. Rename back your spfile
6. Shutdown and STARTUP normal

From: Sara Dyer 22-Feb-05 16:34

Subject: Re: ORA-04020 on startup of database

That fixed my original problem. I put my pfile back the way it was, and now I am getting this:

ORACLE instance started.

Total System Global Area  488075536 bytes
Fixed Size                   737552 bytes
Variable Size             452984832 bytes
Database Buffers           33554432 bytes
Redo Buffers                 798720 bytes
Database mounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-04045: errors during recompilation/revalidation of XDB.DBMS_XDBZ0
ORA-04098: trigger 'SYS.T_ALTER_USER_B' is invalid and failed re-validation

I tried recompiling everything with utlrp.sql, but received the 'trigger is invalid' error, and I tried adding _system_trig_enabled and setting it to _system_trig_enabled=TRUE in my pfile; no help.

Thank you

Sara

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener:

DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi-Threaded Server connections, you might need to increase the value of large_pool_size.

START INFORMATICA REPOSITORY

su - informat
cd informatica/repositoryserver
pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN-SOLARIS 10G CONSOLE

smc &

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on;

HOW TO CREATE DATABASE MANUALLY

A) INIT.ORA PARAMETERS

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=(/oradata2/oracle9i/admin/DWDEV/control01.ctl, /oradata2/oracle9i/admin/DWDEV/control02.ctl)
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT

C)

SQL> create database DWDEV
  2  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  3  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
  4  group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  5  default temporary tablespace temp
  6  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  7  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog.sql & catproc.sql

NO. OF CPUs RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on smdest_data;

CREATE CONSTRAINT

create table ri_primary_key_1 (
a number,
b number,
c number,
constraint pk_name primary key (a, b)
);

Alter table table_name add constraint some_name primary key (column_name1, column_name2);

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

Insert into table_name select * from table_name;
Create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all database links to the database need to be removed, since they will be invalid anyway.

It depends how you login to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do a env|grep ORACLE and you will notice that ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have ORACLE_SID and ORACLE_HOME set, do it now.

Also make sure that you set ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login as "connect / as sysdba" at svrmgrl
02. Startup the database if it is not already started. The database must be at least mounted.
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively, select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing "show parameter ifile" (alternatively, check the content of the init.ora)
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database. If yes, remove it.
14. DONE
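The dictionary queries in steps 3-9 can be collected into one SQL*Plus spool script (a sketch; run it connected as SYSDBA with the database at least mounted, and the spool path matches the example above):

```sql
-- Sketch: collect every file path belonging to the database before dropping it
spool /tmp/deletelist.lst
select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;
archive log list
show parameter ifile
spool off
```

Review the spooled list carefully before deleting anything at OS level.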

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE TEMPFILE OFFLINE / ONLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

Locations of pupbld.sql:
/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION


Steps:
1. Create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any Oracle database platform. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Login as sys in SQL*Plus and run the following SQL scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode It lets you selectively load data from specified partitions or subpartitions in an export file Keep the following guidelines in mind when using partition-level import

- Import always stores the rows according to the partitioning scheme of the target table.

- Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

- If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

- Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

- Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

- If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

- The partition or subpartition name in the parameter refers to only the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

- If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

- If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

- If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

- If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQLgt alter database default temporary tablespace temp

To check default temporary tablespace for all users of the database

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check that every user's TEMPORARY_TABLESPACE is set to the correct setting:

USERNAME    TEMPORARY_TABLESPACE    ACCOUNT_STATUS
----------  ----------------------  ----------------
SYS         TEMPRY                  OPEN
SYSTEM      TEMP                    OPEN
OUTLN       TEMP                    OPEN
DBSNMP      TEMP                    OPEN
DBMONITOR   TEMP                    OPEN
TEST        TEMP                    OPEN
WMSYS       TEMP                    EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter the user (for example sys) with the correct tablespace name using the following SQL:

SQLgt alter user sys temporary tablespace temp

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQLgt drop tablespace temp including contents and datafiles

SQL> create temporary tablespace temp tempfile 'dbtemp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQLgt alter database default temporary tablespace temp

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them

It's two very large files (150-160 MB each):
/920/assistants/dbca/templates/Data_Warehouse.dfj
/920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216], Sat, 02 July 2005 00:09

Achchan (Member, registered June 2005)

Hi, files that have a .dfj extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those DB creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: initjvma.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507  mount 10.237.101.37:/unixbkp /backup
508  cd /backup
509  df -k
510  cd backup
511  ls
512  clear
513  ls
514  mkdir jpmc_bak
515  cd jpmc_bak
516  ls
517  df -k /u02
518  pwd
519  ls /u02
520  pwd
521  cp -rpf /u02/ccsystst .
522  ls -ltr
523  history

NO. OF CPUs

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i     8837      1  0    May 24  ?       11:47 ora_pmon_poi
ora9i     2305      1  0    Mar 29  ?       23:59 ora_pmon_portal
ora9i     2321      1  0    Mar 29  ?       24:17 ora_pmon_EDMS
ora10g   17394      1  0    Apr 02  ?     1:28:57 ora_pmon_POI2
orainst  14743  14365  0  11:02:43  pts/3    0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl...

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT

Any privilege for DBMS package

grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database

load database database_name from

load database database_name from "compress::path"

load database database_name from stripe on "compress::path01"
stripe on "compress::path02"

dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure "parameter", newvalue

vgdisplay -v vg02 | grep "LV Name" | more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics, including the number of rows in each table:

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session (v_sid number, v_serial number) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7, as Oracle recommends at least the 9.2.0.3 version. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run "lsnrctl start <listener_name>" without the quotes, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME? ('/var/opt/oracle' -- install location)

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type from v$locked_object a, dba_objects b where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name= and a.session_id = b.sid and status='ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i
export ORACLE_SID
sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

optinfoallinfo

For Hp-ux File Extend

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls  2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor, or owner

DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-table

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
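The line above brackets a raw copy with two timestamps so you can see how long the `dd` took. A minimal runnable sketch (the file names and sizes are arbitrary examples, not from the original note):

```shell
# Print start time, copy 1 MB from /dev/zero to a scratch file, print end time
date
dd if=/dev/zero of=/tmp/dd_test.out bs=1024 count=1024
date

# Verify the copied size in bytes (1024 * 1024 = 1048576)
wc -c < /tmp/dd_test.out
```

Subtracting the two timestamps gives the elapsed copy time; `dd` itself also reports records in/out on stderr.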

isainfo -v  (the output shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_otpwSQL

Script for starting and stopping the database: /sybdata1/syb126/IQ/cso_ot

recover database;
alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
    GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
    GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
    '/gmac/GMACDEV/data/system.dbf',
    '/gmac/GMACDEV/data/undo.dbf',
    '/gmac/GMACDEV/data/user.dbf',
    '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
  2  where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the oracle executable has the following permissions (cd $ORACLE_HOME/bin): 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute: chmod 6751 oracle
5. Check the file permissions on oracle using "ls -l oracle"; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with correct setuid
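The permission fix in step 4 can be rehearsed on a scratch file before touching the real oracle binary (the file name here is an arbitrary example):

```shell
# Create a scratch file and apply the setuid/setgid mode from the note
touch /tmp/oracle_perm_test
chmod 6751 /tmp/oracle_perm_test

# ls -l should now show the mode string -rwsr-s--x,
# i.e. setuid (s) for owner, setgid (s) for group, execute-only for others
ls -l /tmp/oracle_perm_test
```

Mode 6751 is 4000 (setuid) + 2000 (setgid) + 751 (rwx r-x --x); the setuid bit is what lets non-oracle OS users attach to the SGA through the oracle executable.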

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------You have specified the following settings

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 Minutes to complete

AM oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
AM oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
AM oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ','.

1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)

1> sp_configure "number of devices"
2> go
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 -----------------  -------  -----------  ------------  ---------  ------  -------
 number of devices  10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)

1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 -----------------  -------  -----------  ------------  ---------  ------  -------
 number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = 'gem_hist_data7',
physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
size = '1600M'
go

This Query is used to find out the object name and lock id

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine from v$locked_object a, v$session b, dba_objects c where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run the setup.exe
2. Shut down the database
3. startup migrate
4. Run the below scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shu immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)
hour (0-23)
day of the month (1-31)
month of the year (1-12)
day of the week (0-6, with 0=Sunday)

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i there was no way to identify those indexes that were not being used by SQL queries This tip describes the Oracle9i method that allows the DBA to locate and delete un-used indexes

The approach is quite simple Oracle9i has a tool that allows you to monitor index usage with an alter index command You can then query and find those indexes that are unused and drop them from the database

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called used which will be set to YES or NO Sadly this will not tell you how many times the index has been used but this tool is useful for investigating unused indexes

INDEX_NAME              TABLE_NAME  MON  USED
----------------------  ----------  ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

httpwwwrampant-bookscombook_2005_1_awr_proactive_tuninghtm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN / MOUNT / BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (Complete recovery only Any form of incomplete recovery such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional The character set name should be specified without quotes for example

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE; -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch user
2. Check the cron.deny file also

HOW TO CALCULATE THE DATABASE SIZE

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23 If you want to know about database size just calculate

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
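That sum can be approximated straight from the dictionary and v$ views (a sketch; controlfiles are comparatively tiny and are omitted here, and v$log reports each group's size once, so it is multiplied by the member count):

```sql
-- Rough database size: datafiles + tempfiles + redo log members
select (select sum(bytes) from dba_data_files)
     + (select nvl(sum(bytes), 0) from dba_temp_files)
     + (select sum(bytes * members) from v$log) as total_bytes
from dual;
```

Divide by 1024*1024 for MB, as in the dba_data_files query earlier in these notes.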

Regards, Taj (http://dbataj.blogspot.com). Jun 1 (13 hours ago): babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format

Controlfiles         ora_%u.ctl

Redo Log Files       ora_%g_%u.log

Datafiles            ora_%t_%u.dbf

Temporary Datafiles  ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this generate less redo, contention is also reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist and inserting into it until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist and made available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to blocks other than the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries; no wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist / off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the SEGMENT SPACE MANAGEMENT AUTO clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

SELECT tablespace_name, contents, extent_management, allocation_type, segment_space_management
FROM dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing
The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup
To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options
Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;
AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;
AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail
The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
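A minimal housekeeping sketch: the audit_archive table name and the 90-day retention are illustrative assumptions, not from the original note, and the timestamp column is TIMESTAMP# in 9i (later releases use NTIMESTAMP#):

```sql
-- Archive audit rows older than 90 days, then purge them (run as SYS).
CREATE TABLE audit_archive AS
  SELECT * FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;

DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```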

Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to those users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');
EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name / Version: RDBMS 9i (9.2), 10g or higher. Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: platform independent
Date Created: version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment:
Once this tool is installed (under its own schema), it is executed from SQL*Plus, from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it as the APPS user.

Access Privileges:
Installation requires connecting as a user with the SYSDBA privilege. Once installed, the tool does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information
Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1:
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus. The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box. Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving they spin (wait) until the I/O operation completes. The spinning is a CPU operation; slowness or failures in async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
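As a sketch, the relevant parameter-file entries look like the following; the values are illustrative assumptions for a small multi-CPU server, not recommendations from the note:

```
# init.ora fragment -- hypothetical values for a 4-CPU server
db_writer_processes = 2   # keep <= number of CPUs
dbwr_io_slaves = 0        # do not combine slaves with multiple writer processes
```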

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------
Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME in the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------
Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----
This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------
- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------
- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this, use Windows Explorer to locate the file at WINNT\system32\wininet.dll, right click on the file, select Properties, and click on the Version tab. (See http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details.)

Cause
-----
This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---
It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate the changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------
http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON 23 SEP 2004
QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

OFFLINE TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

OFFLINE IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
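The three OFFLINE modes described above can be sketched as follows; the users tablespace name is only an example:

```sql
ALTER TABLESPACE users OFFLINE NORMAL;     -- checkpoints all datafiles; no recovery needed
ALTER TABLESPACE users OFFLINE TEMPORARY;  -- checkpoints online datafiles only
ALTER TABLESPACE users OFFLINE IMMEDIATE;  -- no checkpoint; media recovery required later

ALTER TABLESPACE users ONLINE;             -- bring it back
```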

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance restart:
1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance: db2stop
4. Start the instance (as the instance owner on the host running db2): db2start

Dataflow error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
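A quick way to see how close sessions get to the limit is to compare the 'opened cursors current' statistic against the parameter value; a sketch:

```sql
-- Current open cursors per session versus the OPEN_CURSORS limit
select s.sid,
       s.value as open_cursors,
       (select value from v$parameter
        where name = 'open_cursors') as max_cursors
from v$sesstat s, v$statname n
where n.statistic# = s.statistic#
and n.name = 'opened cursors current'
order by s.value desc;
```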

Werner

Billy Verreynne replied (Aug 26, 2007):

> How do I resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors, i.e. application code defining ref cursors, using ref cursors, but never closing ref cursors. I've in fact never seen this not be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing you to run into it even faster.

The following SQL identifies SQL with multiple cursor handles opened for it by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL; typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, V$SYSTEM_EVENT and V$SESSION_WAIT. If you have a Statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
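The difference described above can be verified in a scratch directory; the file names here are hypothetical, not the ones from the examples:

```shell
# Deleting the original breaks a symbolic link but not a hard link.
cd "$(mktemp -d)"
echo "data" > original.txt
ln original.txt hard.txt        # hard link: second name for the same inode
ln -s original.txt soft.txt     # symbolic link: pointer by name
rm original.txt
cat hard.txt                    # still prints "data"
cat soft.txt 2>/dev/null || echo "dangling symlink"
```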

The syntax for attempting a hard link of a directory is the same, but note that most filesystems do not allow hard links to directories (use a symbolic link instead). For example, for /var/www/html to /var/www/webroot:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

>spool <yourpath>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<yourpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Then rebuild the indexes

and gather statistics for those objects
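The spool-file trick above is just dynamic SQL generation: one ALTER ... MOVE statement per row of dba_segments. The same pattern, sketched outside the database with a hypothetical segment list:

```python
# Sketch of the spool-file pattern above: build one "alter ... move"
# statement per segment. Segment names here are hypothetical examples.
segments = [("TABLE", "EMP"), ("TABLE", "DEPT")]

def move_statements(segments, target_ts="XYZ"):
    """Return the ALTER statements the spooled SELECT would generate."""
    return [
        f"alter {seg_type} {seg_name} move tablespace {target_ts};"
        for seg_type, seg_name in segments
    ]

for stmt in move_statements(segments):
    print(stmt)
# alter TABLE EMP move tablespace XYZ;
# alter TABLE DEPT move tablespace XYZ;
```

Note that this naive generation is exactly why the indexes must be rebuilt afterwards: MOVE works for tables, while indexes need ALTER INDEX ... REBUILD.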

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE)
EXECUTE dbms_session.set_sql_trace (FALSE)

- or -

EXECUTE dbms_support.start_trace
EXECUTE dbms_support.stop_trace

3 Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER'

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#)

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#)

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE)
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE)

Using orapwd to Connect Remotely as SYSDBAAugust 5 2003Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
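The "approximately 5%" rule of thumb above is easy to sanity-check arithmetically; this is only a back-of-the-envelope sketch, not the internal algorithm, which is version-dependent:

```python
# Rough per-session work-area cap implied by the ~5% rule described above.
def session_cap_kb(pga_aggregate_target_bytes, fraction=0.05):
    """Approximate per-session PGA work-area cap, in kilobytes."""
    return int(pga_aggregate_target_bytes * fraction / 1024)

# Using the aggregate PGA target from the v$pgastat output below (~2.4 GB):
print(session_cap_kb(2_516_582_400))  # → 122880
```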

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistics maximum PGA used for auto workareas and maximum PGA used for manual workareas display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET       PGA TARGET          BYTES        ESTD EXTRA  ESTD PGA      ESTD OVER
FOR EST          FACTOR     ADV      PROCESSED    BYTES RW    CACHE HIT %   ALLOC COUNT
---------------- ---------- --- ---------------- ----------- ------------- --------------
        12582912        0.5 ON          17250304           0        100.00              3
        18874368       0.75 ON          17250304           0        100.00              3
        25165824        1.0 ON          17250304           0        100.00              0
        30198784        1.2 ON          17250304           0        100.00              0
        35231744        1.4 ON          17250304           0        100.00              0
        40264704        1.6 ON          17250304           0        100.00              0
        45297664        1.8 ON          17250304           0        100.00              0
        50331648        2.0 ON          17250304           0        100.00              0
        75497472        3.0 ON          17250304           0        100.00              0
       100663296        4.0 ON          17250304           0        100.00              0
       150994944        6.0 ON          17250304           0        100.00              0
       201326592        8.0 ON          17250304           0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary
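The "pick the smallest target with no over-allocations" reading of the advice view can be sketched mechanically; the rows below are sample values in the shape of v$pga_target_advice, not live data:

```python
# Pick the smallest pga_target_for_estimate whose estimated
# over-allocation count is zero, as done by eye in the advice query above.
rows = [  # (pga_target_for_estimate, estd_overalloc_count) - sample values
    (12582912, 3), (18874368, 3), (25165824, 0), (30198784, 0),
]

def smallest_safe_target(rows):
    """Smallest advised PGA target with zero estimated over-allocations."""
    return min(t for t, overalloc in rows if overalloc == 0)

print(smallest_safe_target(rows))  # → 25165824 (the 25M row)
```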

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Login to the db as sysdba

2. SQL> show parameter audit_trail    -> checks if the audit trail is turned on

If the output is:

NAME         TYPE    VALUE
------------ ------- ------
audit_trail  string  DB

then go to step 3; else:

(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$    -> removes any audit trail data residing in the table

4. SQL> audit table    -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%'    -> this query gives you the username along with the userhost from where the user was connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but are still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
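The distinction drawn in this thread can be made concrete with a toy timeline; the "changes per day" data below is invented purely for illustration:

```python
# Toy timeline: full (level 0) backup on day 0, then changes on days 1-3.
changes = {1: {"a"}, 2: {"b"}, 3: {"c"}}

def cumulative_backup(day):
    """Cumulative incremental: everything changed since the level-0 backup."""
    s = set()
    for d in range(1, day + 1):
        s |= changes[d]
    return s

def differential_incremental_backup(day):
    """Differential incremental: only what changed since the previous backup."""
    return changes[day]

print(sorted(cumulative_backup(3)))               # → ['a', 'b', 'c']
print(sorted(differential_incremental_backup(3)))  # → ['c']
```

The trade-off from the thread falls out directly: the cumulative backup grows each day but a restore needs only the level 0 plus the latest one, while the differential incrementals stay small but a restore must replay every one of them.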

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
>> ORA-27300: OS system dependent operation:semget failed with status: 28
>> ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener:

DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi threaded server connections you might need to increase the value of large_pool_size

START INFORMATICA REPOSITORY

su - informat
cd /informatica/repositoryserver
./pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN-SOLARIS 10G CONSOLE

smc &

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on

HOW TO CREATE DATABASE MANUALLY

A)INITORA PARAMETER

spool off
$ ksh
$ set -o vi

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=(/oradata2/oracle9i/admin/DWDEV/control01.ctl, /oradata2/oracle9i/admin/DWDEV/control02.ctl)
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT

C)

SQL> create database DWDEV
  2  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  3  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
  4  group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  5  default temporary tablespace temp
  6  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  7  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog.sql & catproc.sql

NO OF CPU RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on smdest_data

CREATE CONSTRAINT

create table ri_primary_key_1 (
  a number,
  b number,
  c number,
  constraint pk_name primary key (a, b)
);

alter table table_name add constraint some_name primary key (columnname1, columnname2)

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

insert into table_name select * from table_name
create table table_name as select * from table_name

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

It depends how you login to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do a env|grep ORACLE and you will notice ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have the ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set the ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database related files from dictionaries in order to identify which files to delete. Do the following:

01. Login as connect as sysdba at svrmgrl
02. Startup the database if it's not already started. The database must be at least mounted.
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. locate the ifile by issuing show parameter ifile (alternatively, check the content of init.ora)
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry of the deleted database. If there is, remove it.
14. DONE
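Step 10 above is mechanical: each line of the spooled list becomes one delete. A small sketch of that transformation, with hypothetical file names:

```python
# Sketch of step 10: turn the spooled file list into rm commands.
# Paths below are hypothetical examples of what the spool file contains.
spooled = """/u01/oradata/db/system01.dbf
/u01/oradata/db/redo01.log
/u01/oradata/db/control01.ctl"""

rm_commands = [f"rm {path}" for path in spooled.splitlines()]
for cmd in rm_commands:
    print(cmd)
# rm /u01/oradata/db/system01.dbf
# rm /u01/oradata/db/redo01.log
# rm /u01/oradata/db/control01.ctl
```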

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

PUPBLD LOCATIONS

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. Create tablespace tablespace_name datafile 'filename.dbf' size 500M
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually the problem can occur on any platform of Oracle database. It usually happens when trying to import into a new database.

The problem occurs because the imp utility encounters errors when trying to execute some commands.

The solution to solve the problem is as following

Login as sys in SQL*Plus and run the following SQLs:

@$OH/rdbms/admin/dbmsread.sql
@$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

select grantee, granted_role from dba_role_privs

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers to only the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, then alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check the default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check if all users' TEMPORARY_TABLESPACE is set to the correct settings:

USERNAME     TEMPORARY_TABLESPACE   ACCOUNT_STATUS
------------ ---------------------- ------------------
SYS          TEMPRY                 OPEN
SYSTEM       TEMP                   OPEN
OUTLN        TEMP                   OPEN
DBSNMP       TEMP                   OPEN
DBMONITOR    TEMP                   OPEN
TEST         TEMP                   OPEN
WMSYS        TEMP                   EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter it with the correct tablespace name (for example, for user sys) with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively, recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database:

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE CONSTAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK

WHAT WILL HAPPEN IF U DELETE THOSE FILES

And what can happen if I delete them

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan
Messages: 86
Registered: June 2005

Member

Hi,
Files that have a .DFJ extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation, we have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;

BACKUP PATH

507 mount 1023710137:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst
522 ls -ltr
523 history

NO. OF CPU

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock

CHANGE TABLESPACE BLOCK SIZE ISSUE

db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

$ ps -ef | grep pmon
ora9i    8837     1  0   May 24       1:47 ora_pmon_poi
ora9i    2305     1  0   Mar 29      23:59 ora_pmon_portal
ora9i    2321     1  0   Mar 29      24:17 ora_pmon_EDMS
ora10g  17394     1  0   Apr 02    1:28:57 ora_pmon_POI2
orainst 14743 14365  0 11:02:43 pts/3 0:00 grep pmon

CREATE DIRECTORY

create directory utl_dir as 'path'
grant all on directory utl_dir

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

grant execute on dbms_stats to username

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '27229' and s.paddr = p.addr

Load dump to the Sybase database

Load database database_name from

load database database_name from "compress::path"

load database database_name from stripe on "compress::path01"

stripe on "compress::path02"

dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

1023710137 - Backup Report

SYBASE - Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm (database name)
6. sp_who
7. go
8. shutdown with nowait
9. /Sybase/syb125/ASE-12_5/install
10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb
12. sp_configure
13. sp_configure 'parameter', new_value

vgdisplay -v vg02 | grep "LV Name" | more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics, including the number of rows in each table:

exec dbms_stats.gather_database_stats()

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
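The fiddly part of that procedure is the quoting: the SID and serial# must end up inside single quotes in the final ALTER SYSTEM string. A sketch of the string it builds (names hypothetical):

```python
# The dynamic ALTER SYSTEM string built by the kill-session procedure above,
# sketched in Python to make the nested quoting visible.
def kill_session_sql(sid, serial):
    return f"ALTER SYSTEM KILL SESSION '{sid},{serial}'"

print(kill_session_sql(123, 456))  # → ALTER SYSTEM KILL SESSION '123,456'
```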

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7, as Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the Log On user of the listener and database services to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener entry (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal), as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME? (Check /var/opt/oracle -- Install.loc)

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
       b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = '<object_name>' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql, spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i; export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
EOF
done

optinfoallinfo

For HP-UX filesystem extension:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE - All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD - All object grants to user or public
DBA_TAB_PRIVS - All object grants in the database
DBA_SYS_PRIVS - System privileges granted to users and roles
DBA_ROLES - List of all roles in the database
DBA_ROLE_PRIVS - Roles granted to users and to other roles
ROLE_ROLE_PRIVS - Roles granted to other roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_PRIVS - All privileges currently available to user
SESSION_ROLES - All roles currently available to user
USER_SYS_PRIVS - System privileges granted to current user
USER_TAB_PRIVS - Grants on objects where current user is grantee, grantor, or owner

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR(df.name, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.name, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
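Bracketing dd between two date calls, as above, gives a rough elapsed-time measure for the copy. A runnable sketch, using throwaway /tmp file names (hypothetical, not from the original note):

```shell
# Time a raw copy: print the clock, copy with dd, print the clock again
printf 'some test payload' > /tmp/dd_in.dat

date
dd if=/tmp/dd_in.dat of=/tmp/dd_out.dat bs=4k 2>/dev/null
date

# The copy should be byte-identical to the source
cmp /tmp/dd_in.dat /tmp/dd_out.dat && echo "copy verified"
```

The same pattern works for device-to-device copies (e.g. if=/dev/rdsk/...), which is the usual DBA use case.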

isainfo -v - shows whether the OS is 32-bit or 64-bit

10.237.209.11

isql -Udba -Scso_ot (enter the password at the prompt to reach the SQL> prompt)

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.paddr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the oracle binary has permissions 6751 (cd $ORACLE_HOME/bin). If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute: chmod 6751 oracle
5. Check the file permissions with ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 Minutes to complete

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures where name like device
2> go
Msg 102, Level 15, State 1: Server 'ddm', Line 1: Incorrect syntax near 'device'.
1> select name from sysconfigures where name like "%device%"
2> go
name
--------------------------------------------------------------------------------
number of devices

suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
-----------------  -------  -----------  ------------  ---------  ------  -------
number of devices       10           36            60         60  number  dynamic

(1 row affected, return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
-----------------  -------  -----------  ------------  ---------  ------  -------
number of devices       10           44            70         70  number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and the SQL being run:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line.

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)

select s.machine from v$process p, v$session s where s.paddr = p.paddr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns, and this over-allocation can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple Oracle9i has a tool that allows you to monitor index usage with an alter index command You can then query and find those indexes that are unused and drop them from the database

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON  USED
----------------------  ----------  ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN / MOUNT / BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE; -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23 If you want to know about database size just calculate

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
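That sum can also be computed at the OS level once the file lists are known (in practice taken from v$datafile, v$controlfile, and v$logfile). A minimal sketch; the /tmp demo files are hypothetical stand-ins for real database files:

```shell
# Sum the sizes (in bytes) of a list of files, e.g. all datafiles,
# controlfiles, and redo log files of one database.
total_size() {
    total=0
    for f in "$@"; do
        size=$(wc -c < "$f")     # wc -c prints the byte count of the file
        total=$((total + size))
    done
    echo "$total"
}

# Demo with throwaway files instead of real database files
printf 'aaaa'   > /tmp/demo_datafile.dbf   # 4 bytes
printf 'bb'     > /tmp/demo_control.ctl    # 2 bytes
printf 'cccccc' > /tmp/demo_redo.log       # 6 bytes

total_size /tmp/demo_datafile.dbf /tmp/demo_control.ctl /tmp/demo_redo.log
```

Divide the result by 1024*1024 for MB, matching the /1024/1024 convention used in the queries elsewhere in these notes.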

Regards, Taj (http://dbataj.blogspot.com). Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF): OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement.

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format

Controlfiles         ora_%u.ctl

Redo Log Files       ora_%g_%u.log

Datafiles            ora_%t_%u.dbf

Temporary Datafiles  ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF: During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF: When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF: As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace: In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option. James F. Koopmann (Database Expert). Posted 1/12/2006. Comments (3) | Trackbacks (0)

Oracle has done it again Venture with me down what seems like a small option but in fact has major implications on what we as DBAs no longer have to manage

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved; not only is less redo generated, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means Oracle will track and manage the used and free space in data blocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS, and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against the performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, then inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with choosing a value for PCTUSED was that you had to balance performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to blocks other than the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something here that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist or off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for auto segment space management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing: The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup: To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options: Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL and DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER, and DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail: The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: Machine that the user performed the action from
- Timestamp: When the action occurred
- Object Owner: The owner of the object that was interacted with
- Object Name: The name of the object that was interacted with
- Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance: The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security: Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT', 'COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8, or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in the prior version of this tool and in the current TKPROF.

Product Name Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it as the APPS user.

Access Privileges

Installation requires connecting as a user with the SYSDBA privilege. Once installed, the tool does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr
                    and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process (Back)-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g., a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1    Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013: Failed to call destination: home's service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013: Failed to call destination: home's service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE '%JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
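The three OFFLINE options described above can be sketched as follows (the tablespace name USERS is illustrative, not from the original text):

```sql
-- NORMAL: checkpoints all data files; no media recovery needed afterwards.
-- Required when the database runs in NOARCHIVELOG mode.
ALTER TABLESPACE users OFFLINE NORMAL;

-- TEMPORARY: checkpoints only the online data files; offline files
-- may need media recovery before the tablespace comes back online.
ALTER TABLESPACE users OFFLINE TEMPORARY;

-- IMMEDIATE: no checkpoint; media recovery is required before the
-- tablespace can be brought back online.
ALTER TABLESPACE users OFFLINE IMMEDIATE;

-- Bring it back when done:
ALTER TABLESPACE users ONLINE;
```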

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1, then start bash
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
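As a quick check against this limit, a query along these lines compares each session's currently open cursors with OPEN_CURSORS (a sketch using the standard v$sesstat, v$statname, and v$parameter views; the statistic name 'opened cursors current' is the usual one):

```sql
select s.sid,
       st.value as open_cursors_now,
       (select p.value from v$parameter p
        where p.name = 'open_cursors') as open_cursors_limit
from   v$sesstat st, v$statname n, v$session s
where  n.name = 'opened cursors current'
and    st.statistic# = n.statistic#
and    st.sid = s.sid
order  by st.value desc;
```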

Werner

Billy Verreynne    Posts: 4016    Registered: 5/27/99

Re: no of open cursor    Posted: Aug 26, 2007 10:33 PM    in response to: 174313

Reply

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 4 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
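As a sketch of that step, the full statement text for a suspect session can be pulled from V$SQLTEXT by joining on address and hash_value (the :leaking_sid bind variable is a placeholder for the SID found with the query above):

```sql
select t.piece, t.sql_text
from   v$sqltext t
where  (t.address, t.hash_value) in
       (select c.address, c.hash_value
        from   v$open_cursor c
        where  c.sid = :leaking_sid
        group  by c.address, c.hash_value
        having count(*) > 2)
order  by t.address, t.piece;
```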

Nagaraj, for performance tuning

you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file that appears just like a file, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only another reference to the original file's data, not a copy of the file. If the original name is deleted, the data remains accessible through the hard link.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
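The difference between the two link types can be seen in a small demonstration (a sketch assuming a POSIX shell and a writable temporary directory):

```shell
cd "$(mktemp -d)"                 # work in a scratch directory
echo "data" > original.txt
ln original.txt hard.txt          # hard link: another name for the same inode
ln -s original.txt soft.txt       # symbolic link: a new file storing the path
ls -li original.txt hard.txt soft.txt   # original and hard share one inode number
rm original.txt
cat hard.txt                      # still prints "data": the inode survives
readlink soft.txt                 # prints "original.txt", now a dangling target
```

Note that `cat soft.txt` would now fail, since the symbolic link points to a name that no longer exists.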

Hard links to directories are not allowed on most filesystems, so to link /var/www/html to /var/www/webroot, use a symbolic link instead:

ln -s /var/www/html /var/www/webroot

>spool <yourpath>objects_move.log

>select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<yourpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

rebuild the indexes

and gather statistics for those objects


How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session (<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as 'connect internal' was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

* pga_aggregate_target
* workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                      VALUE          UNIT
----------------------------------------  -------------  -------
aggregate PGA auto target                 829440000      bytes
aggregate PGA target parameter            2516582400     bytes
bytes processed                           2492928000     bytes
cache hit percentage                      86.31          percent
extra bytes read/written                  395366400      bytes
global memory bound                       125747200      bytes
maximum PGA allocated                     2666188800     bytes
maximum PGA used for auto workareas       17203200       bytes
maximum PGA used for manual workareas     52531200       bytes
over allocation count                     0
PGA memory freed back to OS               675020800      bytes
total freeable PGA memory                 6553600        bytes
total PGA allocated                       2395750400     bytes
total PGA inuse                           1528320000     bytes
total PGA used for auto workareas         0              bytes
total PGA used for manual workareas       0              bytes

16 rows selected.

The statistic 'maximum PGA allocated' will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET       ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
   FOR EST      FACTOR  ADV  BYTES PROCESSED         BYTES RW    CACHE HIT %  ALLOC COUNT
----------  ----------  ---  ---------------  ---------------  -------------  -----------
  12582912         .50  ON          17250304                0         100.00            3
  18874368         .75  ON          17250304                0         100.00            3
  25165824        1.00  ON          17250304                0         100.00            0
  30198784        1.20  ON          17250304                0         100.00            0
  35231744        1.40  ON          17250304                0         100.00            0
  40264704        1.60  ON          17250304                0         100.00            0
  45297664        1.80  ON          17250304                0         100.00            0
  50331648        2.00  ON          17250304                0         100.00            0
  75497472        3.00  ON          17250304                0         100.00            0
 100663296        4.00  ON          17250304                0         100.00            0
 150994944        6.00  ON          17250304                0         100.00            0
 201326592        8.00  ON          17250304                0         100.00            0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
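For example, based on the advice output above, the target could be set with something like the following (the 25M value matches the discussion above; workarea_size_policy defaults to AUTO once a target is set):

```sql
ALTER SYSTEM SET pga_aggregate_target = 25M;
ALTER SYSTEM SET workarea_size_policy = AUTO;
```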

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba

2. SQL> show parameter audit_trail    -> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;    -> to remove any audit trail data residing in the table
4. audit table;    -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like '%DROP TABLE%';
   -> this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
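The trade-off between the two strategies can be sketched with invented daily change counts (ignoring blocks that change again on a later day):

```python
# Sketch comparing differential incremental (level 1) and cumulative
# incremental backups after a Sunday level 0. Block counts are invented
# sample data, and re-changed blocks are ignored for simplicity.
changed = {"Mon": 10, "Tue": 7, "Wed": 12}  # blocks changed each day
days = ["Mon", "Tue", "Wed"]

# Differential level 1: back up blocks changed since the most recent
# level 1 (or the level 0), so each backup covers roughly one day.
differential = [changed[d] for d in days]

# Cumulative level 1: back up everything changed since the level 0,
# so each backup repeats the previous days' work.
cumulative = [sum(changed[d] for d in days[: i + 1]) for i in range(len(days))]

print(differential)  # [10, 7, 12]  - smaller, but restore needs every piece
print(cumulative)    # [10, 17, 29] - bigger, but restore needs only the last
```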

If you would like to read the entire document (it's a short one) you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
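As a rough rule of thumb (not an official Oracle formula), each Oracle server process consumes a semaphore, so the system-wide pool (semmns) should comfortably exceed the PROCESSES init parameter; a small sanity check with a hypothetical PROCESSES value:

```python
# Rough sanity check: ORA-27301 "No space left on device" on semget usually
# means the system-wide semaphore pool (seminfo_semmns) is exhausted.
# Values mirror the /etc/system lines above; PROCESSES is hypothetical.
semmni, semmns, semmsl = 100, 1024, 256
oracle_processes = 300  # hypothetical PROCESSES init parameter

def enough_semaphores(semmns, processes):
    # Rule of thumb only: roughly one semaphore per Oracle process.
    return semmns >= processes

print(enough_semaphores(semmns, oracle_processes))  # True
print(enough_semaphores(100, oracle_processes))     # False
```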

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
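The difference described above can be demonstrated directly (file names here are throwaway temp files):

```python
# Demonstrates the link behavior described above: after the original file
# is deleted, a hard link still reaches the data, a symbolic link dangles.
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "orig.txt")
with open(orig, "w") as f:
    f.write("data")

hard = os.path.join(d, "hard.txt")
soft = os.path.join(d, "soft.txt")
os.link(orig, hard)      # hard link: another name for the same inode
os.symlink(orig, soft)   # symbolic link: a pointer to the path name

os.remove(orig)

print(os.path.exists(hard))  # True  - data still reachable via hard link
print(os.path.exists(soft))  # False - the symlink now points at nothing
with open(hard) as f:
    print(f.read())          # data
```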

  2  datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
  3  logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
  4  group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
  5  default temporary tablespace temp
  6  tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
  7  undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog & catproc

NO OF CPU RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on

CREATE CONSTRAINT

create table ri_primary_key_1 (
a number,
b number,
c number,
constraint pk_name primary key (a, b)
);

Alter table table_name add constraint some_name primary key (columnname1, columnname2);

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;

Insert into table_name select * from table_name;
Create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

It depends how you login to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do a env | grep ORACLE and you will notice that ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have the ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set the ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login as "connect / as sysdba" at svrmgrl
02. Startup the database if it's not already started. The database must be at least mounted.
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (This will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing show parameter ifile (alternatively, check the content of init.ora)
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database. If yes, remove it.
14. DONE

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. Create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any platform of Oracle database. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution is as follows:

Login as sys in SQL*Plus and run the following scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

Select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers only to the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, then alter it with the following command:

SQLgt alter database default temporary tablespace temp

To check default temporary tablespace for all users of the database

SQLgt select username temporary_tablespace account_status from dba_users

will return the following result; check if all users' TEMPORARY_TABLESPACE is set to the correct settings:

USERNAME   TEMPORARY_TABLESPACE  ACCOUNT_STATUS
---------- --------------------- -----------------
SYS        TEMPRY                OPEN
SYSTEM     TEMP                  OPEN
OUTLN      TEMP                  OPEN
DBSNMP     TEMP                  OPEN
DBMONITOR  TEMP                  OPEN
TEST       TEMP                  OPEN
WMSYS      TEMP                  EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter the user (for example sys) to the correct tablespace name with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan (Member)

Hi,
Files that have a DFJ extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in the future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;

BACKUP PATH

507 mount 10.237.101.37:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst
522 ls -ltr
523 history

NO. OF CPU

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837     1  0  May 24  11:47    ora_pmon_poi
ora9i    2305     1  0  Mar 29  23:59    ora_pmon_portal
ora9i    2321     1  0  Mar 29  24:17    ora_pmon_EDMS
ora10g  17394     1  0  Apr 02  1:28:57  ora_pmon_POI2
orainst 14743 14365  0 11:02:43 pts/3 0:00 grep pmon

CREATE DIRECTORY:

create directory utl_dir as 'path';
grant all on directory utl

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database

Load database database_name from 'path'

Load database database_name from "compress::path"

Load database database_name from "compress::path01"
stripe on "compress::path02"

Dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should run for Uninstall JVM

rdbmsadmincatnoexfsql

rdbmsadminrmaqjmssql

rdbmsadminrmcdcsql

xdkadminrmxmlsql

javavminstallrmjvmsql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure 'parameter', new_value

vgdisplay -v vg02 | grep "LV Name" | more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics on the number of rows in each table:

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracleHome2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes and replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double oracle_home. '/var/opt/oracle' -- install loc

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name='' and a.session_id=b.sid and status='ACTIVE' and sw.sid=b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i
export ORACLE_SID
sqlplus "/ as sysdba" << EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

optinfoallinfo

For HP-UX file extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE - All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD - All object grants to user or public
DBA_SYS_PRIVS - System privileges granted to users and roles
DBA_ROLES - List of all roles in the database
DBA_ROLE_PRIVS - Roles granted to users and to other roles
ROLE_ROLE_PRIVS - Roles granted to other roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_PRIVS - All privileges currently available to user
SESSION_ROLES - All roles currently available to user
USER_SYS_PRIVS - System privileges granted to current user
USER_TAB_PRIV - Grants on objects where current user is grantee, grantor, or owner

DBA_TAB_PRIVS
/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v (the output shows whether the OS is 32-bit or 64-bit)

1023720911

isql -Udba -Scso_otpwSQL

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

1023720469

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
'/gmac/GMACDEV/data/system.dbf',
'/gmac/GMACDEV/data/undo.dbf',
'/gmac/GMACDEV/data/user.dbf',
'/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
  2  where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts under Oracle Home in rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

102375164Softwares

My problem: When I don't use tnsnames and want to use the ipc protocol, I get the following error:
SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file oracle has the following permissions:
cd $ORACLE_HOME/bin
6751
If not:
1. Login as oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for this to complete:

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ...
1> select name from sysconfigures where name like "%device%"
2> go

Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ...
1> select name from sysconfigures where name like "%device%"
2> go
name
--------------------------------------------------------------------------------
number of devices

suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
number of devices              10          36          60           60          number               dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
number of devices              10          44          70           70          number               dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name="gem_hist_data7",
physname="/data/syb125/gem_hist/gem_hist_data7.dat",
size="1600M"
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the below scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find out the locked object and its SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
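The five fields above can be sketched as a tiny matcher. This is a simplified illustration, not real cron: it supports only "*", single numbers, and comma lists, while real crontab entries also allow ranges and steps.

```python
# Minimal sketch of how cron's five integer fields select a run time.
# Supports "*", single numbers, and comma lists only (real cron also
# allows ranges like 1-5 and steps like */15).
def field_matches(field, value):
    if field == "*":
        return True
    return value in {int(tok) for tok in field.split(",")}

def cron_matches(entry, minute, hour, dom, month, dow):
    fields = entry.split()[:5]  # minute, hour, day-of-month, month, day-of-week
    return all(field_matches(f, v)
               for f, v in zip(fields, (minute, hour, dom, month, dow)))

# "30 2 * * 0" means 02:30 every Sunday (0 = Sunday)
print(cron_matches("30 2 * * 0 /backup.sh", 30, 2, 15, 6, 0))  # True
print(cron_matches("30 2 * * 0 /backup.sh", 30, 2, 15, 6, 1))  # False
```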

select s.machine from v$process p, v$session s where s.paddr=p.addr and p.spid=17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON  USED
----------------------- ----------- ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO
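The spooled script above is just string generation over dba_indexes rows; a sketch with invented owner/index names shows what run_monitor.sql ends up containing:

```python
# Sketch of what the spool step generates: one "alter index ... monitoring
# usage;" statement per non-system index. Owner/index names are invented.
indexes = [
    ("SYS", "I_OBJ1"),
    ("SCOTT", "EMP_ENAME_IDX"),
    ("PERFSTAT", "STATS$SNAP_PK"),
]
skip = {"SYS", "SYSTEM", "PERFSTAT"}  # mirrors the NOT IN clause

stmts = [f"alter index {owner}.{name} monitoring usage;"
         for owner, name in indexes if owner not in skip]

print(stmts)  # ['alter index SCOTT.EMP_ENAME_IDX monitoring usage;']
```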

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations, CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set For instance you may find that the number of languages that need to be supported in your database have increased In most cases you will need to do a full exportimport to properly convert all data to the new character set However if and only if the new character set is a strict superset of the current character set it is possible to use the ALTER DATABASE CHARACTER SET to expedite the change in the database character set

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file.
2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>')
AND segment_name = UPPER('<table_name>');

(The *2048 assumes a 2K database block size.) You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>')
AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj (http://dbataj.blogspot.com)

Jun 1 (13 hours ago): Babu is correct, but analyze the indexes also. If you want to know the actual used space, use DBA_EXTENTS instead of DBA_SEGMENTS.
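That sum can be sketched as a single query against the standard V$ views. A hedged sketch: the control file arithmetic assumes the BLOCK_SIZE and FILE_SIZE_BLKS columns of V$CONTROLFILE (9i and later), and you may want to add V$TEMPFILE if you count temp files as well:

```sql
SELECT (SELECT SUM(bytes) FROM v$datafile)                          -- datafile size
     + (SELECT SUM(bytes * members) FROM v$log)                     -- redo log file size
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile) -- control file size
       AS total_db_bytes
FROM dual;
```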

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and number of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1
  DEFAULT TEMPORARY TABLESPACE dts1
  TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online
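To verify which tablespace currently serves as the database default temporary tablespace, you can query DATABASE_PROPERTIES:

```sql
-- shows the current default temporary tablespace (9i and later)
SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';
```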

Hope this helps. Regards, Tim.

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks, using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist or off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS, which allows you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
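A minimal sketch of that delete/archive maintenance (the AUDIT_ARCHIVE table name is an arbitrary choice; run as a DBA and adapt the retention logic to your own policy):

```sql
-- Archive the current audit records into a holding table, then trim the live trail.
CREATE TABLE audit_archive AS SELECT * FROM sys.aud$;

DELETE FROM sys.aud$;
COMMIT;
```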

2. sqlplus "/ as sysdba"   -- HP-UX/AIX

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046.

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher. Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: Platform independent
Date Created: Version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus, from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1:

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM:

   GRANT SELECT ON dba_indexes TO <schema_name>;
   GRANT SELECT ON dba_ind_columns TO <schema_name>;
   GRANT SELECT ON dba_objects TO <schema_name>;
   GRANT SELECT ON dba_tables TO <schema_name>;
   GRANT SELECT ON dba_temp_files TO <schema_name>;
   GRANT SELECT ON dba_users TO <schema_name>;
   GRANT SELECT ON v_$instance TO <schema_name>;
   GRANT SELECT ON v_$latchname TO <schema_name>;
   GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An oracle (user) process (Back)
-------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g., a value of 22 means 0.22 seconds, in 8i.

Other statistics can be found via the CONSUMED_CPU_TIME column of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 - STOP/START AN INSTANCE

1. Log in as the db2 user: su - db2inst1 (bash)
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As an instance owner on the host running db2 issue the following command

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Posts: 4016, Registered: 5/27/99

Re: no. of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 4 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj, for performance tuning:

You may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, V$SYSTEM_EVENT & V$SESSION_WAIT.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db' from backup sybasectsintcocso6csoasecso_ot
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sys.sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
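The practical difference between the two link types shows up when the original name is removed: the hard link keeps the data alive, while a symbolic link is left dangling. A quick shell sketch, using throwaway paths under /tmp rather than the real files above:

```shell
#!/bin/sh
# Demonstrate hard link vs symbolic link behavior with throwaway files.
set -e
d=/tmp/link_demo
rm -rf "$d" && mkdir -p "$d"

echo "hello" > "$d/orig"
ln "$d/orig" "$d/hard"       # hard link: a second name for the same inode
ln -s "$d/orig" "$d/soft"    # symbolic link: a pointer to the name "orig"

rm "$d/orig"                 # remove the original name

cat "$d/hard"                # data survives via the hard link
cat "$d/soft" 2>/dev/null || echo "dangling symlink"
```

Running this prints the file contents from the hard link, then reports the symlink as dangling, since the name it points to is gone.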

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note: most UNIX filesystems do not actually permit hard links to directories, so expect this form to fail.)

MOVE OBJECTS TO ANOTHER TABLESPACE

If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<osuser>';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
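As a rough worked example of that 5% rule of thumb (the 2400 MB pga_aggregate_target below is an assumed figure, not a recommendation):

```shell
#!/bin/sh
# Back-of-the-envelope per-session PGA ceiling under the ~5% rule of thumb.
# The pga_aggregate_target value is hypothetical; substitute your own.
target=$((2400 * 1024 * 1024))          # 2400 MB in bytes
per_session=$((target * 5 / 100))       # ~5% of the aggregate target
echo "per-session cap: $per_session bytes ($((per_session / 1024)) KB)"
```

For comparison, the "global memory bound" statistic in v$pgastat shows the work-area size limit Oracle actually computed for the instance.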

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR   ADV    BYTES PROCESSED  BYTES RW       CACHE HIT %    ALLOC COUNT
----------- ------- --- ---------------- ---------------- ------------- --------------
   12582912     0.5 ON          17250304                0        100.00              3
   18874368    0.75 ON          17250304                0        100.00              3
   25165824     1.0 ON          17250304                0        100.00              0
   30198784     1.2 ON          17250304                0        100.00              0
   35231744     1.4 ON          17250304                0        100.00              0
   40264704     1.6 ON          17250304                0        100.00              0
   45297664     1.8 ON          17250304                0        100.00              0
   50331648     2.0 ON          17250304                0        100.00              0
   75497472     3.0 ON          17250304                0        100.00              0
  100663296     4.0 ON          17250304                0        100.00              0
  150994944     6.0 ON          17250304                0        100.00              0
  201326592     8.0 ON          17250304                0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA, this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

The following displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail  (checks if the audit trail is turned on)

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
audit_trail                          string      DB

then go to step 3; else:

(a) shutdown immediate  (to enable the audit trail)
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$  (to remove any audit trail data residing in the table)

4. SQL> audit table  (this starts auditing events pertaining to tables)

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  (this query gives you the username, along with the userhost from which that username is connected)

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp            1000MB
iq_system_main  2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.
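The bookkeeping difference between the two strategies can be mimicked with plain files and find -newer. This is a toy sketch with throwaway paths, not RMAN: marker files stand in for the backup timestamps.

```shell
#!/bin/sh
# Toy illustration of differential vs incremental file selection by mtime.
set -e
d=/tmp/bkp_demo
rm -rf "$d" && mkdir -p "$d"

touch "$d/a" "$d/b" "$d/c"
touch "$d/FULL_MARK"     # marker: time of the full backup (Sunday)
sleep 1
echo x >> "$d/a"         # Monday: file a changes
touch "$d/INCR_MARK"     # marker: time of Monday's incremental backup
sleep 1
echo x >> "$d/b"         # Tuesday: file b changes

# Differential (cumulative): everything changed since the FULL backup -> a and b
diff_count=$(find "$d" -type f ! -name '*_MARK' -newer "$d/FULL_MARK" | wc -l)
# Incremental (differential incremental): changed since the LAST backup -> only b
incr_count=$(find "$d" -type f ! -name '*_MARK' -newer "$d/INCR_MARK" | wc -l)

echo "differential picks $diff_count files, incremental picks $incr_count"
```

The differential pass keeps re-collecting Monday's change every day until the next full backup, which is exactly why it grows over time while the incremental stays small.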

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
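A quick way to sanity-check proposed values like these is against the database's PROCESSES parameter; Oracle's installation guides size SEMMNS to cover at least the PROCESSES values of all instances combined. The numbers below are assumptions for illustration, not recommendations:

```shell
#!/bin/sh
# Compare a proposed SEMMNS value against an Oracle PROCESSES setting.
# Both figures are hypothetical; read yours from init.ora and /etc/system.
processes=200    # init.ora PROCESSES (assumed)
semmns=1024      # proposed semsys:seminfo_semmns value from above
if [ "$semmns" -ge "$processes" ]; then
    echo "OK: semmns=$semmns covers processes=$processes"
else
    echo "Increase semmns: need at least $processes"
fi
```

With multiple instances on one host, sum the PROCESSES values before comparing, since the semaphore pool is system-wide.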


DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, there are a few things that need to be taken care of. First, all the database related files (e.g. .dbf, .ctl, .rdo, .arc) need to be deleted. Then the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

It depends how you log in to the oracle account in Unix; you should have the environment set for the user oracle. To confirm that the environment variables are set, do a env|grep ORACLE and you will notice that ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have the ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set the ORACLE_SID and ORACLE_HOME correctly, else you will end up deleting another database. Next you will have to query all the database related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login as connect as sysdba at svrmgrl.
02. Startup the database if it's not already started. The database must at least be mounted.
03. spool /tmp/deletelist.lst
04. select name from v$datafile; (this will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing show parameter ifile (alternatively, check the content of init.ora).
09. spool off
10. Delete at OS level the files listed in /tmp/deletelist.lst.
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin).
12. Remove all database links referring to the deleted database.
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database. If yes, remove it.
14. DONE
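The file-deletion step (10) can be scripted once the spool file exists. A minimal sketch, with throwaway demo files standing in for real datafiles (the spool file holds one path per line, as produced by the steps above):

```shell
#!/bin/sh
# Delete every file listed in the spool output, one path per line.
# Demo setup: throwaway files so the loop can be exercised safely.
list=/tmp/deletelist.lst
rm -rf /tmp/dropdb_demo && mkdir -p /tmp/dropdb_demo
touch /tmp/dropdb_demo/system01.dbf /tmp/dropdb_demo/redo01.log
printf '%s\n' /tmp/dropdb_demo/system01.dbf /tmp/dropdb_demo/redo01.log > "$list"

# The actual deletion loop: skip anything that is not a regular file.
while IFS= read -r f; do
    [ -f "$f" ] && rm -f "$f"
done < "$list"
```

In practice, review the spool file by eye first and strip SQL*Plus headers and "rows selected" footers from it before feeding it to the loop.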

SQL> select DAY_OF_WEEK, count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9idata/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9idata/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

PUPBLD LOCATIONS

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Statspack Installation

Steps:
1. Create tablespace: create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the script at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too; actually, the problem can occur on any platform of Oracle database. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Log in as sys in SQL*Plus and run the following scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers only to the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARON.GENERALI.CH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check the default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check if every user's TEMPORARY_TABLESPACE is set to the correct settings:

USERNAME    TEMPORARY_TABLESPACE   ACCOUNT_STATUS
----------- ---------------------- ------------------
SYS         TEMPRY                 OPEN
SYSTEM      TEMP                   OPEN
OUTLN       TEMP                   OPEN
DBSNMP      TEMP                   OPEN
DBMONITOR   TEMP                   OPEN
TEST        TEMP                   OPEN
WMSYS       TEMP                   EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter it with the correct tablespace name (for example, for sys) with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF U DELETE THOSE FILES

And what can happen if I delete them?

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan, Messages: 86, Registered: June 2005

Member

Hi, files that have a DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: javavm/install/initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;

BACKUP PATH

507 mount 10.237.10.137:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v   (Solaris; note that isainfo -v reports the 32/64-bit instruction set, while psrinfo lists the processors)

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837     1  0  May 24   11:47    ora_pmon_poi
ora9i    2305     1  0  Mar 29   23:59    ora_pmon_portal
ora9i    2321     1  0  Mar 29   24:17    ora_pmon_EDMS
ora10g  17394     1  0  Apr 02   1:28:57  ora_pmon_POI2
orainst 14743 14365  0  11:02:43 pts/3  0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl_dir

Modify the given parameter

utl_file_dir

If any timeout request, set the sqlnet.ora parameter:

SQLNET.INBOUND_CONNECT_TIMEOUT

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load a dump into the Sybase database:

load database database_name from ...

load database database_name from "compress::path"

load database database_name from stripe_on "compress::path01"

stripe on "compress::path02"

dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by

running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure 'parameter', new_value

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

backup: sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics on the number of rows in each table:

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
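A hedged usage sketch for the procedure above (the SID and SERIAL# values here are made up; look up a real pair in v$session first):

```sql
-- Hypothetical call: kill the session with SID 123, SERIAL# 45
exec sess1.kill_session(123, 45);
```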

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run "lsnrctl start <listener_name>" without the quotes, replacing <listener_name> with the listener's name
- An OS error of 1060 will be seen (normal), as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME? /var/opt/oracle -- Install.loc

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = '...' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i; export ORACLE_SID
sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

/opt/info/allinfo

For HP-UX filesystem extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIVS       Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS        All object grants in the database

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
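As a minimal sketch of the dd-between-dates timing pattern above (the file names are illustrative temporaries, not from the original notes):

```shell
# Create a small source file, copy it with dd, and bracket the copy with date
printf 'hello world\n' > /tmp/dd_src.txt
date
dd if=/tmp/dd_src.txt of=/tmp/dd_dst.txt bs=4096 2>/dev/null
date
# Verify the copy is byte-identical
cmp -s /tmp/dd_src.txt /tmp/dd_dst.txt && echo "copy OK"
```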

isainfo -v -- output shows whether the OS is 32-bit or 64-bit

10.237.209.11

isql -Udba -Scso_ot

Script for starting and stopping the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:
SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem
=======================
Make sure the file "oracle" has the following permissions (cd $ORACLE_HOME/bin): 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
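The format string above can be checked deterministically by pinning date to the Unix epoch (GNU date's -u and -d @0 options are an assumption about the platform; HP-UX date lacks -d):

```shell
# Same %m/%d/%y and %H:%M:%S format sequences, evaluated at the epoch in UTC
date -u -d @0 '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
# prints:
# DATE: 01/01/70
# TIME: 00:00:00
```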

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete.

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go

Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go
name

--------------------------------------------------------------------------------
number of devices

suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
------------------ -------- ------------ ------------- ---------- ------- --------
number of devices  10       36           60            60         number  dynamic

(1 row affected) (return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
------------------ -------- ------------ ------------- ---------- ------- --------
number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to 70 increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This query is used to find the object name and lock ID:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run the setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and the SQL text:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line.

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6, with 0=Sunday)
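For example, a crontab entry using the five fields above (the script path is hypothetical):

```
# min  hour  day-of-month  month  day-of-week  command
30     2     *             *      0            /home/oracle/scripts/weekly_backup.sh
```

This runs the script at 02:30 every Sunday.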

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify the indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query to find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------  ----------  --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

httpwwwrampant-bookscombook_2005_1_awr_proactive_tuninghtm

SYSOPER privileges:

Perform STARTUP and SHUTDOWN operations; CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege.

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set   New Character Set   New Character Set is strict superset?
US7ASCII                WE8ISO8859P1        yes
US7ASCII                AL24UTFFSS          yes
US7ASCII                UTF8                yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user file. 2. Check the cron.deny file also.

How to calculate the database size

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
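That sum can be sketched in a single query over the standard dictionary views (the v$controlfile BLOCK_SIZE and FILE_SIZE_BLKS columns are a 10g assumption; on 9i, size the controlfiles at OS level instead):

```sql
-- Datafiles + tempfiles + redo logs + controlfiles, in MB
select ( (select nvl(sum(bytes), 0) from dba_data_files)
       + (select nvl(sum(bytes), 0) from dba_temp_files)
       + (select nvl(sum(bytes), 0) from v$log)
       + (select nvl(sum(block_size * file_size_blks), 0) from v$controlfile)
       ) / 1024 / 1024 as total_mb
from dual;
```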

Regards, Taj (http://dbataj.blogspot.com). Jun 1: Babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF): OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where u is a unique 8-digit code, g is the logfile group number, and t is the tablespace name.

File Type            Format

Controlfiles         ora_u.ctl

Redo Log Files       ora_g_u.log

Datafiles            ora_t_u.dbf

Temporary Datafiles  ora_t_u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF: During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF: When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF: As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace: In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option. James F. Koopmann (Database Expert). Posted 1122006.

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to blocks other than the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries, no wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for auto segment space management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
'/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, specify auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

Select DBTIMEZONE from dual; -- used to determine the time zone of a database

Auditing: The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup: To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options: Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail: The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance: The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
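A minimal sketch of that maintenance, assuming a 90-day retention and the 9i DATE column TIMESTAMP# in AUD$ (the archive table name is made up; on 10gR2 and later the column is NTIMESTAMP#):

```sql
-- One-time: create an empty archive table with the same shape as AUD$
create table sys.aud$_archive as select * from sys.aud$ where 1 = 0;

-- Periodically: archive, then trim, rows older than 90 days
insert into sys.aud$_archive select * from sys.aud$ where timestamp# < sysdate - 90;
delete from sys.aud$ where timestamp# < sysdate - 90;
commit;
```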

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT', 'COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename in udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

C:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

C:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K (AIX: reports whether the kernel is 32- or 64-bit)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If lowering this parameter does help the contention on your processors, but you take an overall performance hit afterwards, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs (also known as SNPn)
=========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing (also known as AQ, QMN)
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note 215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
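The arithmetic behind that query: each hex character of the raw address encodes 4 bits, so a 16-character address means a 64-bit binary and an 8-character address a 32-bit one. A quick check of the multiplication (the sample addresses below are invented):

```shell
# Each hex digit is 4 bits, so length(addr) * 4 gives the word size.
printf '%s\n' 0000000012345678 12345678 |
awk '{ print length($0) * 4 "-bits" }'
# prints: 64-bits, then 32-bits
```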

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used
Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates browser clients' open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   ### vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   ### ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

- offline NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.
- TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.
- IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user:
   su - db2inst1
   bash
2. Go to the sqllib directory:
   cd sqllib
3. Stop the instance:
   $ db2stop
4. Start the instance. As the instance owner on the host running db2, issue the following command:
   $ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (posts: 4016, registered: 5/27/99)

Re: no. of open cursors, posted Aug 26, 2007 10:33 PM in response to 174313:

> how to resolve this if no. of open cursors exceeds the value given in init.ora?

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors and using ref cursors, but never closing them.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing you to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application
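The same threshold check can also be applied offline to spooled output of that query; a small sketch (the column order SID ADDRESS HASH_VALUE COUNT and the sample rows below are invented):

```shell
# Flag sessions holding more than 2 open cursor handles for the same SQL
# from a spooled file of "SID ADDRESS HASH_VALUE COUNT" rows.
cat <<'EOF' > open_cursors.txt
31 00000000862A11D8 4139184264 147
17 00000000862A11D8 4139184264 2
EOF

awk '$4 > 2 { print "possible leak: sid " $1 ", hash " $3 ", copies " $4 }' open_cursors.txt
# prints: possible leak: sid 31, hash 4139184264, copies 147
```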

Nagaraj, for performance tuning you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$session_wait & v$system_event

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network and, once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database /sybdata1/syb126/IQ/cso_ot/cso_ot.db
from /backup/sybase/ctsintcocso6/csoase/cso_ot
rename IQ_SYSTEM_MAIN to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq
rename IQ_SYSTEM_MAIN1 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq
rename IQ_SYSTEM_MAIN2 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq
rename IQ_SYSTEM_TEMP to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp
rename IQ_SYSTEM_TEMP1 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file that appears just like an ordinary file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only another reference to the original file, not a copy of it, and the file's data remains accessible through the hard link even if the original name is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
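The difference between the two link types can be seen in a throwaway directory (the paths below are scratch examples, not the ones above):

```shell
# Symbolic vs hard link: removing the original name breaks the symlink
# but not the hard link, because the hard link shares the same inode.
set -e
dir=$(mktemp -d)
cd "$dir"

echo "hello" > original.txt
ln -s original.txt soft.txt   # symlink: points at the *name*
ln original.txt hard.txt      # hard link: a second name for the inode

rm original.txt
cat hard.txt                  # data still reachable via the hard link
test -e soft.txt || echo "soft.txt now dangles"

cd /
rm -rf "$dir"
```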

Note that hard links to directories are not permitted on most Unix filesystems, so an attempt such as:

ln /var/www/html /var/www/webroot

will normally fail with "hard link not allowed for directory"; use a symbolic link instead.

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <your_path>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;'
     from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SQL> SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.
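Outside SQL*Plus, the same statement generation can be sketched in the shell, e.g. from a spooled list of "SEGMENT_TYPE SEGMENT_NAME" pairs (the segment names below are placeholders):

```shell
# Build ALTER ... MOVE statements from a plain segment list. MOVE applies
# to tables; indexes are handled by the separate rebuild step instead.
cat <<'EOF' > segments.txt
TABLE EMP
TABLE DEPT
EOF

awk '{ printf "alter %s %s move tablespace XYZ;\n", tolower($1), $2 }' segments.txt
# prints: alter table EMP move tablespace XYZ;
#         alter table DEPT move tablespace XYZ;
```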

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
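As a rough illustration of that ~5% ceiling (the 2,516,582,400-byte pga_aggregate_target is taken from the v$pgastat listing later in these notes; the exact behavior of _smm_max_size is undocumented and version-dependent):

```shell
# 5% of a 2,516,582,400-byte pga_aggregate_target: the approximate
# per-session working limit under automatic PGA management.
echo 2516582400 | awk '{ printf "%d bytes (~%d MB)\n", $1 * 0.05, $1 * 0.05 / 1048576 }'
# prints: 125829120 bytes (~120 MB)
```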

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings, and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED        ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
   FOR EST     FACTOR ADV BYTES PROCESSED         BYTES RW     CACHE HIT    ALLOC COUNT
---------- ---------- --- --------------- ---------------- ------------- --------------
  12582912        .50 ON         17250304                0        100.00              3
  18874368        .75 ON         17250304                0        100.00              3
  25165824       1.00 ON         17250304                0        100.00              0
  30198784       1.20 ON         17250304                0        100.00              0
  35231744       1.40 ON         17250304                0        100.00              0
  40264704       1.60 ON         17250304                0        100.00              0
  45297664       1.80 ON         17250304                0        100.00              0
  50331648       2.00 ON         17250304                0        100.00              0
  75497472       3.00 ON         17250304                0        100.00              0
 100663296       4.00 ON         17250304                0        100.00              0
 150994944       6.00 ON         17250304                0        100.00              0
 201326592       8.00 ON         17250304                0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA target would have caused Oracle to allocate more memory than specified on 3 occasions; with a 25M target, this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of current PGA usage across all processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
2(a) shutdown immediate              [to enable the audit trail]
 (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
 (c) create spfile from pfile
 (d) startup

3. truncate table aud$;    --> removes any audit trail data residing in the table
   SQL> audit table;       --> this starts auditing events pertaining to tables

4. select action_name, username, userhost,
          to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';
   --> this query gives you the username along with the userhost from which the user was connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
  iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
  message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
  temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
  iq page size 65536

system
temp             1000 MB
iq_system_main   2000 MB
iq_system_main2  1000 MB
iq_system_main3  5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups. 1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm Suraj
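The restore-chain difference between the two flavors can be sketched in a few lines of Python (a hypothetical helper for illustration, not an RMAN tool): a restore from differential incrementals needs the level 0 plus every later level 1, while a restore from cumulative incrementals needs the level 0 plus only the latest level 1.

```python
def restore_chain(history, mode):
    """history: backup levels in chronological order, e.g. [0, 1, 1, 1].
    Returns the indices of the backups needed for a restore at the end."""
    # most recent level 0 backup is the base of the chain
    base = max(i for i, lvl in enumerate(history) if lvl == 0)
    incrementals = [i for i in range(base + 1, len(history)) if history[i] == 1]
    if mode == "differential":
        return [base] + incrementals        # base plus every later level 1
    if mode == "cumulative":
        return [base] + incrementals[-1:]   # base plus only the latest level 1
    raise ValueError(mode)

week = [0, 1, 1, 1]  # full backup Sunday, incrementals Mon-Wed
print(restore_chain(week, "differential"))  # [0, 1, 2, 3]
print(restore_chain(week, "cumulative"))    # [0, 3]
```

This is why cumulative backups take more space and time to create, but make the restore shorter.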

RE: Incremental RMAN Backups. I tried to explain things in a very simple way; I am not able to find anything I am missing.

If there is, please let me know.

ORA-27154: post/wait create failed >> ORA-27300: OS system dependent operation: semget failed with status: 28 >> ORA-27301: OS failure message: No space left on device

> No space left on device sounds quite clear to me. > Maybe the disk where you want to create the database is full. Another > point could be insufficient swap space, but I would expect another error > message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory, and appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file that appears just like an ordinary file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of it; if the original name is deleted, the data remains accessible through the hard link until the last link is removed.
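The difference is easy to see with a quick shell experiment (the file names here are illustrative):

```shell
set -e
cd "$(mktemp -d)"
echo "hello" > original.txt
ln -s original.txt soft.txt    # symbolic link: a pointer to the name
ln original.txt hard.txt       # hard link: another name for the same inode
rm original.txt                # delete the original name
cat hard.txt                   # data survives via the hard link
cat soft.txt 2>/dev/null || echo "dangling"   # symlink now points at a missing name
```

The hard link still prints the file contents after the original name is removed, while the symbolic link is left dangling.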

ALTER THE TEMP FILE OFFLINE/ONLINE

alter database tempfile '/oradata2/rating9idata/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9idata/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

pupbld locations:
/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/816/bin/pupbld
/u01/app/oracle/product/816/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION


Steps:
1. Create tablespace: create tablespace tablespace_name datafile 'filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the script at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream;

IMP UTILITY

connected to ORACLE

The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any platform of Oracle database. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution to the problem is as follows:

Log in as sys in SQL*Plus and run the following scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

Select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers only to the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results look for DEFAULT_TEMP_TABLESPACE for the setting

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If default temporary tablespace is wrong the alter it with the following command

SQL> alter database default temporary tablespace temp;

To check default temporary tablespace for all users of the database

SQL> select username, temporary_tablespace, account_status from dba_users;

will return the following result check if all users TEMPORARY_TABLESPACE is set to correct settings

USERNAME   TEMPORARY_TABLESPACE  ACCOUNT_STATUS
---------- --------------------- ------------------
SYS        TEMPRY                OPEN
SYSTEM     TEMP                  OPEN
OUTLN      TEMP                  OPEN
DBSNMP     TEMP                  OPEN
DBMONITOR  TEMP                  OPEN
TEST       TEMP                  OPEN
WMSYS      TEMP                  EXPIRED & LOCKED

If wrong temporary tablespace is found alter it with the correct tablespace name (for example sys) with the following SQL

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan (Member, Messages: 86, Registered: June 2005)

Hi. Files that have a DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation

We have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.101.37:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837      1  0   May 24        11:47 ora_pmon_poi
ora9i    2305      1  0   Mar 29        23:59 ora_pmon_portal
ora9i    2321      1  0   Mar 29        24:17 ora_pmon_EDMS
ora10g  17394      1  0   Apr 02      1:28:57 ora_pmon_POI2
orainst 14743  14365  0 11:02:43 pts/3  0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database

Load database database_name from 'path'

Load database database_name from "compress::path"

Load database database_name from stripe_on "compress::path01"

stripe on "compress::path02"

Dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by

running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure 'parameter', newvalue

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session (v_sid number, v_serial number) as
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' "Log On" user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes and replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double oracle_home? /var/opt/oracle/--Install.loc

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,

b.logon_time, 'SESSION WAIT', sw.* From dba_ddl_locks a, v$session b, v$session_wait sw Where name= and a.session_id = b.sid and status='ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i; export ORACLE_SID
  sqlplus "/ as sysdba" <<
select sum(bytes)/1024/1024 from dba_data_files;
exit
done

/opt/info/allinfo

For HP-UX filesystem extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes/1024/1024 allocated_mb,
       ((df.bytes/1024/1024) - NVL(SUM(dfs.bytes)/1024/1024, 0)) used_mb,
       NVL(SUM(dfs.bytes)/1024/1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v    (output shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_ot -Ppw

For starting and stopping the database: script /sybdata1/syb126IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the ipc protocol, I get the following error:
SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file 'oracle' has the following permissions: 6751 (cd $ORACLE_HOME/bin). If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x
Start up the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete:

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1: Server 'ddm', Line 1: Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go

 name
 --------------------------------------------------------------------------------
 number of devices

 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------ -------- ------------ ------------- ---------- ------- --------
 number of devices  10       36           60            60         number  dynamic

(1 row affected, return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------ -------- ------------ ------------- ---------- ------- --------
 number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
  name = 'gem_hist_data7',
  physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
  size = '1600M'
go

This query is used to find the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run the setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
  and c.sid = b.session_id
  and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line.

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
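The way those five fields gate a job can be sketched with a tiny matcher in Python (a hypothetical illustration; real cron also supports ranges like 1-5 and steps like */15, which this sketch omits):

```python
def field_matches(pattern, value):
    """Match one crontab field: '*' means any value; otherwise a
    comma-separated list of integers, e.g. '0,30'."""
    if pattern == "*":
        return True
    return value in {int(tok) for tok in pattern.split(",")}

def cron_due(entry, minute, hour, day, month, weekday):
    """entry: the first five crontab fields as a string, e.g. '0 23 * * 0'."""
    fields = entry.split()
    return all(field_matches(p, v) for p, v in
               zip(fields, (minute, hour, day, month, weekday)))

# '0 23 * * 0' means 23:00 every Sunday (0 = Sunday)
print(cron_due("0 23 * * 0", 0, 23, 14, 4, 0))  # True
print(cron_due("0 23 * * 0", 0, 23, 14, 4, 3))  # False (a Wednesday)
```

cron runs the command in the sixth field whenever all five patterns match the current time.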

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify the indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query to find the indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME      MON USED
----------------------- --------------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER        YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations, CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set For instance you may find that the number of languages that need to be supported in your database have increased In most cases you will need to do a full exportimport to properly convert all data to the new character set However if and only if the new character set is a strict superset of the current character set it is possible to use the ALTER DATABASE CHARACTER SET to expedite the change in the database character set

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  Strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file. 2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM   DBA_SEGMENTS
WHERE  OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM   DBA_TABLES
WHERE  OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
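That sum can be produced in one statement; a sketch, noting that the v$controlfile size columns (block_size, file_size_blks) only exist from 10g onward, so on 9i the controlfile term must be taken from the OS file sizes instead:

```sql
SELECT (SELECT SUM(bytes) FROM v$datafile)              -- datafiles
     + (SELECT SUM(block_size * file_size_blks)
        FROM   v$controlfile)                           -- controlfiles (10g+)
     + (SELECT SUM(bytes * members) FROM v$log)         -- redo log files
       AS total_bytes
FROM dual;
```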

Regards, Taj (http://dbataj.blogspot.com) — Jun 1 (13 hours ago): babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online
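To see which tablespace is currently the database default, and to override it for one user, something like the following works (SCOTT is just an example user; dts2 is the tablespace from the example above):

```sql
-- Current database-wide default temporary tablespace
SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';

-- A per-user assignment still takes precedence over the database default
ALTER USER scott TEMPORARY TABLESPACE dts2;
```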

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option — James F. Koopmann (Database Expert)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications on what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this generate less redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out of the box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from   dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
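One possible maintenance sketch (the archive table name and the 90-day window are arbitrary choices for the example; test on a non-production system first, since AUD$ column names vary between releases):

```sql
-- Archive audit rows older than 90 days, then purge them from AUD$.
CREATE TABLE system.aud_archive AS
  SELECT * FROM dba_audit_trail WHERE timestamp < SYSDATE - 90;

DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```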

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in the current TKPROF.

Product Name Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created Version 243 on May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema) it is executed from SQL*Plus from

the schema owning the transaction that generated the raw SQL Trace

For example if used on an Oracle Applications instance execute using the APPS user

Access Privileges

To install it requires connection as a user with SYSDBA privilege

Once installed it does not require special privileges and it can be executed from

any schema user

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd c:\oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error — reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where  HASH_VALUE = (select s.sql_hash_value
                     from   v$process p, v$session s
                     where  s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from   v$statname n, v$sesstat s
where  n.STATISTIC# = s.STATISTIC#
and    name like 'session%memory%'
order  by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from   v$session s, v$process p
where  p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM   v$sql, v$session_longops
WHERE  sql_address = address
AND    sql_hash_value = hash_value
ORDER  BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation, so slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
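To compare the current writer configuration against the CPU count the note talks about, a quick look at v$parameter is enough:

```sql
-- db_writer_processes should generally be <= cpu_count
SELECT name, value
FROM   v$parameter
WHERE  name IN ('cpu_count', 'db_writer_processes', 'dbwr_io_slaves');
```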

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU, when combined with replication

ltBug1559103gt QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds, in 8i.
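As a quick sanity check on that arithmetic, the centisecond-to-second conversion can be sketched in a few lines of Python (the function name is invented for the example):

```python
# "CPU used by this session" is reported in centiseconds (1/100ths of a
# second), so divide the raw statistic value by 100 to get seconds.

def cpu_stat_to_seconds(centiseconds):
    """Convert an Oracle CPU statistic value to seconds."""
    return centiseconds / 100.0

# A raw value of 22 is 0.22 seconds of CPU time.
print(cpu_stat_to_seconds(22))
```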

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from   v$sesstat ss, v$session se
where  ss.statistic# in (select statistic#
                         from   v$statname
                         where  name = 'CPU used by this session')
and    se.sid = ss.sid
and    ss.sid > 6
order  by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from   v$session_wait w, v$session s, v$process p, v$sqlarea q
where  s.paddr = p.addr
and    s.sid = &p
and    s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1 Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013: Failed to call destination home's service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013: Failed to call destination home's service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
clients' open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM   dba_objects
WHERE  object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status
FROM   user_objects
WHERE  object_type LIKE '%JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2:

1. Login as the db2 user: su - db2inst1

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start an instance:

As an instance owner on the host running db2 issue the following command

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (posts: 4016, registered: 5/27/99)

Re: no of open cursors, posted Aug 26, 2007, 10:33 PM, in response to 174313:

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors and using ref cursors, but never closing them.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application
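The grouping logic of that query can be sketched outside the database. This is a hypothetical Python illustration (the session/SQL handle data is invented for the demo; it is not Oracle code), showing the same GROUP BY ... HAVING COUNT(*) > 2 idea used against V$OPEN_CURSOR:

```python
from collections import Counter

# Hypothetical sample of open cursor handles as (sid, sql_hash) pairs.
# Session 101 has leaked five handles for the very same statement.
open_cursors = [
    (101, 0xAB12), (101, 0xAB12), (101, 0xAB12), (101, 0xAB12), (101, 0xAB12),
    (102, 0xAB12), (102, 0xCD34),
]

# GROUP BY sid, hash_value HAVING COUNT(*) > 2, as in the V$OPEN_CURSOR query.
copies = Counter(open_cursors)
leaks = {key: n for key, n in copies.items() if n > 2}

print(leaks)  # flags session 101 with 5 copies of the same cursor
```

A session showing up here with dozens of handles for one statement is the cursor-leak signature described above.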

Nagaraj, for performance tuning:

you may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, v$system_waits & v$system_events

if you have a statspack report generated, then you can have a look at the timed events

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database /sybdata1/syb126/IQ/cso_ot/cso_ot.db from backup
rename IQ_SYSTEM_MAIN to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq
rename IQ_SYSTEM_MAIN1 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq
rename IQ_SYSTEM_MAIN2 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq
rename IQ_SYSTEM_TEMP to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp
rename IQ_SYSTEM_TEMP1 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp
select * from sys.sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot
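The behavioral difference described above can be demonstrated with a small, self-contained sketch (illustrative only; it uses throwaway temp files, not the paths from the examples):

```python
import os
import tempfile

# A hard link still resolves after the original name is removed,
# while a symbolic link is left dangling.
d = tempfile.mkdtemp()
orig = os.path.join(d, "original")
with open(orig, "w") as f:
    f.write("important data")

os.link(orig, os.path.join(d, "hardlink"))    # hard link: a second name for the same inode
os.symlink(orig, os.path.join(d, "symlink"))  # symbolic link: a pointer to the old name

os.remove(orig)

hard_contents = open(os.path.join(d, "hardlink")).read()
symlink_ok = os.path.exists(os.path.join(d, "symlink"))  # follows the link target

print(hard_contents)  # the data survives via the hard link
print(symlink_ok)     # False: the symlink now points at nothing
```

This is exactly why deleting the original of a symlinked file loses nothing for hard-link holders but leaves symlink users with a dangling pointer.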


if u want to move all the objects to another tablespace, just do the following:

> spool <ur_path>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

the result of the query will be stored in the spool file objects_move.log

> @<ur_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

rebuild the indexes

and gather statistics for those objects

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in sqlplus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
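That 5% figure is easy to sanity-check with arithmetic. This is an illustrative sketch only (the target value is taken from the v$pgastat example later in this section; the real per-session limit is version-dependent and governed by _smm_max_size):

```python
# pga_aggregate_target from the v$pgastat example output (bytes).
pga_aggregate_target = 2516582400

# Approximate per-session cap: ~5% of the aggregate target.
session_cap = int(pga_aggregate_target * 0.05)

print(session_cap // (1024 * 1024), "MB")  # about 120 MB per session
```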

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET        ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST      FACTOR   ADV  BYTES PROCESSED  BYTES RW         CACHE HIT %    ALLOC COUNT
-----------  -------  ---  ---------------  ---------------  -------------  --------------
   12582912     0.50  ON          17250304                0         100.00               3
   18874368     0.75  ON          17250304                0         100.00               3
   25165824     1.00  ON          17250304                0         100.00               0
   30198784     1.20  ON          17250304                0         100.00               0
   35231744     1.40  ON          17250304                0         100.00               0
   40264704     1.60  ON          17250304                0         100.00               0
   45297664     1.80  ON          17250304                0         100.00               0
   50331648     2.00  ON          17250304                0         100.00               0
   75497472     3.00  ON          17250304                0         100.00               0
  100663296     4.00  ON          17250304                0         100.00               0
  150994944     6.00  ON          17250304                0         100.00               0
  201326592     8.00  ON          17250304                0         100.00               0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.
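That selection rule (smallest candidate target whose estimated over-allocation count is zero) can be sketched as a few lines of Python. This is a toy illustration with the advice rows hard-coded from the output above, not a query against the database:

```python
# (target_bytes, estd_overalloc_count) pairs from the v$pga_target_advice output.
advice = [
    (12582912, 3),   # 12M: would have over-allocated 3 times
    (18874368, 3),   # 18M: would have over-allocated 3 times
    (25165824, 0),   # 24M: never exceeded
    (30198784, 0),
]

# Smallest candidate target that Oracle estimates would never be exceeded.
candidates = [target for target, overalloc in advice if overalloc == 0]
best = min(candidates)

print(best // (1024 * 1024), "MB")  # 24 MB
```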

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

The following displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1. login into the db as sysdba

2. sql> show parameter audit_trail  - checks if the audit trail is turned on

if the output is:

NAME         TYPE    VALUE
------------ ------- ------
audit_trail  string  DB

then go to step 3; else:

2(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$  - removes any audit trail data residing in the table

4. sql> audit table  - this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%'  - this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
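A toy model makes the two definitions concrete. This is a hypothetical sketch (invented day-by-day changed-block sets after a Sunday level 0 backup, with a backup taken every day), not RMAN itself:

```python
# Blocks changed on each day after the Sunday level 0 backup (toy data).
changed = {"Mon": {1, 2}, "Tue": {2, 3}, "Wed": {4}}
days = ["Mon", "Tue", "Wed"]

def differential_incremental(day):
    # Backs up blocks changed since the LAST backup of any level
    # (in this model a backup is taken every day).
    return changed[day]

def cumulative_incremental(day):
    # Backs up all blocks changed since the Sunday level 0 backup.
    blocks = set()
    for d in days[: days.index(day) + 1]:
        blocks |= changed[d]
    return blocks

print(differential_incremental("Wed"))  # just Wednesday's changes: {4}
print(cumulative_incremental("Wed"))    # everything since Sunday: {1, 2, 3, 4}
```

The cumulative set is always a superset of the differential set for the same day, which is exactly the space-versus-restore-time trade-off described above.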

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups: I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above sql scripts, retry the import. The error should disappear.

select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level Import:

• Import always stores the rows according to the partitioning scheme of the target table.

• Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

• If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

• Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

• Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

• If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

• The partition or subpartition name in the parameter refers to only the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

• If ROWS=y (default) and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.

• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

• If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select to_char(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQLgt select property_name property_value from database_properties

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------------
DICT.BASE                2
DEFAULT_TEMP_TABLESPACE  TEMP
DBTIMEZONE               +01:00
NLS_NCHAR_CHARACTERSET   AL16UTF16
GLOBAL_DB_NAME           ARONGENERALICH
EXPORT_VIEWS_VERSION     8
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CHARACTERSET         WE8ISO8859P1
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-RR
NLS_DATE_LANGUAGE        AMERICAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY        $
NLS_COMP                 BINARY
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE
NLS_RDBMS_VERSION        9.2.0.6.0

If default temporary tablespace is wrong the alter it with the following command

SQL> alter database default temporary tablespace temp;

To check default temporary tablespace for all users of the database

SQL> select username, temporary_tablespace, account_status from dba_users;

will return the following result; check if all users' TEMPORARY_TABLESPACE is set to correct settings:

USERNAME   TEMPORARY_TABLESPACE  ACCOUNT_STATUS
---------- --------------------- -----------------
SYS        TEMPRY                OPEN
SYSTEM     TEMP                  OPEN
OUTLN      TEMP                  OPEN
DBSNMP     TEMP                  OPEN
DBMONITOR  TEMP                  OPEN
TEST       TEMP                  OPEN
WMSYS      TEMP                  EXPIRED & LOCKED

If wrong temporary tablespace is found alter it with the correct tablespace name (for example sys) with the following SQL

SQL> alter user sys temporary tablespace temp;

Alternatively recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile 'dbtemp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF U DELETE THOSE FILES

And what can happen if I delete them

It's two very large files (150-160 MB each):
9.2.0/assistants/dbca/templates/Data_Warehouse.dfj
9.2.0/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan (Member, messages: 86, registered: June 2005):

Hi. Files that have a DFJ extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation, we have to run this script:

initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select dbms_metadata.get_ddl('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE LOGOFF ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl
  VALUES (sys_context('userenv', 'session_user'), SYSDATE);
END;

BACKUP PATH

507  mount 10.237.10.137:/unixbkp /backup
508  cd /backup
509  df -k
510  cd backup
511  ls
512  clear
513  ls
514  mkdir jpmc_bak
515  cd jpmc_bak
516  ls
517  df -k /u02
518  pwd
519  ls /u02
520  pwd
521  cp -rpf /u02/ccsystst
522  ls -ltr
523  history

NOOF CPU

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837      1  0  May 24     11:47  ora_pmon_poi
ora9i    2305      1  0  Mar 29     23:59  ora_pmon_portal
ora9i    2321      1  0  Mar 29     24:17  ora_pmon_EDMS
ora10g  17394      1  0  Apr 02   1:28:57  ora_pmon_POI2
orainst 14743  14365  0  11:02:43  pts/3  0:00  grep pmon

CREATE DIRECTORY:

create directory utl_dir as 'path';
grant all on directory utl_dir ...

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database

load database database_name from 'path'

load database database_name from "compress::path"

load database database_name from stripe_on "compress::path01"
stripe on "compress::path02"

dump database database_name to 'path'

Those scripts should run for install JVM

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted resolve any invalid objects by

running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should run for Uninstall JVM

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. cd /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN-gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure "parameter", <new value>

vgdisplay -v vg02 | grep "LV Name" | more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session (v_sid number, v_serial number) as
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimted on tablespace_name

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener entry (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the actual name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double ORACLE_HOME (check '/var/opt/oracle/oraInst.loc').

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type from v$locked_object a, dba_objects b where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = '<object_name>' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql / spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i; export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

/opt/info/allinfo

For HP-UX filesystem extension:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls  2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS        All object grants in the database

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.name, 1, 70) file_name,
       df.bytes/1024/1024 allocated_mb,
       ((df.bytes/1024/1024) - NVL(SUM(dfs.bytes)/1024/1024, 0)) used_mb,
       NVL(SUM(dfs.bytes)/1024/1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.name, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table;

purge index name-of-index;

purge recyclebin;

purge dba_recyclebin;

purge tablespace name-of-tablespace;

purge tablespace name-of-tablespace user name-of-user;

date; dd if=<input file> of=<output file>; date

isainfo -v   (shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_ot

Script for starting and stopping the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
 where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the oracle executable has permissions 6751:
cd $ORACLE_HOME/bin
If not:
1. Log in as the oracle user
2. Shut down (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute: chmod 6751 oracle
5. Check the file permissions using ls -l oracle; they should be -rwsr-s--x
Start up the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete:

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1: Server 'ddm', Line 1: Incorrect syntax near ...

1> select name from sysconfigures where name like "%device%"
2> go
 name
--------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)

1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit     Type
 ------------------------------ ----------- ----------- ------------ ----------- -------- --------
 number of devices              10          36          60           60          number   dynamic

(1 row affected)
(return status = 0)

1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit     Type
 ------------------------------ ----------- ----------- ------------ ----------- -------- --------
 number of devices              10          44          70           70          number   dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
  name = 'gem_hist_data7',
  physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
  size = '1600M'
go

This query is used to find the object name and lock ID:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and the SQL being run:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
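Putting the five fields together with the sixth (the command), an example entry looks like this; the script path is illustrative, not from the source:

```
# min  hour  day-of-month  month  day-of-week  command
30     2     *             *      0            /u01/scripts/weekly_backup.sh
```

This runs the script at 02:30 every Sunday.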

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO
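Once monitoring has confirmed an index is unused, the final step the tip describes can be sketched as follows (the index name is the one from the sample output above; verify with the application owners before dropping anything):

```sql
-- Index reported USED = NO in v$object_usage after a representative workload
DROP INDEX customer_last_name_idx;
```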

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations; CREATE SPFILE

ALTER DATABASE OPEN / MOUNT / BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set   New Character Set   New Character Set is strict superset?
US7ASCII                WE8ISO8859P1        yes
US7ASCII                AL24UTFFSS          yes
US7ASCII                UTF8                yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb" FROM DBA_SEGMENTS WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb" FROM DBA_TABLES

WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
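That sum can be computed in a single query; this is a sketch against the v$ views (controlfile size is block_size * file_size_blks; columns available from 9i onward):

```sql
SELECT (SELECT SUM(bytes) FROM v$datafile) +
       (SELECT SUM(block_size * file_size_blks) FROM v$controlfile) +
       (SELECT SUM(bytes * members) FROM v$log) AS total_db_bytes
FROM dual;
```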

Regards, Taj (http://dbataj.blogspot.com). Jun 1 (13 hours ago): Babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where u is a unique 8-digit code, g is the logfile group number, and t is the tablespace name:

File Type             Format

Controlfiles          ora_u.ctl

Redo Log Files        ora_g_u.log

Datafiles             ora_t_u.dbf

Temporary Datafiles   ora_t_u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert). Posted 1/12/2006. Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option but in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with setting a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for, with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

Select DBTIMEZONE from dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
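A minimal maintenance sketch, assuming a 90-day retention window; the window, the archive table name, and the use of the 9i-era timestamp# column are illustrative assumptions, not from the source:

```sql
-- Archive audit records older than 90 days, then purge them.
-- Assumption: timestamp# (9i AUD$ layout); later releases use ntimestamp#.
CREATE TABLE audit_archive AS
  SELECT * FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;

DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```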

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba"   (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by current TKPROF.

Product Name / Product Version: RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and a substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONSTXT to install the product

As the instructions are not the clearest the following is what I did to install TraceAnalyzer so that it would be owned by the SYSTEM schema

1 Created a directory named INSTALL2 Unzipped TRCAzip into the INSTALL directory3 Created a directory under $ORACLE_HOME named TraceAnalyzer4 Moved the sql files from the INSTALL to the TraceAnalyzer directory5 Logged onto Oracle as SYS

conn as sysdba

6 Performed the following grants to SYSTEM

GRANT SELECT ON dba_indexes TO ltschema_namegtGRANT SELECT ON dba_ind_columns TO ltschema_namegtGRANT SELECT ON dba_objects TO ltschema_namegtGRANT SELECT ON dba_tables TO ltschema_namegtGRANT SELECT ON dba_temp_files TO ltschema_namegtGRANT SELECT ON dba_users TO ltschema_namegtGRANT SELECT ON v_$instance TO ltschema_namegtGRANT SELECT ON v_$latchname TO ltschema_namegtGRANT SELECT ON v_$parameter TO ltschema_namegt

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

C:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

C:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08 AM:

I've got an answer which is working, for info: the problem was with /var/tmp/.oracle. This directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:
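As a rough illustration of the sizing guideline above (db_writer_processes no higher than the CPU count), the sketch below derives a candidate value from the host's online CPU count; the cap of 8 is an arbitrary illustrative choice, not an Oracle recommendation:

```shell
# Sketch only: pick a starting db_writer_processes value from the CPU count.
# The cap of 8 is made up for illustration; tune against the real workload.
cpus=$(getconf _NPROCESSORS_ONLN)
dbwr=$cpus
if [ "$dbwr" -gt 8 ]; then dbwr=8; fi
echo "db_writer_processes candidate: $dbwr (cpus: $cpus)"
```

The point is only the ceiling: whatever value you试 experiment with, keep it at or below the processor count.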

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g., a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
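At the OS level (assuming a GNU/Linux userland where getconf is available), a similar check of the environment's word length can be done with:

```shell
# Report the word size (32 or 64 bits) of the running environment.
bits=$(getconf LONG_BIT)
echo "${bits}-bits"
```

This tells you about the OS environment, not the Oracle binary itself, so it complements rather than replaces the query above.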

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stopping the instance:

$ db2stop

4. Starting the instance:

As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Re: no of open cursor (posted Aug 26, 2007, in response to 174313):

> how to resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
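The GROUP BY / HAVING logic of that query can be mimicked offline. The sketch below uses made-up data (not v$open_cursor): it groups (sid, address, hash_value) tuples and flags combinations with more than 2 handles, just as the SQL above does:

```shell
# Toy stand-in for the v$open_cursor query: count cursor handles per
# (sid, address, hash_value) key and report groups with more than 2 copies.
cat > cursors.txt <<'EOF'
101 0xA 111
101 0xA 111
101 0xA 111
102 0xB 222
EOF
leaks=$(awk '{k=$1" "$2" "$3; c[k]++} END {for (k in c) if (c[k] > 2) print k, c[k]}' cursors.txt)
echo "$leaks"
```

Session 101 holds three handles for the same cursor and is reported; session 102, with a single handle, is not.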

Nagaraj, for performance tuning:

you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$session_wait & v$system_event.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters which specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
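Both link types can be tried safely in a scratch directory (this sketch assumes GNU coreutils for stat -c):

```shell
# Demonstrate a symbolic link and a hard link side by side.
set -e
d=$(mktemp -d)
echo "hello" > "$d/stuff"
ln -s "$d/stuff" "$d/slink"   # symlink: stores the target path
ln "$d/stuff" "$d/hlink"      # hard link: a second name for the same inode
target=$(readlink "$d/slink")
same_inode=no
[ "$(stat -c %i "$d/stuff")" = "$(stat -c %i "$d/hlink")" ] && same_inode=yes
echo "target=$target same_inode=$same_inode"
```

Deleting the original leaves the hard link's data intact (the inode survives while a name remains), whereas the symlink would dangle, which is the practical difference described above.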

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.
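The spool trick above just generates DDL text from a catalog query. The sketch below mimics it offline with a made-up segment list instead of dba_segments (note: for indexes the real statement would be ALTER INDEX ... REBUILD TABLESPACE, a wrinkle the generated text glosses over, as does the original query):

```shell
# Build "move tablespace" statements from a fake dba_segments extract.
cat > segments.txt <<'EOF'
TABLE EMP
INDEX EMP_IDX
EOF
stmts=$(awk '{print "alter " tolower($1) " " $2 " move tablespace XYZ;"}' segments.txt)
echo "$stmts"
```

The generated script is then run back against the database, exactly as the spooled objects_move.log is run with @.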

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as 'connect internal' was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
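The ~5% figure is easy to sanity-check. Taking the pga_aggregate_target of 2516582400 bytes shown in the v$pgastat listing later in these notes, one twentieth of it comes out close to the "global memory bound" (125747200 bytes) reported there:

```shell
# 5% of a ~2.4 GB pga_aggregate_target, in bytes.
pga_target=2516582400
per_session=$((pga_target / 20))   # 5% = 1/20
echo "$per_session"
```

The small gap between this arithmetic and the reported bound is expected; the instance adjusts the bound dynamically rather than holding it to an exact 5%.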

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some more good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected.

The statistic 'maximum PGA allocated' will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a 'drop table' command in a database:

1. Log into the db as sysdba

2. SQL> show parameter audit_trail    -> checks if the audit trail is turned on

if the output is

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

2(a) shutdown immediate    [to enable the audit trail]
 (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
 (c) create spfile from pfile
 (d) startup

3. truncate table aud$    -> to remove any audit trail data residing in the table
4. SQL> audit table    -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    -> this query gives you the username, along with the userhost from where that user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things to you in a very simple way. I am not able to find anything I am missing; if there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
  • Guidelines for Using Partition-Level Import
  • Oracle Managed Files (OMF)
  • Managing Controlfiles Using OMF
  • Managing Redo Log Files Using OMF
  • Managing Tablespaces Using OMF
  • Default Temporary Tablespace
  • Auditing (Server Setup, Audit Options, View Audit Trail, Maintenance, Security)
  • Oracle 10g Linux TNS-12546 error

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

  • Oracle 9i Automatic PGA Memory Management
  • v$pgastat
  • v$pga_target_advice
  • v$process

• If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the export file into a nonpartitioned table on the import target system.

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARONGENERALICH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH:MI:SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH:MI:SSXFF AM
NLS_TIME_TZ_FORMAT             HH:MI:SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH:MI:SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check the default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check that every user's TEMPORARY_TABLESPACE is set correctly:

USERNAME   TEMPORARY_TABLESPACE  ACCOUNT_STATUS
---------- --------------------- -----------------
SYS        TEMPRY                OPEN
SYSTEM     TEMP                  OPEN
OUTLN      TEMP                  OPEN
DBSNMP     TEMP                  OPEN
DBMONITOR  TEMP                  OPEN
TEST       TEMP                  OPEN
WMSYS      TEMP                  EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter it with the correct tablespace name (for example, for user sys) with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively, recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database:

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them?

It's two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan
Messages: 86
Registered: June 2005

Member

Hi. Files that have a DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation we have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.10.137:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK A USER ACCOUNT IN ORACLE

Alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837     1  0  May 24      11:47  ora_pmon_poi
ora9i    2305     1  0  Mar 29      23:59  ora_pmon_portal
ora9i    2321     1  0  Mar 29      24:17  ora_pmon_EDMS
ora10g  17394     1  0  Apr 02    1:28:57  ora_pmon_POI2
orainst 14743 14365  0  11:02:43  pts/3  0:00  grep pmon

CREATE DIRECTORY:

create directory utl_dir as 'path';
grant all on directory utl

Modify the given parameter:

utl_file_dir

If any timeout request:

sqlnet.inbound_connect_timeout

Any privilege for a DBMS package:

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.paddr;

Load a dump into the Sybase database:

load database database_name from 'path'

load database database_name from 'compress::path'

load database database_name from stripe_on 'compress::path01'
stripe on 'compress::path02'

dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql
/opt/oracle10g/xdk/admin/initxml.sql
/opt/oracle10g/xdk/admin/xmlja.sql
/opt/oracle10g/rdbms/admin/catjava.sql
/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql
rdbms/admin/rmaqjms.sql
rdbms/admin/rmcdc.sql
xdk/admin/rmxml.sql
javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137: Backup Report

SYBASE Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm (database name)
6. sp_who
7. go
8. shutdown with nowait
9. /Sybase/syb125/ASE-12-5/install
10. startserver -f RUN-gsms
    online database gem_curr
11. sp_helpdb
12. sp_configure
13. sp_configure 'parameter', newvalue

vgdisplay -v vg02 | grep LV Name |more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics on the number of rows in each table:

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double oracle_home? ('/var/opt/oracle' -- Install.loc)

zfs set quota=10G datapoolzfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name= and a.session_id=b.sid and status='ACTIVE' and sw.sid=b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i
  export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done
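The per-SID loop pattern above can be sketched with throwaway values; "A B C" are placeholder SID names and an echo stands in for the sqlplus here-document, since no real databases are assumed here:

```shell
#!/bin/sh
# Sketch of the per-ORACLE_SID loop: export each SID in turn, then run a
# command against it. The echo replaces the real sqlplus call.
out=/tmp/sid_loop_demo.txt
: > "$out"
for i in A B C; do
  ORACLE_SID=$i
  export ORACLE_SID
  # Real script: sqlplus "/ as sysdba" <<EOF ... EOF
  echo "checked $ORACLE_SID" >> "$out"
done
cat "$out"
```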

optinfoallinfo

For HP-UX filesystem extension:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls  2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS       All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE  All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD  All object grants to user or public
DBA_SYS_PRIVS       System privileges granted to users and roles
DBA_ROLES           List of all roles in the database
DBA_ROLE_PRIVS      Roles granted to users and to other roles
ROLE_ROLE_PRIVS     Roles granted to other roles
ROLE_SYS_PRIVS      System privileges granted to roles
ROLE_TAB_PRIVS      Table privileges granted to roles
SESSION_PRIVS       All privileges currently available to user
SESSION_ROLES       All roles currently available to user
USER_SYS_PRIVS      System privileges granted to current user
USER_TAB_PRIV       Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
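The date / dd / date pattern above times a copy by printing a timestamp before and after. A safe, self-contained version using a small throwaway file (the /tmp paths are placeholders):

```shell
#!/bin/sh
# Time a dd copy by bracketing it with date, per the note above.
src=/tmp/dd_demo_src
dst=/tmp/dd_demo_dst
dd if=/dev/zero of="$src" bs=1024 count=16 2>/dev/null   # make a 16 KB source
date
dd if="$src" of="$dst" bs=1024 2>/dev/null               # the timed copy
date
ls -l "$src" "$dst"                                      # both should be 16384 bytes
```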

isainfo -v   (the output shows whether the OS is 32-bit or 64-bit)
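isainfo is Solaris-specific; on Linux a rough equivalent (an assumption, not from the original note) is getconf:

```shell
# Prints 32 or 64 depending on the word size of the userland build;
# a Linux-side stand-in for Solaris "isainfo -v".
getconf LONG_BIT
```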

10.237.209.11

isql -Udba -Scso_ot -Ppw
SQL>

Script for starting and stopping the database: /sybdata1/syb126IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.paddr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
 where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts under $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64: Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the oracle executable (in $ORACLE_HOME/bin) has permissions 6751. If not:
1. Log in as the oracle user
2. Shut down (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x

Start up the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"
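Run against a fixed timestamp (GNU date's -u and -d flags assumed) so the output is reproducible:

```shell
# The %n in the format string is a newline, so one command prints both the
# DATE: and TIME: lines. Epoch 0 in UTC gives a fixed, checkable result.
date -u -d @0 +"DATE: %m/%d/%y%nTIME: %H:%M:%S"
# prints:
# DATE: 01/01/70
# TIME: 00:00:00
```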

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 minutes for it to complete:

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go
 name
--------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 -----------------  -------  -----------  ------------  ---------  ------  --------
 number of devices       10           36            60         60  number  dynamic

(1 row affected, return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 -----------------  -------  -----------  ------------  ---------  ------  --------
 number of devices       10           44            70         70  number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and the SQL query holding the lock:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
  and c.sid = b.session_id
  and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.
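The allow/deny decision described above can be sketched as a small shell function; the file locations here are temporary stand-ins for /usr/lib/cron/cron.allow and cron.deny:

```shell
#!/bin/sh
# Sketch of crontab's permission check: the allow file wins if present,
# else the deny file is consulted, else only root may submit jobs.
allow=/tmp/cron.allow.demo
deny=/tmp/cron.deny.demo
rm -f "$allow" "$deny"

cron_permitted() {                 # returns 0 if user $1 may use crontab
  if [ -f "$allow" ]; then
    grep -qx "$1" "$allow"         # allow exists: user must be listed
  elif [ -f "$deny" ]; then
    ! grep -qx "$1" "$deny"        # deny exists: user must not be listed
  else
    [ "$1" = "root" ]              # neither file: only root
  fi
}

echo alice > "$allow"
cron_permitted alice && echo "alice: permitted"
cron_permitted bob   || echo "bob: denied"
```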

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
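Putting the five fields together, a sample entry (the backup script path is hypothetical):

```shell
# minute hour day-of-month month day-of-week command
entry='30 2 * * 0 /usr/local/bin/full_backup.sh'   # 02:30 every Sunday
# Show which positional field is which:
echo "$entry" | awk '{print "minute="$1, "hour="$2, "dom="$3, "month="$4, "dow="$5}'
```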

select s.machine from v$process p, v$session s where s.paddr = p.paddr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges:

- Perform STARTUP and SHUTDOWN operations
- CREATE SPFILE
- ALTER DATABASE OPEN/MOUNT/BACKUP
- ALTER DATABASE ARCHIVELOG
- ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
- Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if, and only if, each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. touch the user file
2. check the cron.deny file also

How to calculate the database size:

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
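The arithmetic above can be sketched against throwaway files; in a real database the three sizes would come from v$datafile, v$controlfile, and v$log, and the file names below are invented:

```shell
#!/bin/sh
# Database size = datafiles + controlfiles + redo logs, demonstrated by
# summing the byte sizes of three stand-in files (100 + 10 + 50 KB).
d=/tmp/dbsize_demo
mkdir -p "$d"
dd if=/dev/zero of="$d/system.dbf"  bs=1024 count=100 2>/dev/null   # "datafile"
dd if=/dev/zero of="$d/control.ctl" bs=1024 count=10  2>/dev/null   # "controlfile"
dd if=/dev/zero of="$d/redo01.log"  bs=1024 count=50  2>/dev/null   # "redo log"
total=$(stat -c %s "$d"/* | awk '{s += $1} END {print s}')
echo "total bytes: $total"     # 160 KB = 163840 bytes
```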

Regards, Taj. http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF). OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF. During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF. When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at the operating system level.

Managing Tablespaces Using OMF. As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace. In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1
  DEFAULT TEMPORARY TABLESPACE dts1
  TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
By James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, access to the data dictionary is relieved. Not only does this generate no redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in data blocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to blocks other than the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you will find something that you like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures
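As a sketch of that migration (the tablespace, table and index names below are hypothetical, not from the article):

```sql
-- Create a locally managed ASSM tablespace, then move segments into it.
CREATE TABLESPACE assm_ts
  DATAFILE '/oradata/mysid/assm_ts01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

ALTER TABLE emp MOVE TABLESPACE assm_ts;        -- moves the table segment
ALTER INDEX emp_pk REBUILD TABLESPACE assm_ts;  -- indexes are invalidated by the move and must be rebuilt
```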

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten much better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
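As a sketch, the collected records for the audited user could then be reviewed with a query such as the following (the column choices are illustrative):

```sql
-- Review audit records for the audited user, most recent first.
SELECT username, terminal, timestamp,
       owner, obj_name, action_name
FROM   dba_audit_trail
WHERE  username = 'FIREID'
ORDER  BY timestamp DESC;
```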

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name / Product Version: RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 2.4.3 on May 2007

Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and a substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1:
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select s.sid, n.name, s.value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes, may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own, they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process (Back)-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds, in 8i.

Other statistics can be found via the CONSUMED_CPU_TIME column of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;
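Statements like the above need not be typed by hand; a generator query along these lines (a sketch) lists every autoextensible datafile and emits the matching command:

```sql
-- Generate ALTER ... AUTOEXTEND OFF commands for all autoextensible files.
SELECT 'alter database datafile ''' || file_name || ''' autoextend off;'
FROM   dba_data_files
WHERE  autoextensible = 'YES';
```

Spool the output to a file and run it, as in the spool/move-tablespace examples elsewhere in these notes.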

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used
Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this, use Windows Explorer to locate the file at WINNT\system32\wininet.dll:

  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user: su - db2inst1 (bash)
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
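To judge whether OPEN_CURSORS is actually close to being exhausted, comparing the per-session high-water mark against the parameter can help (a sketch using standard v$ views):

```sql
-- Highest number of cursors any session currently holds open, vs. the limit.
SELECT MAX(a.value) AS highest_open_cur,
       p.value      AS max_open_cur
FROM   v$sesstat a, v$statname b, v$parameter p
WHERE  a.statistic# = b.statistic#
AND    b.name = 'opened cursors current'
AND    p.name = 'open_cursors'
GROUP  BY p.value;
```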

Werner

Billy Verreynne (Posts: 4,016; Registered: 5/27/99)
Re: no of open cursor. Posted Aug 26, 2007 10:33 PM, in response to 174313

> How to resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 4 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
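A sketch of that follow-up step, joining a suspect session to its full SQL text (the SID value here is hypothetical):

```sql
-- Retrieve the full statement text for a suspect cursor-leaking session.
SELECT s.sid, s.username, s.program, t.piece, t.sql_text
FROM   v$session s, v$open_cursor o, v$sqltext t
WHERE  s.sid        = o.sid
AND    o.address    = t.address
AND    o.hash_value = t.hash_value
AND    s.sid        = 42
ORDER  BY t.piece;
```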

Nagaraj for performance tuning

you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events

if you have a statspack report generated, then you can have a look at the timed events

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus
stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
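The difference between the two link types can be demonstrated in a scratch directory (the paths below are temporary and hypothetical, not the /export examples above):

```shell
set -e
tmp=$(mktemp -d)
echo "data" > "$tmp/original"

ln "$tmp/original" "$tmp/hardlink"   # hard link: a second name for the same inode
ln -s "$tmp/original" "$tmp/symlink" # symbolic link: a pointer to the path

cat "$tmp/symlink"                   # reads through the pointer

rm "$tmp/original"                   # delete the original name
cat "$tmp/hardlink"                  # the hard link still reaches the data
[ ! -e "$tmp/symlink" ] && echo "symlink now dangles"

rm -rf "$tmp"
```

Run with any POSIX shell: the hard link survives deletion of the original, while the symlink is left dangling.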

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note: most filesystems do not permit ordinary users to create hard links to directories.)

If you want to move all the objects in a tablespace to another tablespace, just do the following:

>spool <urpath>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query is stored in the spool file objects_move.log:

>@<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.

if u want to move all the objects to another tablesapce just do the following

SQL> spool <your_path>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace XYZ;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<your_path>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for the moved objects.
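The generate-then-run pattern behind the spool trick can be sketched offline; segments.txt and its contents are made-up stand-ins for rows from dba_segments:

```shell
# Build "alter ... move tablespace" statements from a segment list, the way the
# spooled query builds them from dba_segments, then show the generated script.
mkdir -p /tmp/movedemo && cd /tmp/movedemo
cat > segments.txt <<'EOF'
TABLE EMP
TABLE DEPT
EOF
awk '{ printf "alter %s %s move tablespace XYZ;\n", tolower($1), $2 }' \
    segments.txt > objects_move.log
cat objects_move.log
```

In SQL*Plus the spooled file would then be run with @objects_move.log; only tables move this way, which is why the indexes are rebuilt afterwards.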

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop tracing, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount; it lets the DBA perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users value is the number of database users that can be granted SYSDBA or SYSOPER; it should be set higher than the number of anticipated users to avoid having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3 Grant SYSDBA or SYSOPER to users When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4 Confirm that the user is listed in the password file

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced: Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is called for. A logon trigger could be used to set workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
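Back-of-the-envelope arithmetic for that 5% figure (the target is a sample value; the cap is approximate, not an exact formula):

```shell
# ~5% of a ~2.4 GB pga_aggregate_target is the rough per-session ceiling.
pga_target=2516582400                 # bytes (sample aggregate PGA target)
per_session=$(( pga_target / 20 ))    # 5% = 1/20
echo "$per_session"                   # 125829120 bytes, i.e. about 120 MB
```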

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has Statspack statistics, then there is historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET       ESTIMATED        ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR      ADV  BYTES PROCESSED  BYTES R/W        CACHE HIT      ALLOC COUNT
----------- ----------- ---- ---------------- ---------------- -------------- --------------
   12582912         0.5 ON           17250304                0         100.00              3
   18874368        0.75 ON           17250304                0         100.00              3
   25165824         1.0 ON           17250304                0         100.00              0
   30198784         1.2 ON           17250304                0         100.00              0
   35231744         1.4 ON           17250304                0         100.00              0
   40264704         1.6 ON           17250304                0         100.00              0
   45297664         1.8 ON           17250304                0         100.00              0
   50331648         2.0 ON           17250304                0         100.00              0
   75497472         3.0 ON           17250304                0         100.00              0
  100663296         4.0 ON           17250304                0         100.00              0
  150994944         6.0 ON           17250304                0         100.00              0
  201326592         8.0 ON           17250304                0         100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA target this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the total current PGA usage summed across all processes.

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail    -- checks whether the audit trail is turned on

if the output is

NAME          TYPE     VALUE
------------- -------- ------
audit_trail   string   DB

then go to step 3; otherwise:
(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in $ORACLE_HOME/admin/pfile and add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;    -- removes any audit-trail data already residing in the table
   SQL> audit table;            -- starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    -- gives you the username, along with the userhost from which that user was connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential, incremental, and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:
1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj:

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.
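The distinction in the exchange above can be mimicked with plain file timestamps. This is a toy sketch using find -newer, not RMAN; the marker files stand in for the level 0 and most recent incremental backup times:

```shell
# marker_full = time of the last level 0 (full) backup;
# marker_incr = time of the most recent incremental backup.
mkdir -p /tmp/bkdemo && cd /tmp/bkdemo
touch -t 202001010000 a.dat   # unchanged since long before the full
touch marker_full
sleep 1
touch b.dat                   # changed after the full backup
touch marker_incr
sleep 1
touch c.dat                   # changed after the incremental backup
# Cumulative: everything changed since the level 0 backup (b.dat and c.dat)
find . -name '*.dat' -newer marker_full
# Differential incremental: only what changed since the last backup (c.dat)
find . -name '*.dat' -newer marker_incr
```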

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> 'No space left on device' sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


NLS_COMP                 BINARY
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE
NLS_RDBMS_VERSION        9.2.0.6.0

If the default temporary tablespace is wrong, alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check the default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check whether every user's TEMPORARY_TABLESPACE is set correctly:

USERNAME    TEMPORARY_TABLESPACE   ACCOUNT_STATUS
----------- ---------------------- ------------------
SYS         TEMPRY                 OPEN
SYSTEM      TEMP                   OPEN
OUTLN       TEMP                   OPEN
DBSNMP      TEMP                   OPEN
DBMONITOR   TEMP                   OPEN
TEST        TEMP                   OPEN
WMSYS       TEMP                   EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter the user (for example SYS) to point at the correct tablespace with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively, recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database:

SQL> drop tablespace temp including contents and datafiles;

SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them? They are two very large files (150-160 MB each):
920/assistants/dbca/templates/Data_Warehouse.dfj
920/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan (Messages: 86, Registered: June 2005, Member)

Hi,
Files that have a .DJF extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation, we have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE LOGOFF ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.101.37:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v    (reports whether the kernel is 32- or 64-bit; use psrinfo to count the CPUs)

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837      1  0  May 24      11:47 ora_pmon_poi
ora9i    2305      1  0  Mar 29      23:59 ora_pmon_portal
ora9i    2321      1  0  Mar 29      24:17 ora_pmon_EDMS
ora10g  17394      1  0  Apr 02    1:28:57 ora_pmon_POI2
orainst 14743  14365  0  11:02:43  pts/3  0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load a dump into the Sybase database:

load database database_name from '<path>'

load database database_name from "compress::<path>"

load database database_name from "compress::<path01>"
stripe on "compress::<path02>"

dump database database_name to '<path>'

These scripts should be run to install the JVM:

/javavm/install/initjvm.sql
/opt/oracle10g/xdk/admin/initxml.sql
/opt/oracle10g/xdk/admin/xmlja.sql
/opt/oracle10g/rdbms/admin/catjava.sql
/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql
rdbms/admin/rmaqjms.sql
rdbms/admin/rmcdc.sql
xdk/admin/rmxml.sql
javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE - Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm (database name)
6. sp_who
7. go
8. shutdown with nowait
9. /Sybase/syb125/ASE-12-5/install
10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb
12. sp_configure
13. sp_configure "parameter", new_value

vgdisplay -v vg02 | grep 'LV Name' | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers database-wide statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admins and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME ('/var/opt/oracle/oraInst.loc')?

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
       b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = '<object_name>' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql, spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i
  export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

optinfoallinfo

For HP-UX filesystem extension:

fuser -c /oradata2
umount /oradata2
lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2
mount /oradata2

/dev/vg01/lvwls  2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS       - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE  - All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD  - All object grants to user or public
DBA_SYS_PRIVS       - System privileges granted to users and roles
DBA_ROLES           - List of all roles in the database
DBA_ROLE_PRIVS      - Roles granted to users and to other roles
ROLE_ROLE_PRIVS     - Roles granted to other roles
ROLE_SYS_PRIVS      - System privileges granted to roles
ROLE_TAB_PRIVS      - Table privileges granted to roles
SESSION_PRIVS       - All privileges currently available to user
SESSION_ROLES       - All roles currently available to user
USER_SYS_PRIVS      - System privileges granted to current user
USER_TAB_PRIV       - Grants on objects where current user is grantee, grantor, or owner

DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-table

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
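For example (a scratch file and a small size, just to show the shape of the timing):

```shell
# Bracket a dd copy with date to get a crude elapsed-time measurement.
date
dd if=/dev/zero of=/tmp/dd_demo.out bs=1024 count=1024 2>/dev/null  # 1 MB of zeros
date
ls -l /tmp/dd_demo.out   # 1048576 bytes written
```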

isainfo -v    (the output shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_ot

Script for starting and stopping the database: /sybdata1/syb126IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts under Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the file 'oracle' has the following permissions (in $ORACLE_HOME/bin): 6751. If not:
1. Log in as the oracle user
2. Shut down (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using ls -l oracle; they should be -rwsr-s--x
Start up the db and try connecting as a DBA or non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0
Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete.

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures where name like device
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near 'device'.
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of devices              10          36          60           60          number               dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of devices              10          44          70           70          number               dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the object name and lock holder:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find the locked object and the SQL it is running:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
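Putting those five fields in front of a command gives a complete entry; the script path below is a hypothetical example:

```shell
# Field order: minute hour day-of-month month day-of-week command
# Run a (hypothetical) cleanup script at 02:30 every Sunday:
entry='30 2 * * 0 /home/oracle/scripts/cleanup_logs.sh'

# The first five whitespace-separated fields are the time pattern;
# the remainder of the line is the command cron hands to the shell.
echo "$entry" | awk '{print $1, $2, $5}'
```

The entry would be installed with `crontab file` or removed with `crontab -r`, as described above.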

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON  USED
----------------------  ----------  ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO
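Once an index has shown USED = NO over a representative workload period, monitoring can be switched off and the index dropped; the index name below is the one from the sample output:

```sql
ALTER INDEX customer_last_name_idx NOMONITORING USAGE;
DROP INDEX customer_last_name_idx;
```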

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books) and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

SYSOPER privileges:

Perform STARTUP and SHUTDOWN operations, CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
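A sketch of that sum in a single query (sizes in bytes; this assumes 9i or later, where v$controlfile exposes block_size and file_size_blks, and redo size is bytes per member times members):

```sql
SELECT (SELECT SUM(bytes) FROM dba_data_files)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
     + (SELECT SUM(bytes * members) FROM v$log) AS total_bytes
FROM dual;
```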

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type Format

Controlfiles: ora_%u.ctl

Redo Log Files: ora_%g_%u.log

Datafiles: ora_%t_%u.dbf

Temporary Datafiles: ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
'/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management by an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
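One hedged sketch of that maintenance (the 90-day retention and the archive table name are arbitrary choices, and this assumes a 9i-style trail where the audit timestamp column is timestamp#):

```sql
-- Archive rows older than 90 days, then purge them from the trail:
CREATE TABLE audit_archive AS
  SELECT * FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```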

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" -- HP-UX/AIX

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID:

Note 224270.1 Type: DIAGNOSTIC

TOOLS

Last Revision

Date 30-MAY-2007

Status PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in the prior version of this tool and in the current TKPROF.

Product Name, Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from

the schema owning the transaction that generated the raw SQL Trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and it can be executed from

any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled.

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:
<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process (Back)-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing, and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds, in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;

FILE_NAME                                  AUT
------------------------------------------ ---
/oradata1/CDOi1/data/SFWEB_TS.dbf          YES
/oradata1/CDOi1/data/SFWEB_IS.dbf          YES
/oradata1/CDOi1/data/ULOG_TS.dbf           YES
/oracle/CDOi1/data/users02.dbf             YES

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1 Type: PROBLEM

Last Revision

Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet

Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer

browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the

Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013

MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078

MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013

MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013

MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination: home's service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination: home's service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer

5.x and 6.x

- The client machines will have a wininet.dll with a version number of

6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll

-> Right click on the file

-> Select Properties

-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)

Cause

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

vvv Oracle Note 269980.1 vvvvvvv

# KeepAlive On

KeepAlive Off

^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining plan stability for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user: su - db2inst1

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
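To see how close each session is getting to the OPEN_CURSORS limit, a query along these lines can be used. This is a sketch; v$sesstat, v$statname and the 'opened cursors current' statistic are standard, but verify the output against your own environment:

```sql
-- Current open cursor count per session, highest first
-- (compare the top values against the OPEN_CURSORS parameter)
select s.sid, s.username, st.value as open_cursors
from v$sesstat st
join v$statname n on n.statistic# = st.statistic#
join v$session  s on s.sid = st.sid
where n.name = 'opened cursors current'
order by st.value desc;
```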

Werner

Billy Verreynne - Re: no. of open cursors (posted Aug 26, 2007):

> How to resolve this if the no. of open cursors exceeds the value given in init.ora?

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors and using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning:

You may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db' from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file that appears just like a file, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only an additional reference to the file's data, not a copy of the file; the data is removed only when the last link to it is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same (note, though, that most filesystems do not permit hard links to directories). To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot
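A quick throwaway demo of the difference in behavior described above. The /tmp paths and file names here are examples only:

```shell
# Throwaway demo under /tmp: hard links keep the data alive after the
# original name is removed; symbolic links are left dangling.
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
echo "hello" > original.txt
ln -f original.txt hard.txt      # hard link: a second name for the same inode
ln -sf original.txt soft.txt     # symbolic link: a pointer to the name
rm -f original.txt
cat hard.txt                     # prints: hello (the data still has one link)
cat soft.txt 2>/dev/null || echo "dangling symlink"
```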


If you want to move all the objects to another tablespace, just do the following:

>spool <ur_path>objects_move.log

>select 'alter '||segment_type||' '||segment_name||' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<ur_path>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes

and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA - August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error: SQL> grant sysdba to scott; ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.
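As a sketch of that per-session override (the parameter names are the documented ones; the 100 MB figure is purely illustrative):

```sql
-- Switch one session to manual PGA management for a big sort/load
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;  -- e.g. 100 MB, illustrative value

-- ... run the large import/sort here ...

-- Return the session to automatic management afterwards
ALTER SESSION SET workarea_size_policy = AUTO;
```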

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

      PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
         FOR EST  FACTOR     ADV  BYTES PROCESSED  BYTES RW  CACHE HIT  ALLOC COUNT
---------------- ---------- --- ---------------- --------- ---------- ------------

        12582912        0.5  ON         17250304         0     100.00            3

        18874368        0.75 ON         17250304         0     100.00            3

        25165824        1.0  ON         17250304         0     100.00            0

        30198784        1.2  ON         17250304         0     100.00            0

        35231744        1.4  ON         17250304         0     100.00            0

        40264704        1.6  ON         17250304         0     100.00            0

        45297664        1.8  ON         17250304         0     100.00            0

        50331648        2.0  ON         17250304         0     100.00            0

        75497472        3.0  ON         17250304         0     100.00            0

       100663296        4.0  ON         17250304         0     100.00            0

       150994944        6.0  ON         17250304         0     100.00            0

       201326592        8.0  ON         17250304         0     100.00            0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management:

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already-canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a drop table command in a database:

1. login into the db as sysdba

2. sql>show parameter audit_trail    -> checks if the audit trail is turned on

if the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:
(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$    -> to remove any audit trail data residing in the table

4. sql>audit table    -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    -> this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on, until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups: 1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
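In RMAN terms, the two flavors map onto the INCREMENTAL LEVEL syntax. A minimal sketch (these are standard RMAN commands, but verify the exact behavior against your RMAN release):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # base (level 0) backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              # differential: blocks changed since the last level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # cumulative: blocks changed since the last level 0
```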

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things to you in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them?

It's two very large files (150-160 MB each):
9.2.0/assistants/dbca/templates/Data_Warehouse.dfj
9.2.0/assistants/dbca/templates/Transaction_Processing.dfj

Re: What are these files GOOD for? [message 126248 is a reply to message 126216]

Sat, 02 July 2005 00:09

Achchan (Member, registered June 2005)

Hi,
Files that have a DFJ extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them, you won't be able to use those db creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation we have to run this script:

initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;
/

BACKUP PATH

507 mount 10.237.10.137:/unixbkp /backup
508 cd backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst .
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK A USER ACCOUNT IN ORACLE

alter user user_name account lock;

CHANGE TABLESPACE BLOCK SIZE ISSUE

db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLES IN ORACLE

export PATH=/opt/java14/bin:$PATH
export JAVA_HOME=/opt/java14/jre

ora9i    8837     1  0  May 24  ?      11:47 ora_pmon_poi
ora9i    2305     1  0  Mar 29  ?      23:59 ora_pmon_portal
ora9i    2321     1  0  Mar 29  ?      24:17 ora_pmon_EDMS
ora10g  17394     1  0  Apr 02  ?    1:28:57 ora_pmon_POI2
orainst 14743 14365  0  11:02:43 pts/3 0:00 grep pmon

CREATE DIRECTORY

create directory utl_dir as 'path';
grant all on directory utl_dir

Modify the given parameter:

utl_file_dir

If any timeout request:

SQLNET.INBOUND_CONNECT_TIMEOUT

Any privilege for a DBMS package:

grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database:

load database database_name from 'path'

load database database_name from 'compress::path'

load database database_name from 'compress::path01'
stripe on 'compress::path02'

dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by

running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.10.137 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure 'parameter', newvalue

vgdisplay -v vg02 | grep "LV Name" | more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics, including the number of rows in each table:

exec dbms_stats.gather_database_stats;

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimted on tablespace_name

This is most likely a bug I would recommend to apply patchset 9207 As oracle recommends at least 9203 versiyon Anyway you can try below fix

Change the listener and database services Log On user to domain user who is a member of the groups domain admin and ORA_DBA group The default setting is Local System Account

- Run regedit- Drill down to HKEY_LOCAL_MACHINESYSTEMCurrentControlSetServices- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)- Reboot the entire Windows box- When started and logged on as the Oracle user go to a DOS Command prompt- Run lsnrctl start ltlistener_namegt without the single quotes and replacing ltlistener_namegt with the name- An OS error of 1060 will be seen (normal) as the service is missing- The listener should start correctly or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Are you perhaps using two ORACLE_HOMEs? (/var/opt/oracle -- Install loc)

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type from v$locked_object a, dba_objects b where a.object_id=b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name= and a.session_id=b.sid and status='ACTIVE' and sw.sid=b.sid;

spcreate.sql, spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i; export ORACLE_SID
sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit;
EOF
done

/opt/info/allinfo

For HP-UX filesystem extension:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS - All object grants where the user or PUBLIC is grantee
ALL_TAB_PRIVS_MADE - All object grants made by the user or on user-owned objects
ALL_TAB_PRIVS_RECD - All object grants to the user or PUBLIC
DBA_SYS_PRIVS - System privileges granted to users and roles
DBA_ROLES - List of all roles in the database
DBA_ROLE_PRIVS - Roles granted to users and to other roles
ROLE_ROLE_PRIVS - Roles granted to other roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_PRIVS - All privileges currently available to the user
SESSION_ROLES - All roles currently available to the user
USER_SYS_PRIVS - System privileges granted to the current user
USER_TAB_PRIVS - Grants on objects where the current user is grantee, grantor, or owner

DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
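The `date` calls before and after `dd` give a crude wall-clock timing of the copy. A minimal sketch (the /tmp file names here are hypothetical, not from the original note):

```shell
# Hypothetical input file, created only for demonstration
printf 'backup payload\n' > /tmp/dd_demo_in

date                                      # start timestamp
dd if=/tmp/dd_demo_in of=/tmp/dd_demo_out bs=4096 2>/dev/null
date                                      # end timestamp

# Confirm the copy is byte-identical to the source
cmp -s /tmp/dd_demo_in /tmp/dd_demo_out && echo "copy OK"
```

For real backups, `dd` output is usually a raw device or a file on the NFS backup mount shown earlier.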

isainfo -v - shows whether the OS is 32-bit or 64-bit

10.237.209.11

isql -Udba -Scso_ot (password: SQL)

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
  MAXLOGFILES 16
  MAXLOGMEMBERS 2
  MAXDATAFILES 30
  MAXINSTANCES 1
  MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
  where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file "oracle" in $ORACLE_HOME/bin has the following permissions: 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
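Spelled out with its format specifiers, the command prints the date and time on two labeled lines (`%n` is a newline):

```shell
# DATE: mm/dd/yy on the first line, TIME: HH:MM:SS on the second
date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
```

This is handy for timestamping the start and end of backup scripts, as in the dd example above.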

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete.

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ...
1> select name from sysconfigures where name like "%device%"
2> go
 name
--------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 -----------------  -------  -----------  ------------  ---------  ------  -------
 number of devices  10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
 -----------------  -------  -----------  ------------  ---------  ------  -------
 number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This query is used to find the object name and lock holder:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine from v$locked_object a, v$session b, dba_objects c where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and its SQL text:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
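Putting the five fields together, here is a sample entry (the backup script path is hypothetical) that would run at 02:30 every Sunday:

```shell
# minute hour day-of-month month day-of-week  command
# 30     2    *            *     0            -> 02:30 every Sunday
echo '30 2 * * 0 /home/oracle/scripts/full_backup.sh' > /tmp/demo_crontab

# Inspect the entry; in practice you would load it with: crontab /tmp/demo_crontab
cat /tmp/demo_crontab
```

A `*` in a field matches all legal values, so the entry above is unrestricted by day of month and month.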

select s.machine from v$process p, v$session s where s.paddr=p.addr and p.spid=17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify the indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple. Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON  USED
----------------------  ----------  ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New character set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset, you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also
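The allow/deny check described in the crontab man page above can be sketched as a small shell function. The file locations are parameters here because they vary by platform (/usr/lib/cron on HP-UX, /etc/cron.d elsewhere); the demo file names are hypothetical:

```shell
# Return 0 if user $1 may use crontab, given allow file $2 and deny file $3.
# Mirrors the man-page logic: the allow file wins if present; otherwise the
# deny file is consulted; if neither exists, only root may submit jobs.
may_use_crontab() {
  user=$1 allow=$2 deny=$3
  if [ -f "$allow" ]; then
    grep -qx "$user" "$allow"
  elif [ -f "$deny" ]; then
    ! grep -qx "$user" "$deny"
  else
    [ "$user" = root ]
  fi
}

# Demonstration with a temporary allow file
printf 'oracle\n' > /tmp/cron.allow.demo
may_use_crontab oracle /tmp/cron.allow.demo /tmp/cron.deny.missing && echo "oracle allowed"
may_use_crontab sybase /tmp/cron.allow.demo /tmp/cron.deny.missing || echo "sybase denied"
```

Note the empty-deny-file case: if only cron.deny exists and is empty, `grep` finds no match for any user, so everyone is permitted, exactly as the man page states.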

How to calculate the database size

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb" FROM DBA_SEGMENTS WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb" FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
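At the OS level, that sum is just the combined size of the database files. A sketch using stand-in files and awk (the /tmp paths and globs are hypothetical; on a real system you would point them at the actual datafiles, controlfiles, and redo logs):

```shell
# Stand-in files for datafiles, controlfiles and redo logs (demo only)
mkdir -p /tmp/dbsize_demo
dd if=/dev/zero of=/tmp/dbsize_demo/system01.dbf bs=1024 count=10 2>/dev/null
dd if=/dev/zero of=/tmp/dbsize_demo/control01.ctl bs=1024 count=2  2>/dev/null
dd if=/dev/zero of=/tmp/dbsize_demo/redo01.log    bs=1024 count=4  2>/dev/null

# database size = datafiles + controlfiles + redo logs, reported in KB
ls -l /tmp/dbsize_demo/*.dbf /tmp/dbsize_demo/*.ctl /tmp/dbsize_demo/*.log |
  awk '{ total += $5 } END { printf "database size: %d KB\n", total / 1024 }'
# prints: database size: 16 KB
```

The same pipeline against $ORACLE_BASE/oradata/<SID> gives a quick OS-side cross-check of the SQL queries above.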

Regards, Taj (http://dbataj.blogspot.com) Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)
OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name:

File Type             Format
Controlfiles          ora_%u.ctl
Redo Log Files        ora_%g_%u.log
Datafiles             ora_%t_%u.dbf
Temporary Datafiles   ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF
During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF
When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF
As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, pressure on the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch To Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing
The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup
To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options
Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail
The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" - HP-UX/AIX

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID

Note: 224270.1    Type: DIAGNOSTIC

TOOLS

Last Revision

Date 30-MAY-2007

Status PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with the SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information
Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

cgt cd oracleora92TraceAnalyzer

Start SQLPlus

coracleora92TraceAnalzyergt sqlplus systemltpwdgtltservice_namegt

Run Trace Analysis

TRCANLZRsql UDUMP orabase_ora_1708trc

Exit SQLPlus

The trace analysis will be located in the TraceAnalyzer directorywith the name TRCANLZR_orabase_ora_1708LOG

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info: the problem was with /var/tmp/.oracle. This directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K   (AIX: kernel bitness)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs (also known as SNPn)
=========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:
<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing (also known as AQ, QMN)
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

It is best to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g., a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via the CONSUMED_CPU_TIME column of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).
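Since the statistic is reported in centiseconds, converting a raw value to seconds is a division by 100; a small sketch (the sample value 22 is the one from the text above):

```python
# 'CPU used by this session' is reported in 1/100ths (centiseconds) of a second.
def cpu_centiseconds_to_seconds(value):
    """Convert a raw v$sesstat 'CPU used by this session' value to seconds."""
    return value / 100.0

print(cpu_centiseconds_to_seconds(22))  # 0.22
```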

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007  Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

  (b) MOD_OC4J_0015 MOD_OC4J_0078
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

  (c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

  (d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates browser clients' open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate the change into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevance 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

Tablespace OFFLINE options:

NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries, via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size:

prtconf

DB2:
1. Login as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Posts: 4,016, Registered: 5/27/99

Re: no. of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to 174313.

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e., application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 desc;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
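The grouping logic of that query can be mimicked in Python over hypothetical (sid, address, hash_value) rows, to make the GROUP BY ... HAVING COUNT(*) > 2 idea concrete (all row values below are made up):

```python
from collections import Counter

# Hypothetical rows from v$open_cursor: (sid, address, hash_value).
# A cursor-leaking session accumulates many handles for the very same SQL.
rows = [
    (101, '0x1A2B', 555111),
    (101, '0x1A2B', 555111),
    (101, '0x1A2B', 555111),
    (101, '0x1A2B', 555111),
    (102, '0x9F00', 777222),
]

copies = Counter(rows)
# Equivalent of: GROUP BY sid, address, hash_value HAVING COUNT(*) > 2
suspects = {key: n for key, n in copies.items() if n > 2}
print(suspects)  # {(101, '0x1A2B', 555111): 4}
```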

Nagaraj: for performance tuning,

you may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile
sp_iqstatus
stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only another reference to the original file, not a copy of the file. If the original name is deleted, the data is still reachable through the hard link.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same (note that most filesystems do not actually permit hard links to directories). To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<your path>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

>spool <your path>/objects_move.log

>select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<your path>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
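As a back-of-the-envelope illustration, 5% of the 'aggregate PGA target parameter' shown in the v$pgastat output later in this section lands close to that output's 'global memory bound' figure. The 5% factor is approximate, not an exact formula:

```python
# Approximate per-session work-area cap under automatic PGA management:
# a single session normally gets about 5% of pga_aggregate_target.
def per_session_cap_bytes(pga_aggregate_target, pct_times_100=5):
    # Integer arithmetic keeps the result exact.
    return pga_aggregate_target * pct_times_100 // 100

target = 2_516_582_400  # 'aggregate PGA target parameter' from the v$pgastat sample
print(per_session_cap_bytes(target))  # 125829120 bytes, i.e. 120 MB
```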

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic 'maximum PGA allocated' will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET      ESTIMATED EXTRA   ESTIMATED PGA  ESTIMATED OVER
FOR EST      FACTOR      ADV BYTES PROCESSED   BYTES RW       CACHE HIT      ALLOC COUNT
------------ ---------- ---- ----------------- -------------- -------------- -----------
    12582912        0.5  ON           17250304              0         100.00           3
    18874368       0.75  ON           17250304              0         100.00           3
    25165824        1.0  ON           17250304              0         100.00           0
    30198784        1.2  ON           17250304              0         100.00           0
    35231744        1.4  ON           17250304              0         100.00           0
    40264704        1.6  ON           17250304              0         100.00           0
    45297664        1.8  ON           17250304              0         100.00           0
    50331648        2.0  ON           17250304              0         100.00           0
    75497472        3.0  ON           17250304              0         100.00           0
   100663296        4.0  ON           17250304              0         100.00           0
   150994944        6.0  ON           17250304              0         100.00           0
   201326592        8.0  ON           17250304              0         100.00           0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Login to the db as sysdba.

2. SQL> show parameter audit_trail   -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate   -- to enable the audit trail
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put in the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;   -- removes any audit trail data residing in the table

4. SQL> audit table;   -- starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like '%DROP TABLE%';

This query gives you the username, along with the userhost from where the user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday; you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups: 1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
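The distinction above can be sketched outside RMAN with a tiny Python model; the day names and changed-block sets below are made up purely for illustration:

```python
# Hypothetical model of the two incremental flavors. A level 0 backup is taken
# on Sunday; some blocks change each following day. A differential level 1
# copies only blocks changed since the previous level 1, while a cumulative
# level 1 copies everything changed since the level 0.
changed = {"mon": {1, 2}, "tue": {3}, "wed": {2, 4}}

# Differential: Wednesday's backup carries only Wednesday's changes.
diff_wed = changed["wed"]

# Cumulative: Wednesday's backup carries all changes since Sunday's level 0.
cumul_wed = changed["mon"] | changed["tue"] | changed["wed"]

print(sorted(diff_wed))   # [2, 4]
print(sorted(cumul_wed))  # [1, 2, 3, 4]
```

The trade-off described in the thread falls out directly: the cumulative set is larger (more space and time per backup) but a restore needs only the level 0 plus that one backup.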

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm Suraj

RE: Incremental RMAN Backups. I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. > Maybe the disk where you want to create the database is full. Another > point could be insufficient swap space, but I would expect another error > message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
• A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
• A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process

BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl VALUES (sys_context('userenv','session_user'), SYSDATE);
END;

BACKUP PATH

507 mount 10.237.101.37:/unixbkp /backup
508 cd /backup
509 df -k
510 cd backup
511 ls
512 clear
513 ls
514 mkdir jpmc_bak
515 cd jpmc_bak
516 ls
517 df -k /u02
518 pwd
519 ls /u02
520 pwd
521 cp -rpf /u02/ccsystst
522 ls -ltr
523 history

NO. OF CPUs

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock

CHANGE TABLESPACE BLOCK SIZE ISSUE

Db_2k_cache_size=10m

OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i 8837 1 0 May 24 11:47 ora_pmon_poi
ora9i 2305 1 0 Mar 29 23:59 ora_pmon_portal
ora9i 2321 1 0 Mar 29 24:17 ora_pmon_EDMS
ora10g 17394 1 0 Apr 02 1:28:57 ora_pmon_POI2
orainst 14743 14365 0 11:02:43 pts/3 0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl_dir

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '27229' and s.paddr = p.paddr;

Load dump to the Sybase database

Load database database_name from 'path'

Load database database_name from "compress::path"

Load database database_name from stripe_on "compress::path01"

Stripe on "compress::path02"

Dump database database_name to 'path'

Those scripts should run for install JVM

javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by

running the utlrp.sql script, e.g.

/opt/oracle10g/rdbms/admin/utlrp.sql

Those scripts should run for Uninstall JVM

rdbms/admin/catnoexf.sql

rdbms/admin/rmaqjms.sql

rdbms/admin/rmcdc.sql

xdk/admin/rmxml.sql

javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE ndashDatabase

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure 'parameter', new_value

vgdisplay -v vg02 | grep "LV Name" | more

For Truncate the table

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command for gather statistics of Number of rows in each table

exec dbms_stats.gather_database_stats;

create or replace procedure sess1.kill_session (v_sid number, v_serial number) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7, as Oracle recommends at least the 9.2.0.3 version. Anyway, you can try the fix below.

Change the listener and database services' "Log On" user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS Command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes and replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double ORACLE_HOME ('/var/opt/oracle' -- Install.loc).

zfs set quota=10G datapool/zfs/oracle

select oracle_username, os_user_name, locked_mode, object_name, object_type from v$locked_object a, dba_objects b where a.object_id=b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,

b.logon_time, 'SESSION WAIT', sw.* From dba_ddl_locks a, v$session b, v$session_wait sw Where name='' and a.session_id=b.sid and status='ACTIVE' and sw.sid=b.sid;

spcreate.sql spreport.sql
for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i; export ORACLE_SID
sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

/opt/info/allinfo

For HP-UX File Extend

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE - All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD - All object grants to user or public
DBA_SYS_PRIVS - System privileges granted to users and roles
DBA_ROLES - List of all roles in the database
DBA_ROLE_PRIVS - Roles granted to users and to other roles
ROLE_ROLE_PRIVS - Roles granted to other roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_PRIVS - All privileges currently available to user
SESSION_ROLES - All roles currently available to user
USER_SYS_PRIVS - System privileges granted to current user
USER_TAB_PRIV - Grants on objects where current user is grantee, grantor or owner

DBA_TAB_PRIVS
/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name, df.bytes / 1024 / 1024 allocated_mb,
((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-table

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespaces user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v - shows whether the OS is 32-bit or 64-bit

10.237.209.11

isql -Udba -Scso_ot (give the password at the prompt to reach the SQL prompt)

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on Unix PID: select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '<unix_pid>' and s.paddr = p.paddr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
 where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL

10.237.51.64 - Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, then I get the following error: SQL> connect myuserid/mypassword ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file "oracle" has the following permissions: cd $ORACLE_HOME/bin; 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using the following: ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid
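As a cross-check of the permissions above: octal mode 6751 (setuid and setgid bits on top of 751) is exactly what ls -l renders as -rwsr-s--x. A small Python illustration using the standard stat module:

```python
import stat

# Mode 6751 = setuid (4000) + setgid (2000) + rwxr-x--x (751).
mode = 0o6751

# stat.filemode() renders a mode the way `ls -l` does; S_IFREG marks a
# regular file, which supplies the leading '-'.
print(stat.filemode(stat.S_IFREG | mode))  # -rwsr-s--x
```

The 's' in the owner and group triads is the executable bit combined with setuid/setgid, which is what lets the oracle binary run with the owner's privileges.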

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete.

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go

Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go
name

--------------------------------------------------------------------------------
number of devices

suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          36          60           60          number     dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          44          70           70          number     dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This Query is used to find out the object name and lock id

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine from v$locked_object a, v$session b, dba_objects c where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run the setup.exe
2. shutdown the database
3. startup migrate
4. Run the below scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
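Putting the five fields together (the command forms the sixth), a sample entry that runs a hypothetical backup script at 02:30 every Sunday looks like:

```
30 2 * * 0 /home/oracle/scripts/daily_backup.sh > /tmp/daily_backup.log 2>&1
```

The script path here is only an example; substitute your own, and note that redirecting output keeps cron from mailing it to the user.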

select s.machine from v$process p, v$session s where s.paddr = p.paddr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple. Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called "used", which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME      MON USED
----------------------- --------------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER        YES NO

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations, CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes
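The "same codepoint value" condition is easy to verify for the US7ASCII cases, since ASCII is by design a byte-for-byte subset of both Latin-1 and UTF-8. A quick check outside Oracle, using Python's codecs as stand-ins for the Oracle character set names:

```python
# A target character set is a strict superset when every source codepoint
# encodes to the identical value. For the 128 US7ASCII codepoints, both
# UTF-8 (Oracle UTF8/AL24UTFFSS family) and ISO-8859-1 (WE8ISO8859P1)
# produce the same single byte as plain ASCII.
ascii_in_utf8 = all(chr(cp).encode("utf-8") == bytes([cp]) for cp in range(128))
ascii_in_latin1 = all(chr(cp).encode("latin-1") == bytes([cp]) for cp in range(128))
print(ascii_in_utf8, ascii_in_latin1)  # True True
```

The reverse direction does not hold: a Latin-1 byte above 127 encodes to two bytes in UTF-8, which is precisely why a non-superset migration needs export/import rather than ALTER DATABASE CHARACTER SET.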

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional The character set name should be specified without quotes for example

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb" FROM DBA_TABLES

WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj (http://dbataj.blogspot.com) Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.
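As a sketch of that arithmetic with made-up component sizes (in a real database you would sum v$datafile.bytes, the controlfile sizes, and v$log.bytes):

```python
# Hypothetical component sizes in bytes; the datafile, controlfile, and
# redo log figures below are invented for illustration only.
datafile_bytes = [104_857_600, 52_428_800]   # two datafiles: 100M + 50M
controlfile_bytes = [7_340_032]              # one 7M controlfile
redo_bytes = [104_857_600, 104_857_600]      # two 100M redo log groups

# Database size = datafiles + controlfiles + redo logs, reported in MB.
total_mb = sum(datafile_bytes + controlfile_bytes + redo_bytes) / 1024 / 1024
print(total_mb)  # 357.0
```

This counts allocated size; as the follow-up comment notes, actually used space is a different question and is better answered from dba_extents.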

Oracle Managed Files (OMF)

OMF simplifies the creation of databases as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format

Controlfiles         ora_%u.ctl

Redo Log Files       ora_%g_%u.log

Datafiles            ora_%t_%u.dbf

Temporary Datafiles  ora_%t_%u.tmp
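To make the formats concrete, here is a small Python sketch that generates names of the shapes listed above; the 8-character unique code is simulated here, whereas in reality Oracle chooses it:

```python
import secrets

# Simulate the unique 8-character code Oracle appends to OMF file names.
u = secrets.token_hex(4)           # 8 hex characters
group, tablespace = 2, "users"     # example logfile group and tablespace

print(f"ora_{u}.ctl")              # controlfile
print(f"ora_{group}_{u}.log")      # redo log member for group 2
print(f"ora_{tablespace}_{u}.dbf") # datafile for the example tablespace
print(f"ora_{tablespace}_{u}.tmp") # temporary datafile
```

The group number and tablespace name here are invented examples; the point is only that every OMF name is self-describing, which is what makes the automatic cleanup on DROP safe.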

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1/12/2006

Oracle has done it again Venture with me down what seems like a small option but in fact has major implications on what we as DBAs no longer have to manage

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management I truly think you can find something that you will like

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out of the box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the SEGMENT SPACE MANAGEMENT AUTO clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten much better at making sure new features work and are geared at truly improving database performance. Here is one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus / as sysdba (HP-UX, AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly, as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd c:\oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K   (AIX: kernel bitness)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE=(select s.sql_hash_value from v$process p, v$session s where s.paddr=p.addr and p.spid=11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process (Back)-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT ways of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
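At the operating-system level there is a quick cross-check of the word length. This is a hedged sketch using standard POSIX tools; note it reports the OS word size, which usually but not always matches the Oracle binary (the `file` command shown in the comment is the direct check when $ORACLE_HOME is available):

```shell
# Report the OS word length (32 or 64) via POSIX getconf.
bits=$(getconf LONG_BIT)
echo "This OS is ${bits}-bit"

# On most UNIX platforms you can also inspect the Oracle binary directly:
#   file $ORACLE_HOME/bin/oracle
```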

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;

Spooled output fragment (FILE_NAME / AUT columns):

Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf'   YES
Alter database datafile '/oracle/CDOi1/data/users02.dbf'     YES

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID Note2699801 Type PROBLEM

Last Revision

Date 08-FEB-2007

Status ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.
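The directive edit above can also be scripted. This is a hedged sketch (not part of the Oracle note) that flips KeepAlive with GNU sed against a copy of httpd.conf in the current directory; adjust the path for your installation:

```shell
# Back up httpd.conf, then turn the KeepAlive directive off in place.
# Assumes httpd.conf is in the current directory; adjust the path as needed.
cp httpd.conf httpd.conf.bak
sed -i 's/^KeepAlive On$/KeepAlive Off/' httpd.conf
grep '^KeepAlive' httpd.conf   # verify the change took effect
```

Restart the HTTP Server component afterwards, as the note requires.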

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no of open cursor — Posted: Aug 26, 2007 10:33 PM, in response to 174313


> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles open for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 4 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMESORA)2) Listener Configuration File (LISTENERORA)

3) Oracle Names Server Configuration File (NAMESORA) The Oracle Names server configuration file (NAMESORA) contains the parameters that specify the location domain

information and optional configuration parameters for each Oracle Names server NAMESORA is located in $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on

Windows NT

4) Oracle Connection Manager Configuration File (CMANORA) The Connection Manager configuration file (CMANORA) contains the parameters that specify preferences for using

Oracle Connection Manager CMANORA is located at $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows NT

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from backup 'sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. As opposed to a hard link, a symbolic link is required when linking from one filesystem to another, and it can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive
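The behavioral difference between the two link types matters when the original name is removed. This hedged sketch (using throwaway files in a scratch directory, not the paths above) shows that a hard link keeps the data alive while a symlink dangles:

```shell
# Work in a scratch directory with hypothetical file names.
cd "$(mktemp -d)"
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: a second name for the same inode
ln -s original.txt soft.txt     # symbolic link: a pointer to the name
rm original.txt
cat hard.txt                    # still prints "hello" - the data survives
cat soft.txt 2>/dev/null || echo "dangling symlink"
```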

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original name is deleted, the data remains accessible through the hard link; it is only removed when the last link is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

SQL> spool <ur_path>objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<ur_path>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Then rebuild the indexes and gather statistics for those objects.

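The spool-and-replay pattern above (generate one command per object with a query, then execute the spooled file) can be mimicked in shell. This is an illustrative sketch with made-up object names, using echo rather than a real database so the generated commands are merely printed:

```shell
# Generate a command per object name, save to a script, then run it -
# the same generate-then-execute pattern as the SQL*Plus spool file.
cd "$(mktemp -d)"
printf 'EMP\nDEPT\n' > objects.txt     # hypothetical segment names
awk '{ printf "echo alter table %s move tablespace xyz;\n", $1 }' objects.txt > objects_move.sh
sh objects_move.sh
```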

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
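As a rough illustration of the 5% rule above, here is a sketch (not Oracle code; the target value is hypothetical) of the per-session work-area cap:

```python
def session_workarea_cap_kb(pga_aggregate_target_bytes, pct=5):
    """Approximate per-session work-area cap: about 5% of the target, in KB."""
    return pga_aggregate_target_bytes * pct // 100 // 1024

# A hypothetical 2.4 GB pga_aggregate_target caps each session near 120 MB:
print(session_workarea_cap_kb(2516582400))  # 122880 (KB)
```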

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 8631 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET       PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST          FACTOR     ADV  BYTES PROCESSED  BYTES RW  CACHE HIT  ALLOC COUNT
---------------- ---------- ---  ---------------- --------- ---------- --------------
12582912         .50        ON   17250304         0         100.00     3
18874368         .75        ON   17250304         0         100.00     3
25165824         1.00       ON   17250304         0         100.00     0
30198784         1.20       ON   17250304         0         100.00     0
35231744         1.40       ON   17250304         0         100.00     0
40264704         1.60       ON   17250304         0         100.00     0
45297664         1.80       ON   17250304         0         100.00     0
50331648         2.00       ON   17250304         0         100.00     0
75497472         3.00       ON   17250304         0         100.00     0
100663296        4.00       ON   17250304         0         100.00     0
150994944        6.00       ON   17250304         0         100.00     0
201326592        8.00       ON   17250304         0         100.00     0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
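The reading of the advice view described above can be sketched as a simple filter (rows are hypothetical, mirroring the output shown earlier): pick the smallest pga_target_for_estimate whose estimated over-allocation count is zero.

```python
# (target_bytes, estimated_overalloc_count) pairs, as from v$pga_target_advice
advice = [(12582912, 3), (18874368, 3), (25165824, 0), (30198784, 0)]

def smallest_safe_target(rows):
    """Smallest PGA target whose estimated over-allocation count is zero."""
    safe = [target for target, overalloc in rows if overalloc == 0]
    return min(safe) if safe else None

print(smallest_safe_target(advice))  # 25165824, i.e. about 25M
```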

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage per process:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;    --> removes any audit trail data residing in the table
   SQL> audit table;    --> starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';
   This query gives you the username, along with the userhost from which that user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
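The difference between the two level-1 strategies can be sketched with a toy model of changed blocks (an illustration only, not RMAN internals):

```python
def level1_backup(changes, last_level0_day, last_backup_day, cumulative):
    """Blocks a level-1 backup copies today.

    changes: {day: set of blocks changed that day}.
    Differential-incremental copies blocks changed since the most recent
    backup of any level; cumulative copies everything since the level 0.
    """
    since = last_level0_day if cumulative else last_backup_day
    copied = set()
    for day, blocks in changes.items():
        if day > since:
            copied |= blocks
    return copied

changes = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
# Level 0 on day 0, a level-1 on day 2; it is now day 3:
print(sorted(level1_backup(changes, 0, 2, cumulative=False)))  # ['d']
print(sorted(level1_backup(changes, 0, 2, cumulative=True)))   # ['a', 'b', 'c', 'd']
```

The cumulative run re-copies what the day-2 backup already captured, which is exactly the space/time trade-off described above.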

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.

>> ORA-27154: post/wait create failed
>> ORA-27300: OS system dependent operation:semget failed with status: 28
>> ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
              • A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
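The difference between the two link types described above can be seen in a scratch directory (a sketch; file names are throwaway):

```shell
cd "$(mktemp -d)"
echo data > original.txt
ln original.txt hard.txt       # hard link: a second name for the same inode
ln -s original.txt soft.txt    # symbolic link: a pointer to the name
rm original.txt                # remove the original name
cat hard.txt                   # the data survives via the hard link
ls -l soft.txt                 # the symlink remains, but now points nowhere
```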

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

$ ps -ef | grep pmon
ora9i    8837     1  0   May 24   1147  ora_pmon_poi
ora9i    2305     1  0   Mar 29   2359  ora_pmon_portal
ora9i    2321     1  0   Mar 29   2417  ora_pmon_EDMS
ora10g  17394     1  0   Apr 02  12857  ora_pmon_POI2
orainst 14743 14365  0 11:02:43 pts/3 0:00 grep pmon

CREATE DIRECTORY:
create directory utl_dir as 'path';
grant all on directory utl

Modify the given parameter

utl_file_dir

If any timeout request

sqlnet.inbound_connect_timeout

Any privilege for DBMS package

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229' and s.paddr = p.addr;

Load dump to the Sybase database:

load database database_name from 'path'

load database database_name from "compress::path"

load database database_name from "compress::path01"
  stripe on "compress::path02"

dump database database_name to 'path'

These scripts should be run to install the JVM:

javavm/install/initjvm.sql
/opt/oracle10g/xdk/admin/initxml.sql
/opt/oracle10g/xdk/admin/xmlja.sql
/opt/oracle10g/rdbms/admin/catjava.sql
/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

These scripts should be run to uninstall the JVM:

rdbms/admin/catnoexf.sql
rdbms/admin/rmaqjms.sql
rdbms/admin/rmcdc.sql
xdk/admin/rmxml.sql
javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

1023710137mdashBackup Report

SYBASE database:

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm    (database name)
6. sp_who
7. go
8. shutdown with nowait
9. cd /sybase/syb125/ASE-12_5/install
10. startserver -f RUN_gsms
    online database gem_curr
11. sp_helpdb
12. sp_configure
13. sp_configure 'parameter', newvalue

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE
more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics, including the number of rows in each table:

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7, as Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services "Log On" user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double ORACLE_HOME? ('/var/opt/oracle' Install.loc)

zfs set quota=10G datapoolzfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = '' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql, spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i
  export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
EOF
done

optinfoallinfo

For HP-UX filesystem extend:

fuser -c /oradata2
umount /oradata2
lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2
mount /oradata2

devvg01lvwls 2097152 1457349 610113 70 weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIVS       Grants on objects where current user is grantee, grantor or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes/1024/1024 allocated_mb,
       ((df.bytes/1024/1024) - NVL(SUM(dfs.bytes)/1024/1024, 0)) used_mb,
       NVL(SUM(dfs.bytes)/1024/1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-table

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
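The dd timing idiom above, sketched with a concrete throwaway file (names are hypothetical; /dev/zero stands in for the real input):

```shell
cd "$(mktemp -d)"
date                                                     # start time
dd if=/dev/zero of=test.img bs=1024 count=4 2>/dev/null  # copy 4 x 1 KB blocks
date                                                     # end time
wc -c test.img                                           # 4096 bytes copied
```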

isainfo -v    (shows whether the OS is 32-bit or 64-bit)

1023720911

isql -Udba -Scso_otpwSQL

for start and stop the databasescript sybdata1syb126IQcso_ot

Recover database;
Alter database open;

1023720469

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL

102375164Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the file "oracle" has the permissions 6751:

cd $ORACLE_HOME/bin

If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x

Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows):

$ emca -repos create$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 minutes for it to complete:

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go
name
--------------------------------------------------------------------------------
number of devices
suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
number of devices   10       36           60            60         number  dynamic

(1 row affected, return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
number of devices   10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = 'gem_hist_data7',
physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
size = '1600M'
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line.

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
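Putting the five fields together, a hypothetical entry (the script name is invented for illustration) that runs at 02:30 every Sunday:

```shell
# minute hour day-of-month month day-of-week  command
30 2 * * 0 /usr/local/bin/full_backup.sh
```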

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999set heading off

spool run_monitorsql

select alter index ||owner||||index_name|| monitoring usagefrom dba_indexeswhere owner not in (SYSSYSTEMPERFSTAT)

spool off

run_monitor

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO


sysoper privileges

Perform STARTUP and SHUTDOWN operations

CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8.

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes
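The strict-superset condition can be illustrated with Python codecs (an analogy, not Oracle's character set machinery: Latin-1 stands in for WE8ISO8859P1, and EBCDIC cp1026 is a deliberate counterexample):

```python
def preserves_ascii_codepoints(encoding):
    """True if every 7-bit codepoint decodes to the same character."""
    return all(bytes([cp]).decode(encoding) == chr(cp) for cp in range(128))

print(preserves_ascii_codepoints("latin-1"))  # True: ISO 8859-1 keeps every ASCII codepoint
print(preserves_ascii_codepoints("cp1026"))   # False: EBCDIC places the letters elsewhere
```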

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional The character set name should be specified without quotes for example

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch user
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
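That sum can be sketched numerically (the sizes below are made-up; in practice they would come from dba_data_files, v$controlfile and v$log):

```python
def database_size_mb(datafiles_mb, controlfiles_mb, redo_logs_mb):
    """Database size = datafile sizes + controlfile sizes + redo log sizes."""
    return sum(datafiles_mb) + sum(controlfiles_mb) + sum(redo_logs_mb)

# Two datafiles, two controlfiles, three redo logs (all sizes in MB):
print(database_size_mb([500, 250], [10, 10], [100, 100, 100]))  # 1070
```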

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where u is a unique 8-digit code, g is the logfile group number, and t is the tablespace name:

File Type             Format
Controlfiles          ora_u.ctl
Redo Log Files        ora_g_u.log
Datafiles             ora_t_u.dbf
Temporary Datafiles   ora_t_u.tmp
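To make the patterns concrete, a small sketch that expands them into example names (the unique code, group number and tablespace name are made up; real OMF names are generated by Oracle):

```shell
# Expand the documented OMF name patterns with made-up values:
# u = unique code, g = logfile group number, t = tablespace name.
u="1a2b3c4d"; g="2"; t="users"
echo "controlfile:        ora_${u}.ctl"
echo "redo log file:      ora_${g}_${u}.log"
echo "datafile:           ora_${t}_${u}.dbf"
echo "temporary datafile: ora_${t}_${u}.tmp"
```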

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks besides the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management I truly think you can find something that you will like

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the SEGMENT SPACE MANAGEMENT AUTO clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus / as sysdba (HP-UX / AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract:

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;
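Rather than typing the nine statements, they can be generated; a convenience sketch (the object list and the SYSTEM grantee come from the note itself, the loop is just scripting):

```shell
# Emit the GRANT statements from step 6 for a given grantee.
grantee="SYSTEM"
for obj in dba_indexes dba_ind_columns dba_objects dba_tables \
           dba_temp_files dba_users v_\$instance v_\$latchname v_\$parameter
do
  echo "GRANT SELECT ON ${obj} TO ${grantee};"
done
```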

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE=(select s.sql_hash_value from v$process p, v$session s where s.paddr=p.addr and p.spid=11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing

this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU, when combined with replication

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
----------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second. E.g. a value of 22 means 0.22 seconds (in 8i).
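Since the value is in centiseconds, converting to seconds is a divide-by-100; a sketch of the arithmetic using the example value above:

```shell
# Convert a "CPU used by this session" value (1/100ths of a second) to seconds.
value=22
awk -v v="$value" 'BEGIN { printf "%.2f seconds\n", v / 100 }'   # prints 0.22 seconds
```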

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr and
      s.sid = &p and
      s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
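The trick works because ADDR is a raw address rendered as hex: each hex digit carries 4 bits, so an 8-character address means a 32-bit binary and a 16-character address means 64-bit. The same arithmetic in shell (the sample address is made up):

```shell
# length(addr) * 4 = word size in bits; the address string is illustrative.
addr="0000000012345678"   # 16 hex characters, as seen on a 64-bit binary
echo "$(( ${#addr} * 4 ))-bits"   # prints 64-bits
```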

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;

FILE_NAME                                     AUT
--------------------------------------------- ---
/oradata1/CDOi1/data/SFWEB_TS.dbf             YES
/oradata1/CDOi1/data/SFWEB_IS.dbf             YES
/oradata1/CDOi1/data/ULOG_TS.dbf              YES
/oracle/CDOi1/data/users02.dbf                YES

Subject MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1   Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update (MS04-004 Cumulative security update for Internet Explorer), or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First verify that this package exists with the following query

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As an instance owner, on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4,016)

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to 174313.

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;
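The shape of that query (group, count, keep groups with more than two rows) can be mimicked on any flat list; a sketch with made-up (sid, hash_value) pairs:

```shell
# Mirror the GROUP BY / HAVING COUNT(*) > 2 logic over made-up data:
# three handles for one cursor, one handle for another.
printf '101 ABC123\n101 ABC123\n101 ABC123\n102 DEF456\n' |
  awk '{ count[$1" "$2]++ }
       END { for (k in count) if (count[k] > 2) print k, count[k] }'
# prints: 101 ABC123 3
```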

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMANORA) The Connection Manager configuration file (CMANORA) contains the parameters that specify preferences for using

Oracle Connection Manager CMANORA is located at $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows NT

Restore database /sybdata1/syb126/IQ/cso_ot/cso_ot.db from backup (backupsybasectsintcocso6csoasecso_ot):

rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sys.iqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
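The behavior described above can be seen with a quick, self-contained shell sketch (file names are made up):

```shell
# Create a scratch file to link to
echo "hello" > /tmp/original.txt

# Symbolic link: a pointer to the path; works across filesystems
ln -s /tmp/original.txt /tmp/soft.txt

# Hard link: another directory entry for the same inode; same filesystem only
ln /tmp/original.txt /tmp/hard.txt

# Removing the original breaks the symlink but not the hard link
rm /tmp/original.txt
cat /tmp/hard.txt                      # still prints "hello"
cat /tmp/soft.txt 2>/dev/null || echo "dangling symlink"
```

This is why hard links preserve data after the original name is removed: the data survives until the last link to the inode is gone.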

Note that most filesystems do not actually permit hard links to directories; ln will refuse to create one without special privileges, so for directories use a symbolic link instead.

To move all the objects in one tablespace to another (here from RAKESH to XYZ), just do the following:

SQL> spool <yourpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace XYZ;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log. Run that file:

SQL> @<yourpath>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes (moving a segment invalidates its indexes) and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount; it allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
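As a rough back-of-the-envelope illustration of that 5% rule (the function name and the fixed 5% figure are assumptions for this sketch; the real algorithm is undocumented and version-dependent):

```python
def per_session_pga_limit_kb(pga_aggregate_target_bytes, pct=0.05):
    """Approximate per-session work-area ceiling, in KB, under the ~5% rule."""
    return int(pga_aggregate_target_bytes * pct / 1024)

# e.g. with pga_aggregate_target = 2400M, one session gets roughly 120M
print(per_session_pga_limit_kb(2400 * 1024 * 1024))  # 122880
```

With the pga_aggregate_target of 2516582400 bytes shown in the v$pgastat listing below, this works out to about 120 MB per session.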

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE        UNIT
---------------------------------------- ------------ ------------
aggregate PGA auto target                   829440000 bytes
aggregate PGA target parameter             2516582400 bytes
bytes processed                            2492928000 bytes
cache hit percentage                            86.31 percent
extra bytes read/written                    395366400 bytes
global memory bound                         125747200 bytes
maximum PGA allocated                      2666188800 bytes
maximum PGA used for auto workareas          17203200 bytes
maximum PGA used for manual workareas        52531200 bytes
over allocation count                               0
PGA memory freed back to OS                 675020800 bytes
total freeable PGA memory                     6553600 bytes
total PGA allocated                        2395750400 bytes
total PGA inuse                            1528320000 bytes
total PGA used for auto workareas                   0 bytes
total PGA used for manual workareas                 0 bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED EXTRA  ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR     ADV BYTES PROCESSED  BYTES RW      CACHE HIT     ALLOC COUNT
---------- ---------- --- ---------------- ------------- ------------- --------------
  12582912        0.5 ON          17250304             0        100.00              3
  18874368       0.75 ON          17250304             0        100.00              3
  25165824        1.0 ON          17250304             0        100.00              0
  30198784        1.2 ON          17250304             0        100.00              0
  35231744        1.4 ON          17250304             0        100.00              0
  40264704        1.6 ON          17250304             0        100.00              0
  45297664        1.8 ON          17250304             0        100.00              0
  50331648        2.0 ON          17250304             0        100.00              0
  75497472        3.0 ON          17250304             0        100.00              0
 100663296        4.0 ON          17250304             0        100.00              0
 150994944        6.0 ON          17250304             0        100.00              0
 201326592        8.0 ON          17250304             0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of current PGA usage across all processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Login to the db as sysdba.

2. SQL> show parameter audit_trail   --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;   --> removes any audit trail data residing in the table

4. SQL> audit table;   --> starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%';   --> this query gives you the username along with the userhost from where that user is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

Dbspaces:

system
temp             1000 MB
iq_system_main   2000 MB
iq_system_main2  1000 MB
iq_system_main3  5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
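A tiny, hypothetical simulation (the block names and change sets are invented) of the two strategies just described:

```python
# Toy model: day 0 is the level-0 (full) backup; each later day changes one new block.
changed = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: {"d"}}

def differential(day):
    """Differential incremental: only blocks changed since the previous backup."""
    return set(changed[day])

def cumulative(day):
    """Cumulative incremental: every block changed since the level-0 backup."""
    blocks = set()
    for d in range(1, day + 1):
        blocks |= changed[d]
    return blocks

print(sorted(differential(3)))  # ['c']  (small backup; restore needs days 1-3)
print(sorted(cumulative(3)))    # ['a', 'b', 'c']  (bigger backup; restore needs only this one)
```

The trade-off the OTN text describes falls straight out of the model: cumulative backups grow with each day but a restore needs only the latest one plus the level 0.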

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

/opt/oracle10g/rdbms/admin/utlrp.sql

Run the following scripts to uninstall the JVM:

rdbms/admin/catnoexf.sql
rdbms/admin/rmaqjms.sql
rdbms/admin/rmcdc.sql
xdk/admin/rmxml.sql
javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE - Database

1. su - syb

2. dscp

3. open

4. listall

5. isql -Usa -Sddm   (database name)

6. sp_who

7. go

8. shutdown with nowait

9. /Sybase/syb125/ASE-12_5/install

10. startserver -f RUN_gsms

online database gem_curr

11. sp_helpdb

12. sp_configure

13. sp_configure "parameter", newvalue

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session ( v_sid number, v_serial number ) as
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
/

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7; Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below.

Change the listener and database services' "Log On" user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME? (check /var/opt/oracle -- Install.loc)

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
       b.logon_time, 'SESSION WAIT', sw.*
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = '<object_name>'
  and a.session_id = b.sid
  and status = 'ACTIVE'
  and sw.sid = b.sid;

spcreate.sql / spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i; export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
EOF
done

optinfoallinfo

For HP-UX filesystem extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS       All object grants where the user or PUBLIC is grantee
ALL_TAB_PRIVS_MADE  All object grants made by the user or on user-owned objects
ALL_TAB_PRIVS_RECD  All object grants to the user or PUBLIC
DBA_SYS_PRIVS       System privileges granted to users and roles
DBA_ROLES           List of all roles in the database
DBA_ROLE_PRIVS      Roles granted to users and to other roles
ROLE_ROLE_PRIVS     Roles granted to other roles
ROLE_SYS_PRIVS      System privileges granted to roles
ROLE_TAB_PRIVS      Table privileges granted to roles
SESSION_PRIVS       All privileges currently available to the user
SESSION_ROLES       All roles currently available to the user
USER_SYS_PRIVS      System privileges granted to the current user
USER_TAB_PRIV       Grants on objects where the current user is grantee, grantor, or owner
DBA_TAB_PRIVS       All object grants in the database

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes/1024/1024 allocated_mb,
       ((df.bytes/1024/1024) - NVL(SUM(dfs.bytes)/1024/1024, 0)) used_mb,
       NVL(SUM(dfs.bytes)/1024/1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

To time a raw copy: date; dd if=<input file> of=<output file>; date
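A runnable sketch of that pattern, with placeholder file names, wrapping the copy in two date calls to see how long it took; cmp then verifies the copy:

```shell
# Make a small input file first (stands in for the real source)
dd if=/dev/zero of=/tmp/source.img bs=1024 count=4096 2>/dev/null

date                                  # start time
dd if=/tmp/source.img of=/tmp/copy.img bs=1024
date                                  # end time

# Verify the copy is byte-identical
cmp /tmp/source.img /tmp/copy.img && echo "copy verified"
```

A larger bs (e.g. bs=1M) usually speeds up dd considerably, since it makes fewer read/write calls.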

isainfo -v   --> reports whether the OS is 32-bit or 64-bit

1023720911

isql -Udba -Scso_otpwSQL

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

1023720469

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the session details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the file "oracle" in $ORACLE_HOME/bin has permissions 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x

Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and %ORACLE_HOME%\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 Minutes to complete

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near '...'.
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------- -------- ------------ ------------- ---------- ------- --------
 number of devices   10       36           60            60         number  dynamic

(1 row affected) (return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------- -------- ------------ ------------- ---------- ------- --------
 number of devices   10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = 'gem_hist_data7',
physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
size = '1600M'
go

This query is used to find out the object name and lock holder:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find out the locked object and the SQL being run:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
  and c.sid = b.session_id
  and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line.

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
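Putting those five fields together, a hedged example entry (the script path is illustrative) that would run a backup script at 2:30 AM every Sunday:

```shell
# minute hour day-of-month month day-of-week  command
30 2 * * 0 /home/oracle/scripts/cold_backup.sh > /tmp/cold_backup.log 2>&1
```

An asterisk in a field matches all legal values, so this line fires only when minute=30, hour=2, and day-of-week=0 (Sunday) all match.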

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO
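Once monitoring has flagged an index as unused, it can be dropped. A minimal sketch, reusing the index name from the sample output above (owner and name are illustrative):

```sql
-- stop collecting usage data, then remove the unused index
ALTER INDEX customer_last_name_idx NOMONITORING USAGE;
DROP INDEX customer_last_name_idx;
```

Dropping an index cannot be undone short of rebuilding it, so confirm with the application owners before removing anything.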

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books) and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges:

Perform STARTUP and SHUTDOWN operations
CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege
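A minimal sketch of a SYSOPER session exercising these privileges (the connect string is illustrative):

```sql
CONNECT sys/password AS SYSOPER
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN;
```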

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_character_set>

The database name is optional The character set name should be specified without quotes for example

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file.
2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>') AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>') AND table_name = UPPER('<table_name>');

May 23 If you want to know about database size just calculate

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj (http://dbataj.blogspot.com)

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.
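The DATAFILE + CONTROL FILE + REDO LOG formula above can be expressed as a single query. A sketch, assuming 10g-era v$ views (v$controlfile exposes block_size and file_size_blks there; on 9i the control file size must be obtained another way) and DBA-level access:

```sql
SELECT (SELECT SUM(bytes) FROM v$datafile)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
     + (SELECT SUM(bytes * members) FROM v$log) AS total_bytes
FROM dual;
```

Add v$tempfile to the sum if temporary files should be counted as well.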

Oracle Managed Files (OMF)

OMF simplifies the creation of databases as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='c:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.
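The add/drop cycle described above can be sketched as follows (the group number is illustrative, and assumes the DB_CREATE_ONLINE_LOG_DEST_n parameters are set):

```sql
-- Oracle names the members and picks the next group number
ALTER DATABASE ADD LOGFILE;

-- removes the group and its members, and deletes the OS files
ALTER DATABASE DROP LOGFILE GROUP 3;
```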

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation, or assigned afterwards:

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS, which will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.
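The DBMS_SPACE.SPACE_USAGE procedure mentioned above takes the segment identity plus an OUT parameter pair (blocks, bytes) for each fill level. A sketch, assuming a table SCOTT.EMP residing in an ASSM tablespace and the privileges to analyze it:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_unf  NUMBER; l_unf_b  NUMBER;  -- unformatted blocks/bytes
  l_fs1  NUMBER; l_fs1_b  NUMBER;  -- 0-25% free
  l_fs2  NUMBER; l_fs2_b  NUMBER;  -- 25-50% free
  l_fs3  NUMBER; l_fs3_b  NUMBER;  -- 50-75% free
  l_fs4  NUMBER; l_fs4_b  NUMBER;  -- 75-100% free
  l_full NUMBER; l_full_b NUMBER;  -- full blocks/bytes
BEGIN
  DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
    l_unf, l_unf_b, l_fs1, l_fs1_b, l_fs2, l_fs2_b,
    l_fs3, l_fs3_b, l_fs4, l_fs4_b, l_full, l_full_b);
  DBMS_OUTPUT.PUT_LINE('Full blocks below HWM: ' || l_full);
END;
/
```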

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.
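For comparison, the session's own time zone can be queried alongside it; a minimal example:

```sql
SELECT DBTIMEZONE, SESSIONTIMEZONE FROM dual;
```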

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly, or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
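These columns can be pulled straight from the DBA_AUDIT_TRAIL view; a minimal sketch:

```sql
SELECT username, terminal, timestamp, owner, obj_name, action_name
FROM dba_audit_trail
ORDER BY timestamp;
```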

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
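A purge sketch, assuming rows older than 90 days may be removed; note the timestamp column name varies by version (timestamp# in 8i/9i, ntimestamp# in 10g), so check DESC sys.aud$ first:

```sql
-- archive SYS.AUD$ first if the records must be retained
DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```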

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');
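After recompiling a schema, it is worth checking what is still invalid; a minimal sketch:

```sql
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID';
```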

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12), and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example if used on an Oracle Applications instance execute using the APPS user

Access Privileges

To install it requires connection as a user with SYSDBA privilege

Once installed, it does not require special privileges, and it can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONSTXT to install the product

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE

Log onto an operating system session and navigate to the TraceAnalyzer directory

C:\> cd \oracle\ora92\TraceAnalyzer

Start SQLPlus

C:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQLPlus

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query for find session details

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process
------------------------
Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
----------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used
Doc ID: Note:269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------
Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----
This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------
- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------
- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----
This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix

shyshyshy

It is possible to address this issue by applying the Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.
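The edit described in step 2 can be scripted. A minimal sketch using a sample file (the /tmp path is illustrative, not the real Oracle home):

```shell
# Create a sample httpd.conf fragment (illustrative path)
cat > /tmp/httpd.conf <<'EOF'
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
EOF

# Comment out the old directive and turn KeepAlive off (GNU sed)
sed -i.bak 's/^KeepAlive On$/# KeepAlive On\nKeepAlive Off/' /tmp/httpd.conf

grep '^KeepAlive' /tmp/httpd.conf
```

The original directive is kept as a comment, matching the note's marker style, so the change is easy to back out.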

References

shyshyshyshyshyshyshyshyshyshy

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 282007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne   Posts: 4,016   Registered: 5/27/99

Re: no of open cursor   Posted: Aug 26, 2007 10:33 PM   in response to: 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google:

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database /sybdata1/syb126/IQ/cso_ot/cso_ot.db from backup:

restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile
sp_iqstatus
stop_asiq

restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
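The difference between the two link types can be exercised end-to-end in a scratch directory (all paths here are illustrative):

```shell
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
echo "hello" > original.txt

# Symbolic link: a pointer to the name; dangles if the target is removed
ln -s original.txt soft.txt

# Hard link: a second directory entry for the same inode; survives
# removal of the original name
ln original.txt hard.txt

rm original.txt
cat hard.txt                                  # still prints "hello"
cat soft.txt 2>/dev/null || echo "dangling"   # symlink now points nowhere
```

This is why a hard link is "only a reference, not a copy": both names share one inode, and the data lives until the last name is removed.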

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that many filesystems do not allow ordinary users to hard-link directories.)

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <your_path>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log. Run it:

SQL> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session (<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003, Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET      ESTIMATED        EXTRA     ESTIMATED PGA ESTIMATED OVER
FOR EST      FACTOR     ADV  BYTES PROCESSED  BYTES RW  CACHE HIT     ALLOC COUNT
------------ ---------- ---  ---------------- --------- ------------- --------------
12582912     .5         ON   17250304         0         100.00        3
18874368     .75        ON   17250304         0         100.00        3
25165824     1.0        ON   17250304         0         100.00        0
30198784     1.2        ON   17250304         0         100.00        0
35231744     1.4        ON   17250304         0         100.00        0
40264704     1.6        ON   17250304         0         100.00        0
45297664     1.8        ON   17250304         0         100.00        0
50331648     2.0        ON   17250304         0         100.00        0
75497472     3.0        ON   17250304         0         100.00        0
100663296    4.0        ON   17250304         0         100.00        0
150994944    6.0        ON   17250304         0         100.00        0
201326592    8.0        ON   17250304         0         100.00        0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select
    max(pga_used_mem)  max_pga_used_mem,
    max(pga_alloc_mem) max_pga_alloc_mem,
    max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
    sum(pga_used_mem)  sum_pga_used_mem,
    sum(pga_alloc_mem) sum_pga_alloc_mem,
    sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3. Else, to enable the audit trail:
(a) shutdown immediate
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;    -- removes any audit trail data residing in the table
   SQL> audit table;       -- starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';
   -- this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp            1000MB
iq_system_main  2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.
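The "changed since the last backup" bookkeeping described above can be sketched in shell with a timestamp marker file and find -newer (all paths and file names are illustrative):

```shell
mkdir -p /tmp/incrdemo && cd /tmp/incrdemo
echo a > f1; echo b > f2

# "Full backup": copy everything, then drop a timestamp marker
mkdir -p full && cp f1 f2 full/
touch .last_backup

sleep 1            # ensure a later mtime on subsequent changes
echo changed >> f2 # only f2 changes after the full backup

# "Incremental backup": copy only files newer than the marker
mkdir -p incr
find . -maxdepth 1 -type f -newer .last_backup -exec cp {} incr/ \;

ls incr/   # contains f2 only
```

Refreshing the marker after every backup gives incremental behavior; leaving it at the last full backup gives the cumulative/differential behavior described above.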

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


online database gem_curr

11. sp_helpdb
12. sp_configure
13. sp_configure "parameter", <new value>

vgdisplay -v vg02 | grep "LV Name" | more

To truncate the transaction log of the table's database:

dump tran test_saatchi with truncate_only
sp_helpdb test_saatchi

cd $SYBASE
more interfaces

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics, including the number of rows in each table:

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session (v_sid number, v_serial number) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;

HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.10.137:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracle\Home2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7, as Oracle recommends at least version 9.2.0.3. Anyway, you can try the fix below:

Change the listener and database services "Log On" user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name> without the single quotes, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Maybe you use a double ORACLE_HOME ('/var/opt/oracle' install location).

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested, b.logon_time, 'SESSION WAIT', sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name = '' and a.session_id = b.sid and status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
  ORACLE_SID=$i; export ORACLE_SID
  sqlplus "/ as sysdba" <<EOF
  select sum(bytes)/1024/1024 from dba_data_files;
  exit
EOF
done

optinfoallinfo

For HP-UX filesystem extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls  2097152 1457349 610113  70%  /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
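Wrapping dd between two date calls, as above, gives a crude elapsed-time measure for the copy. A runnable sketch with throwaway sizes and paths (all illustrative):

```shell
date
# Copy 1 MiB of zeros into a scratch file: 1024 blocks of 1024 bytes
dd if=/dev/zero of=/tmp/dd_demo.bin bs=1024 count=1024 2>/dev/null
date

ls -l /tmp/dd_demo.bin   # size is bs * count = 1048576 bytes
```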

isainfo -v    (output shows whether the OS is 32-bit or 64-bit)
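isainfo is Solaris-specific. A similar check on other POSIX systems (an addition here, not from the original note) is getconf LONG_BIT:

```shell
# Prints 32 or 64 depending on the word size of the environment
getconf LONG_BIT
```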

10.237.209.11

isql -Udba -Scso_ot    (enter the password at the prompt to reach SQL>)

For starting and stopping the database, script: /sybdata1/syb126/IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.

\\10.237.51.64\Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, then I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file "oracle" in $ORACLE_HOME/bin has the permissions 6751. If not:

1. Log in as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x

Start up the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid
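The 6751 mode combines the setuid and setgid bits (the leading 6) with rwxr-x--x permissions (751); its effect can be inspected on any scratch file (the file name here is illustrative):

```shell
touch /tmp/perm_demo
chmod 6751 /tmp/perm_demo

# ls shows "s" in place of "x" for owner and group when the
# setuid/setgid bits are set: -rwsr-s--x
ls -l /tmp/perm_demo | awk '{print $1}'
```

On the real oracle binary this is what makes local connections run with the oracle owner's privileges regardless of the invoking user.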

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
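The intent of the command above is a formatted timestamp; a sketch using the same format specifiers (the `-d @0` demonstration is GNU-date-specific):

```shell
# Current date/time with explicit specifiers (%n is a newline)
date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

# Deterministic demonstration: format the Unix epoch in UTC (GNU date only)
date -u -d @0 '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
```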

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete.

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near...
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of devices              10          36          60           60          number               dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices",70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of devices              10          44          70          70           number               dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find the locked object and the SQL being run:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX crontab

NAME
  crontab - user crontab file

SYNOPSIS
  crontab [file]
  crontab -r
  crontab -l

DESCRIPTION
  crontab copies the specified file, or standard input if no file is
  specified, into a directory that holds all users' crontab files (see
  cron(1M)). The -r option removes a user's crontab from the crontab
  directory. crontab -l lists the crontab file for the invoking user.

  Users are permitted to use crontab if their names appear in the file
  /usr/lib/cron/cron.allow. If that file does not exist, the file
  /usr/lib/cron/cron.deny is checked to determine if the user should be
  denied access to crontab. If neither file exists, only root is
  allowed to submit a job. If only cron.deny exists and is empty,
  global usage is permitted. The allow/deny files consist of one user
  name per line.

  A crontab file consists of lines of six fields each. The fields are
  separated by spaces or tabs. The first five are integer patterns that
  specify the following:

    minute (0-59)
    hour (0-23)
    day of the month (1-31)
    month of the year (1-12)
    day of the week (0-6, with 0=Sunday)
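Putting the five fields together, a sample entry (the backup script path is hypothetical):

```shell
# min hour day-of-month month day-of-week command
# run a backup script at 02:30 every Sunday:
line='30 2 * * 0 /home/oracle/scripts/full_backup.sh'
echo "$line" | awk '{print NF}'   # 6 fields: 5 time fields plus the command
```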

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns, and this over-allocation can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify the indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query to find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next, we wait until a significant amount of SQL has executed on the database, and then query the new v$object_usage view:

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a column called USED, which will be set to YES or NO. Sadly, it will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

SYSOPER privileges:

- Perform STARTUP and SHUTDOWN operations
- CREATE SPFILE
- ALTER DATABASE OPEN/MOUNT/BACKUP
- ALTER DATABASE ARCHIVELOG
- ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
- Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if, and only if, each and every codepoint in the source character set is available in the target character set with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also
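The allow/deny decision cron makes can be sketched in shell; the directory is parameterized so the logic can be exercised outside /usr/lib/cron (the function name and the demo users are hypothetical):

```shell
# Decide crontab access per the cron(1M) rules:
# cron.allow exists -> user must be listed; else cron.deny exists ->
# user must NOT be listed; else only root may submit jobs.
cron_access() {   # usage: cron_access <user> <cron-config-dir>
  if [ -f "$2/cron.allow" ]; then
    grep -qx "$1" "$2/cron.allow" && echo allowed || echo denied
  elif [ -f "$2/cron.deny" ]; then
    grep -qx "$1" "$2/cron.deny" && echo denied || echo allowed
  else
    echo root-only
  fi
}

d=$(mktemp -d)
echo oracle > "$d/cron.allow"
cron_access oracle "$d"    # allowed
cron_access webuser "$d"   # denied
rm -r "$d"
```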

How to calculate the database size

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>') AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this figure, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>') AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
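As a toy illustration of that sum (all three numbers below are made-up placeholders, not values from the note):

```shell
# database size = datafiles + control files + redo logs (sizes in MB)
datafiles_mb=2048
controlfiles_mb=20
redologs_mb=300
echo "$((datafiles_mb + controlfiles_mb + redologs_mb)) MB"   # 2368 MB
```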

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation, or assigned afterwards:

CREATE DATABASE TSH1
  DEFAULT TEMPORARY TABLESPACE dts1
  TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert)  Posted 1122006  Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given us a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this generate no redo, contention is reduced as well. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED, meaning that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space weighed against the performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist and inserting into it until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, the block should be placed back on the freelist and become available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against space usage (a high PCTUSED to keep it under control).

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to blocks other than the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for, with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method Oracle uses to keep track of the availability of free space in a block is much more granular than the binary nature of the old "on the freelist or off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either the block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual;  -- used to determine the time zone of the database

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;
AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;
AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL and DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to any user, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba"  (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');
EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046.

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher
Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: Platform independent
Date Created: Version 243, May 2007
Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with the SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS (conn / as sysdba)
6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run Trace Analyzer

Assuming the trace file is named orabase_ora_1708.trc and is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus. The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/oracle: this directory was owned by root:root on the failing Linux box and by oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/oracle

bootinfo -K  (AIX: shows whether the kernel is 32-bit or 64-bit)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving they spin (wait) until the I/O operation completes. The spinning is a CPU operation, so slowness or failures in the async I/O operations show up like this. You control DBWR by setting either the db_writer_processes or the dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems. If lowering it helps the contention on your processors but you take an overall performance hit, you may need to add CPUs to your server before raising it back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
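Since the advice above ties db_writer_processes to the CPU count, a quick way to find that count from the shell (a sketch; `nproc` is the Linux shortcut for the same number):

```shell
# Number of online processors: the suggested upper bound for db_writer_processes
getconf _NPROCESSORS_ONLN
```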

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see which job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An oracle (user) process (Back)-----------------------------------------

Large Queries Procedure compilation or execution Space management and Sorting are examples of operations with very high CPU usage Besides the UNIX or NT way to find a CPU intensive process Oracle has its own statistics The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
----------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via the CONSUMED_CPU_TIME column of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in
      (select statistic# from v$statname where name = 'CPU used by this session')
  and se.sid = ss.sid
  and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
  and s.sid = &p
  and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr) * 4 || '-bits' word_length
FROM v$process
WHERE ROWNUM = 1;
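The query works because V$PROCESS.ADDR is a raw address printed in hex, so its character length times 4 is the word size in bits. A quick Python sketch of the same arithmetic (the sample addresses are illustrative, not from a real instance):

```python
def word_length_bits(addr_hex: str) -> int:
    # Each hex digit encodes 4 bits, mirroring LENGTH(addr) * 4 in the query.
    return len(addr_hex) * 4

# A 32-bit Oracle binary exposes 8-hex-digit addresses, a 64-bit one 16.
print(word_length_bits("8E6BA2F8"))           # 32
print(word_length_bits("000000008E6BA2F8"))   # 64
```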

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1    Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078,
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home)
                 available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
                 the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available
                 to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
                 the request.
  MOD_OC4J_0207: In internal process table, failed to find an available
                 oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user:      su - db2inst1
2. Go to the sqllib directory:  cd sqllib
3. Stop the instance:           db2stop
4. Start the instance (as the instance owner on the host running DB2):  db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne   Posts: 4016   Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to 174313

> How to resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 desc;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
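The "more than two handles for the same SQL" heuristic in the query above can be sketched outside the database; the session IDs and SQL addresses below are invented for illustration:

```python
from collections import Counter

# Open cursor handles as (sid, sql_address) pairs, as V$OPEN_CURSOR would list them.
open_cursors = [
    (101, "0xA1"), (101, "0xA1"), (101, "0xA1"), (101, "0xA1"),  # leaked: 4 handles, same SQL
    (102, "0xB2"), (102, "0xC3"),                                # normal usage
]

# Count handles per (session, statement) and keep only suspicious ones,
# mirroring GROUP BY ... HAVING COUNT(*) > 2 in the SQL.
copies = Counter(open_cursors)
suspects = {key: n for key, n in copies.items() if n > 2}
print(suspects)  # {(101, '0xA1'): 4} -> session 101 is likely leaking cursors
```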

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, V$SESSION_WAIT & V$SYSTEM_EVENT.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network and, once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:
existing file first, destination file second. For example, to link the directory
/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file that appears just like a file, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the data is still reachable through the hard link; it is only lost once every link to it has been removed.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
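The behavioral difference between the two link types is easy to verify in a scratch directory (POSIX only; the file names are arbitrary):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "stuff")
with open(orig, "w") as f:
    f.write("data")

os.symlink(orig, os.path.join(d, "soft"))  # soft link: a pointer to the name
os.link(orig, os.path.join(d, "hard"))     # hard link: a second name for the same inode

os.remove(orig)  # delete the original name

hard_contents = open(os.path.join(d, "hard")).read()     # still readable
soft_resolves = os.path.exists(os.path.join(d, "soft"))  # False: the symlink dangles
soft_is_link = os.path.islink(os.path.join(d, "soft"))   # True: the link itself remains

print(hard_contents, soft_resolves, soft_is_link)  # data False True
```

The hard link keeps the data alive after the original name is removed, while the symbolic link is left pointing at nothing.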

The syntax for creating a hard link of a directory is the same. To create a hard link of
/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <your_path>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;'
     from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<your_path>/objects_move.log

Now check the objects in the XYZ tablespace:

SQL> SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background
processes:

sql_trace = true

To disable trace:

sql_trace = false

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do
everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
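A sketch of that per-session cap arithmetic, assuming the approximate 5% rule described above (the target value is an arbitrary example):

```python
def session_pga_cap_bytes(pga_aggregate_target: int, pct: float = 0.05) -> int:
    # Approximate per-session work-area limit: ~5% of PGA_AGGREGATE_TARGET,
    # the behavior attributed to _smm_max_size above.
    return int(pga_aggregate_target * pct)

one_gb = 1024 ** 3
print(session_pga_cap_bytes(one_gb))  # 53687091 bytes, roughly 51 MB per session
```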

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                      VALUE         UNIT
----------------------------------------  ------------  ------------
aggregate PGA auto target                 829440000     bytes
aggregate PGA target parameter            2516582400    bytes
bytes processed                           2492928000    bytes
cache hit percentage                      86.31         percent
extra bytes read/written                  395366400     bytes
global memory bound                       125747200     bytes
maximum PGA allocated                     2666188800    bytes
maximum PGA used for auto workareas       17203200      bytes
maximum PGA used for manual workareas     52531200      bytes
over allocation count                     0
PGA memory freed back to OS               675020800     bytes
total freeable PGA memory                 6553600       bytes
total PGA allocated                       2395750400    bytes
total PGA inuse                           1528320000    bytes
total PGA used for auto workareas         0             bytes
total PGA used for manual workareas       0             bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET           BYTES  ESTD EXTRA   ESTD PGA    ESTD OVER
   FOR EST      FACTOR  ADV  PROCESSED    BYTES RW  CACHE HIT  ALLOC COUNT
----------  ----------  ---  ---------  ----------  ---------  -----------
  12582912         0.5  ON    17250304           0     100.00            3
  18874368        0.75  ON    17250304           0     100.00            3
  25165824         1.0  ON    17250304           0     100.00            0
  30198784         1.2  ON    17250304           0     100.00            0
  35231744         1.4  ON    17250304           0     100.00            0
  40264704         1.6  ON    17250304           0     100.00            0
  45297664         1.8  ON    17250304           0     100.00            0
  50331648         2.0  ON    17250304           0     100.00            0
  75497472         3.0  ON    17250304           0     100.00            0
 100663296         4.0  ON    17250304           0     100.00            0
 150994944         6.0  ON    17250304           0     100.00            0
 201326592         8.0  ON    17250304           0     100.00            0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
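Reading the advice rows boils down to picking the smallest candidate target whose estimated over-allocation count is zero. A sketch, with rows mirroring the listing above:

```python
# (pga_target_for_estimate, estd_overalloc_count) pairs, as v$pga_target_advice
# would report them.
advice = [
    (12582912, 3),   # 12M: Oracle would have over-allocated 3 times
    (18874368, 3),   # 18M: same
    (25165824, 0),   # 24M: never forced past the limit
    (30198784, 0),
]

# Smallest target that never forces Oracle past the configured limit.
good = min(t for t, overalloc in advice if overalloc == 0)
print(good)  # 25165824 -> about 24 MB
```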

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

This displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail     --> checks if the audit trail is turned on

   If the output is:

   NAME           TYPE        VALUE
   -------------- ----------- ------
   audit_trail    string      DB

   then go to step 3, else:
   (a) shutdown immediate               [to enable the audit trail]
   (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
   (c) create spfile from pfile
   (d) startup

3. truncate table aud$;    --> to remove any audit trail data residing in the table
   SQL> audit table;       --> this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';

   This query gives you the username, along with the userhost from where the user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For
example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and
so on, until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do
more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes
called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1
backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed
since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed
for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because
they duplicate the work done by previous backups at the same level.
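The two incremental flavors can be sketched as set arithmetic over changed blocks; the day-by-day change sets below are invented for illustration:

```python
# Blocks changed on each day after the Sunday level-0 backup.
changed = {"mon": {1, 2}, "tue": {2, 3}, "wed": {4}}

# Differential incremental: only blocks changed since the previous backup (any level).
differential = [changed["mon"], changed["tue"], changed["wed"]]

# Cumulative incremental: all blocks changed since the level-0 backup,
# re-copying what earlier incrementals at the same level already took.
cumulative = [
    changed["mon"],
    changed["mon"] | changed["tue"],
    changed["mon"] | changed["tue"] | changed["wed"],
]

print([sorted(s) for s in differential])  # [[1, 2], [2, 3], [4]]
print([sorted(s) for s in cumulative])    # [[1, 2], [1, 2, 3], [1, 2, 3, 4]]
```

A restore from cumulative backups needs only the level 0 plus the latest cumulative piece; a restore from differential backups needs every piece taken since the level 0.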

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear for me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracleHome2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug I would recommend to apply patchset 9207 As oracle recommends at least 9203 versiyon Anyway you can try below fix

Change the listener and database services Log On user to domain user who is a member of the groups domain admin and ORA_DBA group The default setting is Local System Account

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS command prompt
- Run lsnrctl start <listener_name>, replacing <listener_name> with the name
- An OS error of 1060 will be seen (normal) as the service is missing
- The listener should start correctly, or the next logical error may display

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME? (/var/opt/oracle -- Install loc)

zfs set quota=10G datapool/zfsoracle

select oracle_username, os_user_name, locked_mode, object_name, object_type from v$locked_object a, dba_objects b where a.object_id = b.object_id;

SELECT DISTINCT b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested,
b.logon_time, 'SESSION WAIT', sw.*
FROM dba_ddl_locks a, v$session b, v$session_wait sw
WHERE name = '<object_name>' AND a.session_id = b.sid AND b.status = 'ACTIVE' AND sw.sid = b.sid;

@spcreate.sql
@spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i; export ORACLE_SID
sqlplus -s "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

optinfoallinfo

For HP-UX filesystem extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls 2097152 1457349 610113 70% /weblogic

ALL_TAB_PRIVS       - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE  - All object grants made by user or on user-owned objects
ALL_TAB_PRIVS_RECD  - All object grants to user or public
DBA_SYS_PRIVS       - System privileges granted to users and roles
DBA_ROLES           - List of all roles in the database
DBA_ROLE_PRIVS      - Roles granted to users and to other roles
ROLE_ROLE_PRIVS     - Roles granted to other roles
ROLE_SYS_PRIVS      - System privileges granted to roles
ROLE_TAB_PRIVS      - Table privileges granted to roles
SESSION_PRIVS       - All privileges currently available to user
SESSION_ROLES       - All roles currently available to user
USER_SYS_PRIVS      - System privileges granted to current user
USER_TAB_PRIVS      - Grants on objects where current user is grantee, grantor, or owner

DBA_TAB_PRIVS       - All object grants in the database

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
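A minimal runnable sketch of that timing pattern, copying 10 KB from /dev/zero to a scratch file (the /tmp path is illustrative only, not from the original note):

```shell
# Sandwich the copy between two date calls to see elapsed wall time
date
dd if=/dev/zero of=/tmp/dd_demo.out bs=1024 count=10 2>/dev/null
date

# The resulting file is bs * count = 10240 bytes
wc -c < /tmp/dd_demo.out
rm -f /tmp/dd_demo.out
```

For a large datafile copy, the difference between the two date stamps gives a rough throughput figure.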

isainfo -v  (output shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_ot -P<password>
SQL>

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

RECOVER DATABASE;
ALTER DATABASE OPEN;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the session details based on a Unix PID:
select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL

10.237.51.64 - Softwares

My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:
SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem
======================
Make sure the file "oracle" (in $ORACLE_HOME/bin) has permissions 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be -rwsr-s--x
Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid
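To see what mode 6751 (setuid + setgid) looks like without touching the real $ORACLE_HOME/bin/oracle binary, you can experiment on a scratch file (the /tmp name is made up for the demo):

```shell
# Create a scratch file and give it the mode the note prescribes
touch /tmp/oracle_perm_demo
chmod 6751 /tmp/oracle_perm_demo

# The permissions column should read -rwsr-s--x:
# 's' in the owner execute slot = setuid, 's' in the group slot = setgid
ls -l /tmp/oracle_perm_demo
rm -f /tmp/oracle_perm_demo
```

The setuid bit is what lets non-oracle OS users connect: the executable runs with the oracle owner's privileges regardless of who invokes it.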

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------You have specified the following settings

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 Minutes to complete

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ...
1> select name from sysconfigures where name like "%device%"
2> go

Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ...
1> select name from sysconfigures where name like "%device%"
2> go
 name

-------------------------------------------------------------------------------- number of devices

suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          36          60           60          number     dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          44          70           70          number     dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name="gem_hist_data7",
physname="/data/syb125/gem_hist/gem_hist_data7.dat",
size="1600M"
go

This query is used to find out the object name and lock ID:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine from v$locked_object a, v$session b, dba_objects c where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and the SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line

A crontab file consists of lines of six fields each. The fields are

separated by spaces or tabs. The first five are integer patterns that

specify the following

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
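The five time fields above can be read off a sample entry with awk; the backup script path below is hypothetical, used only to illustrate the layout:

```shell
# Sample crontab entry: run a (hypothetical) backup script at 02:30
# every Sunday. Fields: minute hour day-of-month month weekday command
line='30 2 * * 0 /home/oracle/scripts/cold_backup.sh'

# Split the entry into its five time fields plus the command
echo "$line" | awk '{print "minute="$1, "hour="$2, "day="$3, "month="$4, "weekday="$5, "cmd="$6}'
```

To install such an entry, put it in a file and run "crontab file", subject to the cron.allow/cron.deny checks described above.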

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple. Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME TABLE_NAME MON USED --------------- --------------- --- ---- CUSTOMER_LAST_NAME_IDX CUSTOMER YES NO

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (Complete recovery only. Any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA.)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set   New Character Set   New Character Set is strict superset?
US7ASCII                WE8ISO8859P1        yes
US7ASCII                AL24UTFFSS          yes
US7ASCII                UTF8                yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE; -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj (http://dbataj.blogspot.com) Jun 1 (13 hours ago): babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.
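As a rough sketch (not from the original note), the datafile + controlfile + redo log rule of thumb can be expressed as one query; the controlfile size columns here follow the 10g dictionary and should be checked against your version:

```sql
SELECT (SELECT SUM(bytes) FROM dba_data_files)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
     + (SELECT SUM(bytes * members) FROM v$log)
       AS total_db_bytes
FROM dual;
```

Add SUM(bytes) from dba_temp_files as well if you want temporary tablespaces counted.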

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement.

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type Format

Controlfiles: ora_%u.ctl

Redo Log Files: ora_%g_%u.log

Datafiles: ora_%t_%u.dbf

Temporary Datafiles: ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the SEGMENT SPACE MANAGEMENT AUTO clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
'/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
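For example, those fields of interest can be pulled with a minimal query against DBA_AUDIT_TRAIL (a sketch; run as a suitably privileged user):

```sql
SELECT username, terminal, timestamp, owner, obj_name, action_name
FROM dba_audit_trail
ORDER BY timestamp DESC;
```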

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS

2. sqlplus "/ as sysdba" - HP-UX/AIX

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID

Note: 224270.1
Type: DIAGNOSTIC TOOLS

Last Revision

Date 30-MAY-2007

Status PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

Output HTML report includes all the details found on TKPROF plus additional information normally requested and used for a transaction performance analysis Generated report is more readable and extensive than text format used on prior version of this tool and on current TKPROF

Product Name Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created Version 243 on May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema) it is executed from SQLPlus from

the schema owning the transaction that generated the raw SQL Trace

For example if used on an Oracle Applications instance execute using the APPS user

Access Privileges

To install it requires connection as a user with SYSDBA privilege

Once installed it does not require special privileges and it can be executed from

any schema user

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note: 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6 Performed the following grants to SYSTEM

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at \oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K (AIX: report kernel bitness)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where hash_value = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation, so slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If lowering this parameter does help the contention on your processors but you take an overall performance hit afterwards, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
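As a sketch of the tuning advice above (the value 2 is purely illustrative; match it to your CPU count, and note db_writer_processes is a static parameter):

```sql
-- db_writer_processes cannot be changed in memory; stage it in the spfile
-- and bounce the instance for it to take effect
ALTER SYSTEM SET db_writer_processes = 2 SCOPE = SPFILE;

-- after restart, verify the setting
SHOW PARAMETER db_writer_processes
```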

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

It is best to keep the version and patches up to date.

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different points of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
----------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1 Type: PROBLEM

Last Revision

Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination: home's service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination: home's service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll

-> Right click on the file

-> Select Properties

-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is: exporting operators, exporting referential integrity constraints, exporting triggers,

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
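To judge how close sessions actually come to the limit, a query along these lines can help (a sketch; it compares the 'opened cursors current' session statistic against the open_cursors parameter):

```sql
-- highest number of cursors any one session currently holds open,
-- side by side with the configured open_cursors limit
select max(a.value) as highest_open_cur,
       p.value      as max_open_cur
from   v$sesstat  a,
       v$statname b,
       v$parameter p
where  a.statistic# = b.statistic#
and    b.name = 'opened cursors current'
and    p.name = 'open_cursors'
group  by p.value;
```

If highest_open_cur keeps climbing toward max_open_cur, that points at a cursor leak rather than an undersized parameter.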

Werner

Billy Verreynne (Posts: 4,016; Registered: 5/27/99)

Re: no of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to: 174313


> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 desc;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj: for performance tuning, you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot


If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace XYZ;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<osuser>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount; it allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as 'connect internal' was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus CONNECT command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
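A minimal sketch of such a logon trigger (the account name LOAD_USER and the 100MB sort area are illustrative assumptions, not from the original note):

```sql
-- Switch one specific account to manual PGA management at logon,
-- so its import sessions get a large sort_area_size.
CREATE OR REPLACE TRIGGER set_manual_workarea
AFTER LOGON ON DATABASE
WHEN (USER = 'LOAD_USER')  -- hypothetical import account
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
  EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';
END;
/
```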

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000K. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
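As a sketch, the current value of a hidden parameter such as _smm_max_size can be read from the x$ fixed tables when connected as SYS (undocumented internals, so treat this as illustrative only):

```sql
-- x$ksppi holds parameter names, x$ksppcv the current values (SYS only)
select a.ksppinm  as parameter,
       b.ksppstvl as current_value
from   x$ksppi  a,
       x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm = '_smm_max_size';
```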

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic 'maximum PGA allocated' displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912    0.5  ON  17250304  0  100.00  3

18874368    0.75 ON  17250304  0  100.00  3

25165824    1    ON  17250304  0  100.00  0

30198784    1.2  ON  17250304  0  100.00  0

35231744    1.4  ON  17250304  0  100.00  0

40264704    1.6  ON  17250304  0  100.00  0

45297664    1.8  ON  17250304  0  100.00  0

50331648    2    ON  17250304  0  100.00  0

75497472    3    ON  17250304  0  100.00  0

100663296   4    ON  17250304  0  100.00  0

150994944   6    ON  17250304  0  100.00  0

201326592   8    ON  17250304  0  100.00  0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18MB pga_aggregate_target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25MB target this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
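Once a value has been chosen, it can be applied dynamically; a sketch (the 25M value simply echoes the advice-view example above):

```sql
-- pga_aggregate_target is dynamic; SCOPE=BOTH assumes an spfile is in use
ALTER SYSTEM SET pga_aggregate_target = 25M SCOPE = BOTH;

-- confirm the new setting
SHOW PARAMETER pga_aggregate_target
```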

There are other views that are also useful for PGA memory management

v$process

select
max(pga_used_mem) max_pga_used_mem,
max(pga_alloc_mem) max_pga_alloc_mem,
max(pga_max_mem) max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

The following displays the sum of all current PGA usage across processes:

select
sum(pga_used_mem) sum_pga_used_mem,
sum(pga_alloc_mem) sum_pga_alloc_mem,
sum(pga_max_mem) sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail  -> checks if the audit trail is turned on

If the output is:

NAME         TYPE        VALUE
------------ ----------- ------
audit_trail  string      DB

then go to step 3; else:
(a) shutdown immediate  [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile;
(d) startup

3. truncate table sys.aud$;  -> removes any audit trail data residing in the table

4. SQL> audit table;  -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  -> this query gives you the username along with the userhost from which the user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
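The two flavors described above map onto RMAN command syntax. The commands below are a sketch only (target connection details omitted); LEVEL 1 is differential by default, and the CUMULATIVE keyword switches the behavior:

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             # base for all incrementals
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential: since last level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative: since last level 0
```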

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
  • Guidelines for Using Partition-Level Import
  • Oracle Managed Files (OMF)
  • Managing Controlfiles Using OMF
  • Managing Redo Log Files Using OMF
  • Managing Tablespaces Using OMF
  • Default Temporary Tablespace
  • Auditing
  • Server Setup
  • Audit Options
  • View Audit Trail
  • Maintenance
  • Security
  • Oracle 10g linux TNS-12546 error
  • A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
  • A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
  • Oracle 9i Automatic PGA Memory Management
  • v$pgastat
  • v$pga_target_advice
  • v$process
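The symbolic vs. hard link behavior described above can be demonstrated with a small shell sketch (the scratch paths come from mktemp and are not part of the original notes):

```shell
#!/bin/sh
# Hard links share the inode with the original; symlinks only point at its path.
dir=$(mktemp -d)
echo "hello" > "$dir/original"

ln "$dir/original" "$dir/hardlink"     # hard link (same filesystem only)
ln -s "$dir/original" "$dir/symlink"   # symbolic link (works across filesystems)

rm "$dir/original"

cat "$dir/hardlink"                    # still prints: hello
cat "$dir/symlink" 2>/dev/null || echo "symlink is dangling"

rm -rf "$dir"
```

Because the hard link references the inode directly, the data survives deletion of the original name; the symlink is left pointing at a path that no longer exists.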

select ..., b.logon_time, sw.*   /* session wait */
from dba_ddl_locks a, v$session b, v$session_wait sw
where name = ... and a.session_id = b.sid and b.status = 'ACTIVE' and sw.sid = b.sid;

spcreate.sql / spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i; export ORACLE_SID
sqlplus "/ as sysdba" <<EOF
select sum(bytes)/1024/1024 from dba_data_files;
exit
EOF
done

optinfoallinfo

For HP-UX file extend:

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls  2097152  1457349  610113  70%  /weblogic

ALL_TAB_PRIVS        All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE   All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD   All object grants to user or public
DBA_SYS_PRIVS        System privileges granted to users and roles
DBA_ROLES            List of all roles in the database
DBA_ROLE_PRIVS       Roles granted to users and to other roles
ROLE_ROLE_PRIVS      Roles granted to other roles
ROLE_SYS_PRIVS       System privileges granted to roles
ROLE_TAB_PRIVS       Table privileges granted to roles
SESSION_PRIVS        All privileges currently available to user
SESSION_ROLES        All roles currently available to user
USER_SYS_PRIVS       System privileges granted to current user
USER_TAB_PRIV        Grants on objects where current user is grantee, grantor, or owner

DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR (df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-table

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date; dd if=<input file> of=<output file>; date
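Bracketing the dd copy with date, as the line above suggests, gives a crude elapsed-time measurement. A runnable sketch, with temporary files standing in for the real input and output:

```shell
#!/bin/sh
# Time a dd copy by printing timestamps before and after it.
in=$(mktemp); out=$(mktemp)
printf 'some data to copy\n' > "$in"

date
dd if="$in" of="$out" bs=4k 2>/dev/null
date

cmp -s "$in" "$out" && echo "copy verified"
rm -f "$in" "$out"
```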

isainfo -v    -- shows whether the OS is 32-bit or 64-bit
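isainfo is Solaris-specific. As a portable alternative (my addition, not part of the original note), getconf reports the word size on most Unix-like systems:

```shell
# Prints 32 or 64 depending on whether the OS userland is 32-bit or 64-bit.
getconf LONG_BIT
```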

10.237.209.11

isql -Udba -Scso_otpwSQL

Script to start and stop the database: /sybdata1/syb126IQ/cso_ot

Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
  MAXLOGFILES 16
  MAXLOGMEMBERS 2
  MAXDATAFILES 30
  MAXINSTANCES 1
  MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL

\\10.237.51.64\Softwares

My problem: When I don't use tnsnames and want to use the IPC protocol, I get the following error:
SQL> connect myuserid/mypassword
ERROR:

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file "oracle" in $ORACLE_HOME/bin has permissions 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle; they should be: -rwsr-s--x
6. Startup the db and try connecting as dba or a non-oracle user.
If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with correct setuid

date +"DATE: %m/%d/%y%nTIME: %H:%M:%S"

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 Minutes to complete

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ...
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          36          60           60          number     dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          44          70           70          number     dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name="gem_hist_data7",
physname="/data/syb125/gem_hist/gem_hist_data7.dat",
size="1600M"
go

This Query is used to find out the object name and lock id

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the below scripts:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and the SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id and c.sid = b.session_id and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
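Putting the five time fields together with a command, a sample crontab entry (the script path is hypothetical) that runs at 02:30 every Sunday:

```
30 2 * * 0 /home/oracle/scripts/full_backup.sh > /tmp/full_backup.log 2>&1
```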

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates, and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor.sql

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME TABLE_NAME MON USED --------------- --------------- --- ---- CUSTOMER_LAST_NAME_IDX CUSTOMER YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if, and only if, each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset, you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)
OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='c:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF
During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF
When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF
As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert) | Posted 1122006 | Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist and inserting into it until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something here that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual;    -- used to determine the time zone of a database

Auditing
The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup
To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options
Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail
The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Subject: Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1  Type: DIAGNOSTIC TOOLS

Last Revision Date: 30-MAY-2007  Status: PUBLISHED

Abstract:

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by current TKPROF.

Product Name: RDBMS  Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges:

To install, it requires connection as a user with the SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer:

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer:

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd c:\oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:
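A quick way to see what DBA_JOBS_RUNNING is executing is to join it to DBA_JOBS (a sketch; column choice is illustrative):

```sql
-- Sketch: currently running jobs with their submitting user and job text
SELECT r.sid, r.job, j.log_user, j.what
FROM   dba_jobs_running r, dba_jobs j
WHERE  r.job = j.job;
```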

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process
-----------------------------------------
Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The Following Query can give a good idea of what the session is doing and how much CPU they have consumed

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr and s.sid = &p and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007  Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update (MS04-004 Cumulative security update for Internet Explorer), or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can do this as the SYS user too; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance restart:

1. Login as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
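To see how close sessions actually get to that limit, the 'opened cursors current' session statistic can be compared to the parameter (a sketch; the column alias names are illustrative):

```sql
-- Sketch: open cursor count per session vs. the open_cursors limit
SELECT s.sid, s.value opened_cursors,
       (SELECT value FROM v$parameter
        WHERE  name = 'open_cursors') max_allowed
FROM   v$sesstat s, v$statname n
WHERE  n.name = 'opened cursors current'
AND    s.statistic# = n.statistic#
ORDER  BY s.value DESC;
```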

Werner

Billy Verreynne  Posts: 4,016  Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

Reply

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

> spool <urpath>/objects_move.log
> select 'alter '||segment_type||' '||segment_name||' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

rebuild the indexes

and gather statistics for those objects


How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
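Such a logon trigger might look like this (a sketch: the account name IMP_USER and the 100M sort_area_size are illustrative assumptions, not values from the original note):

```sql
-- Sketch: force manual workarea management for a bulk-import account
CREATE OR REPLACE TRIGGER imp_manual_pga
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMP_USER' THEN  -- hypothetical import account
    EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';
  END IF;
END;
/
```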

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_for_estimate view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912     .50  ON  17250304          0  100.00  3
18874368     .75  ON  17250304          0  100.00  3
25165824    1.00  ON  17250304          0  100.00  0
30198784    1.20  ON  17250304          0  100.00  0
35231744    1.40  ON  17250304          0  100.00  0
40264704    1.60  ON  17250304          0  100.00  0
45297664    1.80  ON  17250304          0  100.00  0
50331648    2.00  ON  17250304          0  100.00  0
75497472    3.00  ON  17250304          0  100.00  0
100663296   4.00  ON  17250304          0  100.00  0
150994944   6.00  ON  17250304          0  100.00  0
201326592   8.00  ON  17250304          0  100.00  0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M target this would not have happened.
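As a sketch of that selection logic, the following (hypothetical rows modeled on the pga_target_for_estimate and estd_overalloc_count columns) picks the smallest advised target with no estimated over-allocations:

```python
# Hypothetical rows fetched from v$pga_target_advice:
# (pga_target_for_estimate, estd_overalloc_count)
advice_rows = [
    (12582912, 3),
    (18874368, 3),
    (25165824, 0),
    (30198784, 0),
]

def smallest_safe_target(rows):
    """Return the smallest estimated target with zero over-allocations."""
    safe = [target for target, overalloc in rows if overalloc == 0]
    return min(safe) if safe else None

print(smallest_safe_target(advice_rows))
```

With the sample rows above, the 25165824-byte (24M) target is the smallest one Oracle estimates it could honor without over-allocating.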

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem) max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem) max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem) sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem) sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail   ---> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:
(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;   ---> removes any audit trail data already residing in the table

4. SQL> audit table;   ---> starts auditing events pertaining to tables

5. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';   ---> gives you the username, along with the userhost from where that user is connected
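The filtering done by that audit-trail query can be mirrored in a small sketch over hypothetical dba_audit_trail rows (the row values here are made up for illustration):

```python
def drop_table_events(audit_trail):
    """Pull the columns of interest for DROP TABLE actions from
    hypothetical dba_audit_trail rows represented as dicts."""
    return [(r["username"], r["userhost"], r["timestamp"])
            for r in audit_trail if r["action_name"] == "DROP TABLE"]

rows = [
    {"username": "FIREID", "userhost": "ws01",
     "timestamp": "18-apr-2007 12:28:00", "action_name": "DROP TABLE"},
    {"username": "SCOTT", "userhost": "ws02",
     "timestamp": "18-apr-2007 12:30:00", "action_name": "SELECT"},
]
print(drop_table_events(rows))
```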

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
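As an illustration of the trade-off, here is a toy model (not RMAN itself) comparing how many changed blocks a differential vs a cumulative level 1 incremental would copy each day after a Sunday level 0 backup; the per-day change sets are invented:

```python
# Toy model: blocks changed on each day after a level 0 backup.
changes = {"mon": {1, 2}, "tue": {3}, "wed": {4, 5}}

def differential_sizes(changes):
    """Differential level 1: copies only blocks changed since the
    previous backup (level 1 or level 0)."""
    return [len(blocks) for blocks in changes.values()]

def cumulative_sizes(changes):
    """Cumulative level 1: copies all distinct blocks changed since
    the level 0 backup, so each backup repeats earlier work."""
    seen, sizes = set(), []
    for blocks in changes.values():
        seen |= blocks
        sizes.append(len(seen))
    return sizes

print(differential_sizes(changes), cumulative_sizes(changes))
```

The differential backups stay small ([2, 1, 2] blocks) while the cumulative ones grow ([2, 3, 5]), but a restore from Wednesday's cumulative backup needs only that one incremental.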

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way; I am not able to find anything I am missing. If there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
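As a rough sanity check, a sketch like the following can estimate whether those /etc/system values accommodate a given number of Oracle processes. The one-semaphore-per-process rule and the set-size arithmetic are simplifying assumptions for illustration, not the exact kernel accounting:

```python
import math

def semaphore_sets_needed(processes, semmsl):
    """Assume one semaphore per Oracle process, allocated in sets of
    at most semmsl semaphores each (simplifying assumption)."""
    return math.ceil(processes / semmsl)

def config_sufficient(processes, semmni, semmns, semmsl):
    """True if the total semaphores (semmns) and number of sets
    (semmni) can cover the requested process count."""
    return (processes <= semmns and
            semaphore_sets_needed(processes, semmsl) <= semmni)

# With the /etc/system values above and a 200-process instance:
print(config_sufficient(200, semmni=100, semmns=1024, semmsl=256))
```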

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace(FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes/1024/1024 allocated_mb,
       ((df.bytes/1024/1024) - NVL(SUM(dfs.bytes)/1024/1024, 0)) used_mb,
       NVL(SUM(dfs.bytes)/1024/1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;
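The per-file arithmetic in that query (allocated, used = allocated minus free, and free, all in MB) can be sketched as follows; the byte counts are made up for illustration:

```python
def file_space_mb(allocated_bytes, free_bytes_chunks):
    """Mirror the query's arithmetic for one datafile: the free-space
    chunks play the role of that file's dba_free_space rows."""
    mb = 1024 * 1024
    free = sum(free_bytes_chunks)
    return (allocated_bytes / mb,
            (allocated_bytes - free) / mb,
            free / mb)

# A 100M datafile with two free extents of 10M and 5M:
print(file_space_mb(100 * 1024 * 1024, [10 * 1024 * 1024, 5 * 1024 * 1024]))
```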

purge table name-of-table

purge index name-of-table

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespaces user name-of-user

date; dd if=<input file> of=<output file>; date

isainfo -v   (shows whether the OS is 32-bit or 64-bit)

1023720911

isql -Udba -Scso_otpwSQL

Script to start and stop the database: /sybdata1/syb126/IQ/cso_ot

Recover database
Alter database open

1023720469

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name)
FROM dba_tablespaces;

Here is the query to get the session details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>'
and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
  MAXLOGFILES 16
  MAXLOGMEMBERS 2
  MAXDATAFILES 30
  MAXINSTANCES 1
  MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in Oracle Home -> rdbms -> admin: PROFLOAD.SQL and PROFTAB.SQL.

102375164Softwares

My problem: when I don't use tnsnames and want to use the ipc protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file "oracle" under $ORACLE_HOME/bin has permissions 6751. If not:
1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using ls -l oracle; they should be -rwsr-s--x
6. Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables

Now wait about 10 Minutes to complete

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1, Server 'ddm', Line 1: Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go
Msg 102, Level 15, State 1, Server 'ddm', Line 1: Incorrect syntax near
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          36          60           60          number     dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit       Type
 ------------------------------ ----------- ----------- ------------ ----------- ---------- ----------
 number of devices              10          44          70           70          number     dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This Query is used to find out the object name and lock id

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is

specified, into a directory that holds all users' crontab files (see

cron(1M)). The -r option removes a user's crontab from the crontab

directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file

/usr/lib/cron/cron.allow. If that file does not exist, the file

/usr/lib/cron/cron.deny is checked to determine if the user should be

denied access to crontab. If neither file exists, only root is

allowed to submit a job. If only cron.deny exists and is empty,

global usage is permitted. The allow/deny files consist of one user

name per line.

A crontab file consists of lines of six fields each The fields are

separated by spaces or tabs The first five are integer patterns that

specify the following

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
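The five time fields above can be matched with a small sketch like this (a simplified matcher for illustration; it handles `*`, numbers, comma lists and N-M ranges, but ignores step values and month/day names):

```python
def field_matches(field, value):
    """Match one crontab field: '*', a number, a comma list, or an N-M range."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(entry, minute, hour, dom, month, dow):
    """True if the entry's first five fields match the given time."""
    fields = entry.split()[:5]
    return all(field_matches(f, v)
               for f, v in zip(fields, (minute, hour, dom, month, dow)))

# "30 2 * * 0" = 02:30 every Sunday (day of week 0)
print(cron_matches("30 2 * * 0 /usr/local/bin/backup.sh", 30, 2, 15, 6, 0))
```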

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query While this approach makes SQL run fast function-based Oracle indexes make it possible to over-allocate indexes on table columns This over-allocation of indexes can cripple the performance of loads on critical Oracle tables

Until Oracle9i there was no way to identify those indexes that were not being used by SQL queries This tip describes the Oracle9i method that allows the DBA to locate and delete un-used indexes

The approach is quite simple Oracle9i has a tool that allows you to monitor index usage with an alter index command You can then query and find those indexes that are unused and drop them from the database

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor
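The spooled script simply generates one ALTER INDEX statement per non-dictionary index; the same generation logic, sketched in Python over a hypothetical index list:

```python
def monitoring_statements(indexes, excluded=("SYS", "SYSTEM", "PERFSTAT")):
    """Build 'alter index ... monitoring usage;' commands, skipping the
    dictionary and statspack schemas just as the spooled script does."""
    return [f"alter index {owner}.{name} monitoring usage;"
            for owner, name in indexes if owner not in excluded]

stmts = monitoring_statements([("SCOTT", "EMP_IDX"), ("SYS", "I_OBJ1")])
print(stmts)
```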

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations, CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes
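The strict-superset rule can be illustrated with a toy check over character sets modeled as char-to-codepoint maps. Here US7ASCII and WE8ISO8859P1 are approximated as plain ASCII and Latin-1 purely for illustration:

```python
def is_strict_superset(source, target):
    """True if every codepoint in `source` exists in `target` with the
    same codepoint value (the condition for ALTER DATABASE CHARACTER SET)."""
    return all(ch in target and target[ch] == cp for ch, cp in source.items())

us7ascii = {chr(c): c for c in range(128)}       # 7-bit ASCII
we8iso8859p1 = {chr(c): c for c in range(256)}   # Latin-1 extends ASCII

print(is_strict_superset(us7ascii, we8iso8859p1))
```

Going the other way (Latin-1 to ASCII) fails the check, which is exactly the case where a full export/import is required.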

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_national_character_set;

The database name is optional The character set name should be specified without quotes for example

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set perform the following steps Not all of them are absolutely necessary but they are highly recommended

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL

SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET You can issue both commands together if desired

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23 If you want to know about database size just calculate

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
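That formula is just a sum over the three file groups; as a trivial sketch with made-up file sizes:

```python
def database_size_bytes(datafiles, controlfiles, redologs):
    """Database size = datafile sizes + control file sizes + redo log
    file sizes (each argument is a list of file sizes in bytes)."""
    return sum(datafiles) + sum(controlfiles) + sum(redologs)

# Hypothetical files: two datafiles, one controlfile, two redo logs.
size = database_size_bytes(
    datafiles=[100 * 2**20, 200 * 2**20],
    controlfiles=[10 * 2**20],
    redologs=[50 * 2**20, 50 * 2**20],
)
print(size / 2**20, "MB")
```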

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)
OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped.
• Standardized naming of database files.
• Increased portability, since file specifications are not needed.
• Simplified creation of test systems on differing operating systems.
• No unused files wasting disk space.

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp
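A small sketch of how these naming patterns compose (the unique code here is a made-up placeholder; real OMF names are generated by Oracle itself):

```python
def omf_name(file_type, unique="a1b2c3d4", group=None, tablespace=None):
    """Build names following the OMF patterns above: %u = unique code,
    %g = log group number, %t = tablespace name."""
    patterns = {
        "controlfile": f"ora_{unique}.ctl",
        "redolog": f"ora_{group}_{unique}.log",
        "datafile": f"ora_{tablespace}_{unique}.dbf",
        "tempfile": f"ora_{tablespace}_{unique}.tmp",
    }
    return patterns[file_type]

print(omf_name("datafile", tablespace="tsh1"))
```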

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF
During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF
When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group, in the specified locations, when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF
As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006. Comments (3) | Trackbacks (0)

Oracle has done it again Venture with me down what seems like a small option but in fact has major implications on what we as DBAs no longer have to manage

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite awhile that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.
What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks, using bitmaps, for all objects defined in the tablespace for which it has been defined.
How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.
FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need, so that the multiple processes can access their own freelist.
PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.
FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.
Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries. No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out-of-the-box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• Better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist or off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple. Include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.
Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.
Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.
Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server you must:

- Set audit_trail = true in the init.ora file.
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username.
- Terminal: machine that the user performed the action from.
- Timestamp: when the action occurred.
- Object Owner: the owner of the object that was interacted with.
- Object Name: the name of the object that was interacted with.
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).
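A quick way to pull just those columns is a query against DBA_AUDIT_TRAIL; a sketch, with the audited username hypothetical:

```sql
SELECT username, terminal, timestamp, owner, obj_name, action_name
  FROM dba_audit_trail
 WHERE username = 'FIREID'
 ORDER BY timestamp;
```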

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
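A minimal archive-then-purge sketch, run as a DBA; the archive table name is hypothetical, and in production you would typically copy and delete only rows older than your retention window:

```sql
-- Copy the current trail aside, then empty the live table
CREATE TABLE aud_archive AS SELECT * FROM sys.aud$;
DELETE FROM sys.aud$;
COMMIT;
```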

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used by the prior version of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it as the APPS user.

Access Privileges

To install, it requires connection as a user with the SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL.
2. Unzipped TRCA.zip into the INSTALL directory.
3. Created a directory under $ORACLE_HOME named TraceAnalyzer.
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory.
5. Logged onto Oracle as SYS:

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM.
8. Ran the installation script TRCACREA.sql.

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly, as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

C:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

C:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which works, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
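Before changing either parameter, it is worth confirming what the instance is currently running with; a simple check:

```sql
-- Current DBWR-related settings for this instance
select name, value
from v$parameter
where name in ('db_writer_processes', 'dbwr_io_slaves');
```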

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

It is best to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1 Type: PROBLEM

Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE


Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope


This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms


- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes


- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll

-> Right click on the file

-> Select Properties

-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause


This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix


It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References


http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007.


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
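A short sketch of the ALTER TABLESPACE OFFLINE options described above in use; the tablespace name is hypothetical:

```sql
ALTER TABLESPACE users_ts OFFLINE NORMAL;     -- clean checkpoint, no recovery needed
ALTER TABLESPACE users_ts ONLINE;

ALTER TABLESPACE users_ts OFFLINE IMMEDIATE;  -- media recovery required before ONLINE
RECOVER TABLESPACE users_ts;
ALTER TABLESPACE users_ts ONLINE;
```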

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1 (bash)
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4016 | Registered: 5/27/99

Re: no of open cursor (posted Aug 26, 2007 10:33 PM in response to 174313)


> How to resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors and using ref cursors, but never closing them.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing you to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement for which the app creates so many handles, and then trace and fix the problem in the application.
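That follow-up lookup can be sketched as below; the SID value is an example only:

```sql
-- Pull the full statement text for one suspect session (sid is hypothetical)
select t.piece, t.sql_text
from v$sqltext t, v$session s
where s.sid = 123
and t.address = s.sql_address
and t.hash_value = s.sql_hash_value
order by t.piece;
```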

Nagaraj, for performance tuning you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot


If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

rebuild the indexes

and gather statistics for those objects
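The index rebuild step can be generated the same way; a sketch, with the source and target tablespace names matching the hypothetical ones above:

```sql
-- Generate ALTER INDEX ... REBUILD statements for indexes still in the old tablespace
select 'alter index ' || owner || '.' || index_name || ' rebuild tablespace xyz;'
from dba_indexes
where tablespace_name = 'RAKESH';
```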

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);
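Putting the steps together, a minimal sketch; the OS user and the SID/SERIAL# values are examples only:

```sql
-- 1. Find the target session
SELECT sid, serial# FROM v$session WHERE osuser = 'appuser';

-- 2. Trace it (suppose the query returned sid=42, serial#=113)
EXECUTE dbms_system.set_sql_trace_in_session(42, 113, TRUE);

-- 3. ...reproduce the problem, then stop tracing
EXECUTE dbms_system.set_sql_trace_in_session(42, 113, FALSE);
```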

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users entry is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
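A hedged sketch of such a logon trigger; the account name and sort size are examples only:

```sql
-- Switch one import account to manual workarea management at logon
CREATE OR REPLACE TRIGGER import_pga_trg
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMP_USER' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';  -- 100 MB
  END IF;
END;
/
```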

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000K. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                       VALUE              UNIT
----------------------------------------   ------------------ ------------
aggregate PGA auto target                           829440000 bytes
aggregate PGA target parameter                     2516582400 bytes
bytes processed                                    2492928000 bytes
cache hit percentage                                    86.31 percent
extra bytes read/written                            395366400 bytes
global memory bound                                 125747200 bytes
maximum PGA allocated                              2666188800 bytes
maximum PGA used for auto workareas                  17203200 bytes
maximum PGA used for manual workareas                52531200 bytes
over allocation count                                       0
PGA memory freed back to OS                         675020800 bytes
total freeable PGA memory                             6553600 bytes
total PGA allocated                                2395750400 bytes
total PGA inuse                                    1528320000 bytes
total PGA used for auto workareas                           0 bytes
total PGA used for manual workareas                         0 bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED        ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST     FACTOR      ADV BYTES PROCESSED  BYTES R/W       CACHE HIT %   ALLOC COUNT
----------- ---------- ---- ---------------- --------------- ------------- --------------
   12582912        0.5 ON           17250304               0           100              3
   18874368       0.75 ON           17250304               0           100              3
   25165824        1.0 ON           17250304               0           100              0
   30198784        1.2 ON           17250304               0           100              0
   35231744        1.4 ON           17250304               0           100              0
   40264704        1.6 ON           17250304               0           100              0
   45297664        1.8 ON           17250304               0           100              0
   50331648        2.0 ON           17250304               0           100              0
   75497472        3.0 ON           17250304               0           100              0
  100663296        4.0 ON           17250304               0           100              0
  150994944        6.0 ON           17250304               0           100              0
  201326592        8.0 ON           17250304               0           100              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18 MB PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25 MB PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
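Based on such advisory data, the target can then be raised online; both parameters are dynamic (a sketch, using the 25 MB figure from the advisory example above):

```sql
-- Sketch: apply the advised target; workarea_size_policy defaults to AUTO
-- when pga_aggregate_target is set, but it can be stated explicitly.
ALTER SYSTEM SET pga_aggregate_target = 25M;
ALTER SYSTEM SET workarea_size_policy = AUTO;
```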

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to identify the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   -- checks whether the audit trail is turned on

If the output is:

NAME          TYPE        VALUE
------------- ----------- ------
audit_trail   string      DB

then go to step 3, else:
(a) shutdown immediate   -- to enable the audit trail
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;   -- removes any audit trail data residing in the table
   SQL> audit table;      -- starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';

This query gives you the username along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000 MB
iq_system_main    2000 MB
iq_system_main2   1000 MB
iq_system_main3   5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go


Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 backup is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks changed since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
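The distinction can be sketched with RMAN commands (a minimal illustration; when neither keyword is given, a level 1 backup is differential by default):

```sql
-- Sketch of the two incremental styles discussed above.
BACKUP INCREMENTAL LEVEL 0 DATABASE;              -- base backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;              -- differential: changes since the
                                                  -- most recent level 1 or level 0
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   -- cumulative: changes since the
                                                  -- most recent level 0
```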

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if you do, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect a different error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
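Before raising these limits, it can help to see how many semaphore sets are currently allocated; the ipcs utility shows this (a sketch; exact flags and output format vary by platform):

```
# Sketch: list current semaphore allocation (Solaris/HP-UX style)
ipcs -s        # one line per allocated semaphore set, with owner and key
```

A reboot is required on Solaris for /etc/system changes to take effect.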

• A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
• A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.


SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get session details based on a Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '<unix_pid>' and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 112
LOGFILE
    GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
    GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
    '/gmac/GMACDEV/data/system.dbf',
    '/gmac/GMACDEV/data/undo.dbf',
    '/gmac/GMACDEV/data/user.dbf',
    '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace(TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL.


My problem: when I don't use tnsnames and want to use the IPC protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the oracle executable in $ORACLE_HOME/bin has permissions 6751. If not:
1. Log in as the oracle user.
2. Shut down (normal) the database.
3. Go to $ORACLE_HOME/bin.
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using ls -l oracle; they should be -rwsr-s--x.
Start up the database and try connecting as a DBA or non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 Minutes to complete

oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1: Server 'ddm', Line 1: Incorrect syntax near ','.

1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)

1> sp_configure "number of devices"
2> go
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------- -------- ------------ ------------- ---------- ------- --------
 number of devices   10       36           60            60         number  dynamic

(1 row affected) (return status = 0)

1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------- -------- ------------ ------------- ---------- ------- --------
 number of devices   10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the object name and lock ID:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
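For example, the five time fields combine with a command like this (the script paths are hypothetical):

```
# minute  hour  day-of-month  month  day-of-week   command
0 2 * * 0     /u01/scripts/full_backup.sh    # 02:00 every Sunday
30 23 1 * *   /u01/scripts/month_end.sh      # 23:30 on the 1st of each month
```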

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off
spool run_monitor.sql
select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');
spool off
@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used from v$object_usage;

Here we see that v$object_usage has a column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME      MON USED
----------------------- --------------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER        YES NO
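Once an index has been confirmed as unused, monitoring can be switched off and the index dropped (a sketch; the index name is the one from the sample output above, and dropping should only follow a representative monitoring period):

```sql
-- Stop monitoring, then remove the unused index.
ALTER INDEX customer_last_name_idx NOMONITORING USAGE;
DROP INDEX customer_last_name_idx;
```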

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

SYSOPER privileges:

Perform STARTUP and SHUTDOWN operations
CREATE SPFILE
ALTER DATABASE OPEN/MOUNT/BACKUP
ALTER DATABASE ARCHIVELOG
ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file.
2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>')
AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>')
AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)
OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF
During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF
When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF
As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option but in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, access to the data dictionary is relieved. Not only does this generate less redo, contention is reduced as well. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries; no wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for auto segment space management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segmentsCheck What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS which allows you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either the block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high-water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server you must:

- Set audit_trail = true in the init.ora file.
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL and DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username.
- Terminal: machine that the user performed the action from.
- Timestamp: when the action occurred.
- Object Owner: the owner of the object that was interacted with.
- Object Name: the name of the object that was interacted with.
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table from growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
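Once the options above are in place, the trail can be reviewed with a query along these lines (a sketch; column names are from the DBA_AUDIT_TRAIL view described earlier):

```sql
-- Sketch: recent audited actions for the fireid user
SELECT username,
       terminal,
       timestamp,
       owner,
       obj_name,
       action_name
  FROM dba_audit_trail
 WHERE username = 'FIREID'
 ORDER BY timestamp DESC;
```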

sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created / Version: 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and it can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename in the udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL.
2. Unzipped TRCA.zip into the INSTALL directory.
3. Created a directory under $ORACLE_HOME named TraceAnalyzer.
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory.
5. Logged onto Oracle as SYS:

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM.
8. Ran the installation script TRCACREA.sql.

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select s.sid, n.name, s.value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and n.name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O-bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
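The arithmetic behind that query: each hex character of the raw address in v$process.addr encodes 4 bits, so the string length times 4 gives the word size. A small illustrative sketch (Python stands in here purely to show the arithmetic; it is not part of the original note):

```python
# Each hex digit of v$process.addr represents 4 bits, so an
# 8-character address implies 32-bit, a 16-character one 64-bit.
def word_length(addr_hex: str) -> str:
    """Mimics: SELECT LENGTH(addr)*4 || '-bits' FROM v$process."""
    return f"{len(addr_hex) * 4}-bits"

print(word_length("12ABCDEF"))          # 8 hex chars
print(word_length("0000000012ABCDEF"))  # 16 hex chars
```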

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive, and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   #vvv Oracle Note 269980.1 vvvvvvv
   #KeepAlive On
   KeepAlive Off
   #^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

offline NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start an instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Posts: 4,016, Registered: 5/27/99

Re: no. of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
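The leak pattern described above is language-independent. A minimal sketch (Python with sqlite3 standing in for an Oracle client; the function names are hypothetical, purely for illustration) contrasts leaking a cursor handle with closing it deterministically:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")

def leaky_fetch(conn):
    # Opens a new cursor handle and never closes it --
    # repeated calls accumulate open cursors until the limit is hit.
    cur = conn.cursor()
    cur.execute("SELECT x FROM t")
    return cur.fetchone()[0]

def safe_fetch(conn):
    # Closes the cursor in a finally block, so the handle is
    # released even if the fetch raises an exception.
    cur = conn.cursor()
    try:
        cur.execute("SELECT x FROM t")
        return cur.fetchone()[0]
    finally:
        cur.close()

print(safe_fetch(conn))
```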

Nagaraj, for performance tuning you may first start by checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
V$SESSION_WAIT & V$SYSTEM_EVENT

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file that appears just like a file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only an additional name for the original file, not a copy of the file; the file's data is removed only when the last remaining link to it is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
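A quick shell session in a throwaway temp directory confirms the hard-link behavior: removing one name does not remove the data while another link to it remains.

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"             # work in a scratch directory
echo "hello" > original.txt
ln original.txt hardlink.txt  # second name for the same inode
rm original.txt               # removes one name, not the data
cat hardlink.txt              # still prints the file contents
```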

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most Unix filesystems do not actually allow hard links to directories, so this command may fail; a symbolic link is the usual alternative.)

If you want to move all the objects to another tablespace, do the following:

> spool <yourpath>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<yourpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.


How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA uses the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistics maximum PGA used for auto workareas and maximum PGA used for manual workareas display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
   FOR EST  FACTOR ADV      BYTES PROCESSED  BYTES RW       CACHE HIT      ALLOC COUNT
----------  ---------- ---  ---------------  -------------  -------------  --------------
  12582912         .50 ON          17250304              0         100.00               3
  18874368         .75 ON          17250304              0         100.00               3
  25165824        1.00 ON          17250304              0         100.00               0
  30198784        1.20 ON          17250304              0         100.00               0
  35231744        1.40 ON          17250304              0         100.00               0
  40264704        1.60 ON          17250304              0         100.00               0
  45297664        1.80 ON          17250304              0         100.00               0
  50331648        2.00 ON          17250304              0         100.00               0
  75497472        3.00 ON          17250304              0         100.00               0
 100663296        4.00 ON          17250304              0         100.00               0
 150994944        6.00 ON          17250304              0         100.00               0
 201326592        8.00 ON          17250304              0         100.00               0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.
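The rule of thumb just described (pick the smallest target whose estimated over-allocation count is zero) can be sketched outside the database. The rows below are a hand-copied sample in the shape of v$pga_target_advice output, not a live query:

```shell
# Pick the smallest pga_aggregate_target whose estimated
# over-allocation count is zero. Columns: target bytes, over-alloc count.
cat > /tmp/pga_advice.txt <<'EOF'
12582912 3
18874368 3
25165824 0
30198784 0
EOF
best=$(awk '$2 == 0 { print $1; exit }' /tmp/pga_advice.txt)
echo "smallest safe pga_aggregate_target: $best bytes"   # 25165824
```

With the sample rows above, the first zero-over-allocation target is 25165824 bytes (the 25M setting the text recommends).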

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select
   max(pga_used_mem)  max_pga_used_mem,
   max(pga_alloc_mem) max_pga_alloc_mem,
   max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
   sum(pga_used_mem)  sum_pga_used_mem,
   sum(pga_alloc_mem) sum_pga_alloc_mem,
   sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the total PGA usage summed across all processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   -- checks whether the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3. Otherwise, to enable the audit trail:
(a) shutdown immediate
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;   -- removes any audit trail data residing in the table
   SQL> audit table;           -- starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%';

This query gives you the username, along with the userhost from which that user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go


Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.
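The "back up only what changed since a reference point" idea can be sketched at the file level. RMAN actually works at the database block level, and the file and directory names below are made up for illustration:

```shell
# File-level analogy for an incremental backup: copy only files
# modified after a reference timestamp.
workdir=$(mktemp -d)
cd "$workdir"
mkdir data full_backup incr_backup
echo "old row" > data/old.txt
cp data/old.txt full_backup/          # the "level 0" backup: everything
touch marker                          # reference point = time of that backup
sleep 1                               # make the next change clearly newer
echo "new row" > data/new.txt         # modified after the backup
# The "incremental": only files newer than the reference point.
find data -type f -newer marker -exec cp {} incr_backup/ \;
ls incr_backup                        # only new.txt is picked up
```

Moving the reference point forward after every backup gives the differential-incremental behaviour described above; keeping it pinned at the level 0 backup gives the cumulative behaviour.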

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing;

if you do, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> No space left on device sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
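The /etc/system lines above are Solaris syntax. On Linux the corresponding kernel semaphore limits (semmsl, semmns, semopm, semmni) can be read at runtime; this is a read-only check, not Oracle-specific:

```shell
# Inspect the current Linux semaphore limits. The four values are
# semmsl (per-set max), semmns (system-wide max), semopm (ops per
# semop call) and semmni (max number of sets).
read semmsl semmns semopm semmni < /proc/sys/kernel/sem
echo "semmsl=$semmsl semmns=$semmns semopm=$semopm semmni=$semmni"
```

If semmni or semmns is too low for the instance's process count, ORA-27300 with semget status 28 (ENOSPC) is the typical symptom.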

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the data remains reachable through the hard link until all links to it are removed.
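The distinction can be demonstrated in a scratch directory: a hard link shares the original file's inode and data, while a symbolic link is only a pointer to a name:

```shell
# Hard link vs symbolic link, demonstrated on throwaway files.
d=$(mktemp -d)
cd "$d"
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: another name for the same inode
ln -s original.txt soft.txt     # symbolic link: a pointer to the name
ls -li                          # hard.txt shows the same inode number as original.txt
rm original.txt
cat hard.txt                    # data still reachable via the hard link
cat soft.txt 2>/dev/null || echo "dangling symlink"   # the pointer is now broken
```

After the original is removed, the hard link still prints "hello", while the symbolic link dangles.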

ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
Make sure the oracle executable in $ORACLE_HOME/bin has permissions 6751. If not:
1. Log in as the oracle user.
2. Shut down (normal) the database.
3. Go to $ORACLE_HOME/bin.
4. Execute the following: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle
   They should be: -rwsr-s--x
Start up the database and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on the oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid option
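What mode 6751 means: the leading 6 sets the setuid and setgid bits, and 751 is rwx/r-x/--x, so a long listing shows -rwsr-s--x. This can be checked on a scratch file (GNU stat assumed; the real target is of course the oracle executable):

```shell
# Apply mode 6751 to a throwaway file and read back the symbolic
# permission string: setuid+setgid turn the owner/group "x" into "s".
f=$(mktemp)
chmod 6751 "$f"
perms=$(stat -c %A "$f")
echo "$perms"    # -rwsr-s--x
```

The setuid bit is what lets non-oracle OS users connect: the oracle binary runs with the oracle account's privileges regardless of who invokes it.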

date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
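A timestamp format of this kind can be run directly; this variant (an assumption about the intended layout, with %n producing the line break) prints a two-line stamp:

```shell
# Prints e.g.:
# DATE: 04/18/07
# TIME: 12:28:05
date '+DATE: %m/%d/%y%nTIME: %H:%M:%S'
```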

Now start the Oracle EM dbconsole build script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\bin\emca.bat for Windows):

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured:

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name: akira
Listener port number: 1521
Database SID: AKI1
Service name: AKI1
Email address for notification: martin.zahn@akadia.com
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for this to complete.

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

To deconfigure Database Control and drop the repository:

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------  -------  -----------  ------------  ---------  ------  -------
 number of devices   10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------  -------  -----------  ------------  ---------  ------  -------
 number of devices   10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = "gem_hist_data7",
physname = "/data/syb125/gem_hist/gem_hist_data7.dat",
size = "1600M"
go

This query is used to find the object name and lock ID:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe.
2. Shut down the database.
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find the locked object and the SQL being run:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is
specified, into a directory that holds all users' crontab files (see
cron(1M)). The -r option removes a user's crontab from the crontab
directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file
/usr/lib/cron/cron.allow. If that file does not exist, the file
/usr/lib/cron/cron.deny is checked to determine if the user should be
denied access to crontab. If neither file exists, only root is
allowed to submit a job. If only cron.deny exists and is empty,
global usage is permitted. The allow/deny files consist of one user
name per line.

A crontab file consists of lines of six fields each. The fields are
separated by spaces or tabs. The first five are integer patterns that
specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
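The five time fields above can be read off any crontab entry positionally; a small sketch with a made-up command path:

```shell
# Split the five time fields of a sample crontab line.
# Fields: minute, hour, day-of-month, month, day-of-week.
line='30 2 * * 0 /usr/local/bin/backup.sh'
minute=$(echo "$line" | awk '{print $1}')
hour=$(echo "$line" | awk '{print $2}')
dow=$(echo "$line" | awk '{print $5}')
echo "runs at $hour:$minute on weekday $dow (0 = Sunday)"
```

So this entry runs at 02:30 every Sunday; the `*` fields mean "any day of the month, any month".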

select s.machine from v$process p, v$session s where s.paddr = p.paddr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off
spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor
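The same generate-then-run pattern can be sketched from the shell: build the ALTER INDEX statements from a list of owner/index pairs. The names below are hypothetical, for illustration only:

```shell
# Generate "alter index ... monitoring usage;" statements from a
# two-column list of owner and index name.
cat > /tmp/indexes.txt <<'EOF'
SCOTT EMP_PK
SCOTT DEPT_PK
HR LOC_IDX
EOF
awk '{ printf "alter index %s.%s monitoring usage;\n", $1, $2 }' \
    /tmp/indexes.txt > /tmp/run_monitor.sql
cat /tmp/run_monitor.sql
```

The resulting run_monitor.sql could then be fed to SQL*Plus, just as the spooled script above is.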

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO


sysoper privileges:

• Perform STARTUP and SHUTDOWN operations
• CREATE SPFILE
• ALTER DATABASE OPEN/MOUNT/BACKUP
• ALTER DATABASE ARCHIVELOG
• ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
• Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional The character set name should be specified without quotes for example

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool1/dbbackups
bash-3.00# zfs set quota=10G datapool1/dbbackups

/var/spool/cron/crontabs

1. Touch the user's file.
2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');
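The arithmetic behind the BLOCKS*2048/1024 expression: blocks times the block size in bytes, divided by 1024, gives kilobytes. The 2048 assumes a 2K db_block_size; substitute your own block size. A quick check with made-up numbers:

```shell
# blocks * block_size_bytes / 1024 = size in Kb
blocks=1500
block_size=2048
kb=$(( blocks * block_size / 1024 ))
echo "${kb} Kb"    # 3000 Kb
```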

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped.
• Standardized naming of database files.
• Increased portability, since file specifications are not needed.
• Simplified creation of test systems on differing operating systems.
• No unused files wasting disk space.

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='c:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format

Controlfiles         ora_%u.ctl

Redo Log Files       ora_%g_%u.log

Datafiles            ora_%t_%u.dbf

Temporary Datafiles  ora_%t_%u.tmp
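The naming patterns above can be illustrated with made-up values for %u (unique code), %g (log group number) and %t (tablespace name):

```shell
# Sample OMF-style names built from hypothetical substitution values.
u=01abcdef
g=2
t=tsh_data
ctl="ora_${u}.ctl"        # controlfile
log="ora_${g}_${u}.log"   # redo log member
dbf="ora_${t}_${u}.dbf"   # datafile
echo "$ctl $log $dbf"
```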

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group, in the specified locations, when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the files and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a file of a specific size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this generate less redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need, so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the used percentage of a block falls below PCTUSED, that block should be placed back on the freelist, making it available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED, to keep blocks off the freelist) against a high PCTUSED, to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries.
• No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out-of-the-box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for, with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• Better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist / off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple Include the statement at the end of the CREATE TABLESPACE statement Here is an example

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either the block contents or a specified value.
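As a rough sketch of what a call might look like (SCOTT.EMP is a placeholder segment, and this should only ever be run on a test system, since DBMS_REPAIR changes bitmap entries):

```sql
-- Recalculate the ASSM bitmap entries for a hypothetical table
-- based on its block contents; owner and name are illustrative.
BEGIN
  DBMS_REPAIR.SEGMENT_FIX_STATUS(
    segment_owner => 'SCOTT',
    segment_name  => 'EMP');
END;
/
```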

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL and DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
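These fields map onto columns of DBA_AUDIT_TRAIL, so a query along these lines pulls the items of interest (FIREID is just the example user audited above):

```sql
SELECT username, terminal, timestamp, owner, obj_name, action_name
FROM   dba_audit_trail
WHERE  username = 'FIREID'
ORDER  BY timestamp;
```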

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to those users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba'  (HP-UX / AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12), and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent
Date Created: version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL Trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

C:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

C:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K    (AIX: reports whether the 32-bit or 64-bit kernel is active)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation; slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM
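To see which job a busy SNP process is actually executing, as suggested above, DBA_JOBS_RUNNING can be joined back to DBA_JOBS (a sketch):

```sql
-- sid identifies the session; what holds the job's PL/SQL text
SELECT r.sid, r.job, j.log_user, j.what
FROM   dba_jobs_running r, dba_jobs j
WHERE  r.job = j.job;
```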

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second; e.g., a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;
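Rather than typing each statement, a list like the one above can be generated straight from the data dictionary (file names will of course differ per database):

```sql
-- Emit one 'autoextend off' statement per autoextensible datafile
SELECT 'alter database datafile ''' || file_name || ''' autoextend off;'
FROM   dba_data_files
WHERE  autoextensible = 'YES';
```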

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive, and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
   for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

  Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance stop/start:

1. Log in as the db2 user:      su - db2inst1
2. Go to the sqllib directory:  cd sqllib
3. Stop the instance:           $ db2stop
4. Start the instance (as the instance owner on the host running db2): $ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
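To see how close sessions actually get to OPEN_CURSORS, the 'opened cursors current' statistic can be checked per session (a sketch):

```sql
-- Sessions with the most currently open cursors, highest first
SELECT s.sid, st.value AS opened_cursors_current
FROM   v$sesstat st, v$statname sn, v$session s
WHERE  sn.name       = 'opened cursors current'
AND    st.statistic# = sn.statistic#
AND    st.sid        = s.sid
ORDER  BY st.value DESC;
```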

Werner

Billy Verreynne, Re: no of open cursor, posted Aug 26, 2007:

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors

I.e., application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application
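Once a leaking session has been found, the full statement text can be pulled from V$SQLTEXT using the address and hash value reported by the previous query (a sketch; the two substitution variables are placeholders to fill in):

```sql
SELECT sql_text
FROM   v$sqltext
WHERE  address    = '&address'
AND    hash_value = &hash_value
ORDER  BY piece;
```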

Nagaraj for performance tuning

You may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
V$SYSTEM_EVENT and V$SESSION_WAIT

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX, and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). This contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX, and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). This contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX, and %ORACLE_HOME%\network\admin on Windows NT.
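For illustration, a minimal sqlnet.ora profile might look like this (the parameter values are examples only, not recommendations):

```
# sqlnet.ora -- Net8 client/server profile (example values)
NAMES.DIRECTORY_PATH = (TNSNAMES, HOSTNAME)
SQLNET.EXPIRE_TIME   = 10    # dead-connection probe interval, in minutes
TRACE_LEVEL_CLIENT   = OFF
```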

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most filesystems refuse hard links to directories, so expect this to fail unless your platform specifically supports it.)

To move all the objects of a tablespace to another tablespace, generate the DDL into a spool file:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log; run it:

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.


How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session (<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.
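Such a trigger might look like the following sketch. The schema name IMP_USER and the 100MB sort area are made-up values for illustration:

```sql
-- Hypothetical example: force manual PGA management for one account at logon.
CREATE OR REPLACE TRIGGER set_manual_workarea
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMP_USER' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';  -- 100MB
  END IF;
END;
/
```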

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                               VALUE UNIT
---------------------------------------- --------------- ------------
aggregate PGA auto target                      829440000 bytes
aggregate PGA target parameter                2516582400 bytes
bytes processed                               2492928000 bytes
cache hit percentage                               86.31 percent
extra bytes read/written                       395366400 bytes
global memory bound                            125747200 bytes
maximum PGA allocated                         2666188800 bytes
maximum PGA used for auto workareas             17203200 bytes
maximum PGA used for manual workareas           52531200 bytes
over allocation count                                  0
PGA memory freed back to OS                    675020800 bytes
total freeable PGA memory                        6553600 bytes
total PGA allocated                           2395750400 bytes
total PGA inuse                               1528320000 bytes
total PGA used for auto workareas                      0 bytes
total PGA used for manual workareas                    0 bytes

16 rows selected.

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET          ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
   FOR EST     FACTOR ADV  BYTES PROCESSED  BYTES RW     CACHE HIT    ALLOC COUNT
---------- ---------- --- ---------------- --------- ------------- --------------
  12582912        0.5 ON          17250304         0           100              3
  18874368       0.75 ON          17250304         0           100              3
  25165824        1.0 ON          17250304         0           100              0
  30198784        1.2 ON          17250304         0           100              0
  35231744        1.4 ON          17250304         0           100              0
  40264704        1.6 ON          17250304         0           100              0
  45297664        1.8 ON          17250304         0           100              0
  50331648        2.0 ON          17250304         0           100              0
  75497472        3.0 ON          17250304         0           100              0
 100663296        4.0 ON          17250304         0           100              0
 150994944        6.0 ON          17250304         0           100              0
 201326592        8.0 ON          17250304         0           100              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   -- checks if the audit trail is turned on

If the output is:

NAME          TYPE     VALUE
------------- -------- ------
audit_trail   string   DB

then go to step 3. Else:
(a) shutdown immediate   -- to enable the audit trail
(b) edit init.ora in $ORACLE_HOME/admin/<SID>/pfile to add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;   -- removes any audit trail data residing in the table
   SQL> audit table;   -- starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';
   -- this query gives you the username, along with the userhost from which that user was connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
IQ PATH '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' IQ SIZE 2000
MESSAGE PATH '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
TEMPORARY PATH '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' TEMPORARY SIZE 1000
IQ PAGE SIZE 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you again back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

So if you do an incremental backup on Tuesday, you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
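The two flavors map directly onto RMAN syntax; a minimal sketch, run from an RMAN prompt connected to the target database:

```sql
-- Level 0: the base backup that all incrementals build on
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Differential level 1 (the default): blocks changed since the most
-- recent level 1 or level 0 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative level 1: blocks changed since the most recent level 0
-- backup, regardless of any intervening level 1 backups
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```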

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you
> want to create the database is full. Another point could be insufficient swap
> space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
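The difference is easy to demonstrate in a scratch directory (the file names here are arbitrary):

```shell
cd "$(mktemp -d)"                    # work somewhere disposable
echo "hello" > original.txt
ln original.txt hard.txt             # hard link: a second name for the same inode
ln -s original.txt soft.txt          # symbolic link: a pointer to the path name
rm original.txt                      # delete the original name
cat hard.txt                         # prints "hello": the data survives via the hard link
[ -e soft.txt ] || echo "dangling"   # prints "dangling": the symlink target is gone
```

An ls -l in the same directory shows soft.txt with an arrow pointing at the now-missing original.txt, while hard.txt looks like an ordinary file.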

Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings:

Database ORACLE_HOME ............. /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME ... /opt/oracle/product/10.1.0

Database host name ............... akira
Listener port number ............. 1521
Database SID ..................... AKI1
Service name ..................... AKI1
Email address for notification ... martin.zahn@akadia.com
Email gateway for notification ... mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file config/repository.variables ...

Now wait about 10 minutes for it to complete.

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the Database Control:

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ','.
1> select name from sysconfigures where name like "%device%"
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure "number of devices"
2> go
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------- -------- ------------ ------------- ---------- ------- --------
 number of devices   10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)
1> sp_configure "number of devices", 70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name      Default  Memory Used  Config Value  Run Value  Unit    Type
 ------------------- -------- ------------ ------------- ---------- ------- --------
 number of devices   10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = 'gem_hist_data7',
physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
size = '1600M'
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for a migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and its SQL query:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
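Putting the five fields together, a crontab line such as the following (the script path is an illustrative placeholder) runs a job at 02:30 every Sunday:

```shell
# min  hour  day-of-month  month  day-of-week  command
30 2 * * 0 /home/oracle/scripts/full_backup.sh
```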

select s.machine
from v$process p, v$session s
where s.paddr = p.addr
and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple. Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor.sql

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a column called used, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges:

• Perform STARTUP and SHUTDOWN operations
• CREATE SPFILE
• ALTER DATABASE OPEN/MOUNT/BACKUP
• ALTER DATABASE ARCHIVELOG
• ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
• Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set   New Character Set   New Character Set is strict superset?
US7ASCII                WE8ISO8859P1        yes
US7ASCII                AL24UTFFSS          yes
US7ASCII                UTF8                yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset, you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 Kb
FROM dba_segments
WHERE owner = UPPER('<owner>') AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 Kb
FROM dba_tables
WHERE owner = UPPER('<owner>') AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
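Those three pieces can be added up from the dynamic performance views. A sketch (it assumes v$controlfile exposes block_size and file_size_blks, which it does in 9i Release 2 and later):

```sql
-- Rough database size: datafiles + online redo logs (all members) + controlfiles.
-- Add (SELECT SUM(bytes) FROM v$tempfile) if temp files should count too.
SELECT (SELECT SUM(bytes) FROM v$datafile)
     + (SELECT SUM(bytes * members) FROM v$log)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
       AS total_bytes
FROM dual;
```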

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago)
babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='c:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where u is a unique 8 digit code, g is the logfile group number, and t is the tablespace name.

File Type            Format
Controlfiles         ora_u.ctl
Redo Log Files       ora_g_u.log
Datafiles            ora_t_u.dbf
Temporary Datafiles  ora_t_u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again Venture with me down what seems like a small option but in fact has major implications on what we as DBAs no longer have to manage

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite awhile that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against the performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you will find something that you like:

• No worries, no wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple. Include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.
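As a quick illustration, the DBMS_SPACE.SPACE_USAGE procedure mentioned above can be called as follows. This is only a sketch: the owner and segment names (SCOTT, EMP) are hypothetical, and the procedure only works against ASSM segments.

```sql
SET SERVEROUTPUT ON
DECLARE
  -- OUT parameters: block/byte counts per freeness bucket
  v_unf   NUMBER; v_unfb  NUMBER;   -- unformatted
  v_fs1   NUMBER; v_fs1b  NUMBER;   -- 0-25% free
  v_fs2   NUMBER; v_fs2b  NUMBER;   -- 25-50% free
  v_fs3   NUMBER; v_fs3b  NUMBER;   -- 50-75% free
  v_fs4   NUMBER; v_fs4b  NUMBER;   -- 75-100% free
  v_full  NUMBER; v_fullb NUMBER;   -- full blocks
BEGIN
  DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
    v_unf, v_unfb, v_fs1, v_fs1b, v_fs2, v_fs2b,
    v_fs3, v_fs3b, v_fs4, v_fs4b, v_full, v_fullb);
  DBMS_OUTPUT.PUT_LINE('Full blocks:        ' || v_full);
  DBMS_OUTPUT.PUT_LINE('75-100% free blocks: ' || v_fs4);
END;
/
```

The per-bucket counts show exactly how granular the bitmap states are compared with the old on/off freelist model.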

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
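A minimal maintenance sketch: the 90-day retention and the archive table name are assumptions, and the audit timestamp columns vary slightly between versions, so check your dictionary first.

```sql
-- Archive rows older than 90 days, then purge them from the audit trail
CREATE TABLE audit_archive AS
  SELECT * FROM dba_audit_trail WHERE timestamp < SYSDATE - 90;

DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```

Run this from a scheduled job so the purge keeps pace with the audit volume.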

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any user, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

sqlplus "/ as sysdba"   (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT', 'COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8, or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and it can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL directory to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly, as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE

Log onto an operating system session and navigate to the TraceAnalyzer directory

C:\> cd \oracle\ora92\TraceAnalyzer

Start SQLPlus

C:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQLPlus

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where hash_value = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM
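To see which job is burning the CPU when utilization climbs, a query along these lines joins the standard job views (a sketch):

```sql
-- Which jobs are executing right now, and what they run
SELECT r.sid, r.job, j.what, r.this_date
FROM dba_jobs_running r, dba_jobs j
WHERE r.job = j.job;
```

The SID can then be mapped to an OS process via v$session and v$process as shown elsewhere in these notes.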

Advanced Queuing, also known as AQ (QMN)
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g., a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1   Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
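The three offline modes described above map to ALTER TABLESPACE options like these (the tablespace name users is just an example):

```sql
ALTER TABLESPACE users OFFLINE NORMAL;     -- checkpoints all files; no recovery needed later
ALTER TABLESPACE users OFFLINE TEMPORARY;  -- checkpoints online files only
ALTER TABLESPACE users OFFLINE IMMEDIATE;  -- no checkpoint; archivelog mode only

-- After an IMMEDIATE offline, media recovery is required first:
RECOVER TABLESPACE users;
ALTER TABLESPACE users ONLINE;
```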

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can do this as the SYS user too; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As an instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner
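To see how close sessions are to the OPEN_CURSORS limit described above, a monitoring query against the standard v$ views can help (a sketch):

```sql
-- Current open cursor count per session, highest first
SELECT s.sid, st.value AS open_cursors
FROM v$sesstat st, v$statname sn, v$session s
WHERE sn.name = 'opened cursors current'
  AND st.statistic# = sn.statistic#
  AND st.sid = s.sid
ORDER BY st.value DESC;

-- Compare against the configured limit
SELECT value FROM v$parameter WHERE name = 'open_cursors';
```

Sessions sitting near the limit are the first candidates for the cursor-leak check discussed below.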

Billy Verreynne
Re: no of open cursor, posted Aug 26, 2007 10:33 PM in response to 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj for performance tuning

You may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google:

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from backupsybasectsintcocso6csoasecso_ot
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003, Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
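A sketch of such a logon trigger (the account name BULK_LOAD and the 100 MB sort area are assumptions, not anything from the original note):

```sql
CREATE OR REPLACE TRIGGER set_manual_pga
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'BULK_LOAD' THEN
    -- switch just this account to manual workarea management
    EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';
  END IF;
END;
/
```

All other sessions keep the automatic policy set at the instance level.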

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For more good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED       EXTRA          ESTIMATED PGA ESTIMATED OVER
FOR EST     FACTOR      ADV BYTES PROCESSED BYTES RW       CACHE HIT     ALLOC COUNT
----------- ----------- --- --------------- -------------- ------------- --------------
   12582912         .50 ON         17250304              0        100.00              3
   18874368         .75 ON         17250304              0        100.00              3
   25165824        1.00 ON         17250304              0        100.00              0
   30198784        1.20 ON         17250304              0        100.00              0
   35231744        1.40 ON         17250304              0        100.00              0
   40264704        1.60 ON         17250304              0        100.00              0
   45297664        1.80 ON         17250304              0        100.00              0
   50331648        2.00 ON         17250304              0        100.00              0
   75497472        3.00 ON         17250304              0        100.00              0
  100663296        4.00 ON         17250304              0        100.00              0
  150994944        6.00 ON         17250304              0        100.00              0
  201326592        8.00 ON         17250304              0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
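Once a value is chosen from the advice view, the target can be changed online. A minimal sketch (the 25M figure comes from the advice output above; workarea_size_policy defaults to AUTO when a target is set, and is shown only for completeness):

```sql
ALTER SYSTEM SET pga_aggregate_target = 25M;
ALTER SYSTEM SET workarea_size_policy = AUTO;
```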

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   --> checks if the audit trail is turned on

If the output is:

NAME         TYPE    VALUE
------------ ------- ------
audit_trail  string  DB

then go to step 3, else:
(a) shutdown immediate   (to enable the audit trail)
(b) edit init.ora in $ORACLE_HOME/admin/pfile and add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;   --> removes any audit trail data residing in the table
   SQL> audit table;           --> starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
        from dba_audit_trail
        where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
IQ PATH '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' IQ SIZE 2000
MESSAGE PATH '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
TEMPORARY PATH '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' TEMPORARY SIZE 1000
IQ PAGE SIZE 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential incremental backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative incremental backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things in a very simple way. I am not able to find anything I am missing. If so, please let me know.
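In RMAN syntax, the two flavors discussed in this thread look like the following sketch (differential is the default behavior for a level 1 backup):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              -- base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              -- differential incremental
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   -- cumulative incremental
```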

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

Msg 102, Level 15, State 1:
Server 'ddm', Line 1: Incorrect syntax near ','.

1> select name from sysconfigures where name like '%device%'
2> go
name
--------------------------------------------------------------------------------
number of devices
suspend audit when device full

(2 rows affected)

1> sp_configure 'number of devices'
2> go
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
-----------------  -------  -----------  ------------  ---------  ------  -------
number of devices  10       36           60            60         number  dynamic

(1 row affected)
(return status = 0)

1> sp_configure 'number of devices', 70
2> go
00:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name     Default  Memory Used  Config Value  Run Value  Unit    Type
-----------------  -------  -----------  ------------  ----------  ------  -------
number of devices  10       44           70            70         number  dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name = 'gem_hist_data7',
physname = '/data/syb125/gem_hist/gem_hist_data7.dat',
size = '1600M'
go

This query is used to find out the object name and lock ID:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
  and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find the locked object and the SQL statement holding the lock:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
  and c.sid = b.session_id
  and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)
hour (0-23)
day of the month (1-31)
month of the year (1-12)
day of the week (0-6, with 0=Sunday)
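For illustration of the five fields above (the script path is hypothetical), an entry that runs a backup script at 02:30 every Sunday would look like:

```
30 2 * * 0 /home/oracle/scripts/full_backup.sh > /tmp/full_backup.log 2>&1
```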

select s.machine
from v$process p, v$session s
where s.paddr = p.addr
  and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i there was no way to identify those indexes that were not being used by SQL queries This tip describes the Oracle9i method that allows the DBA to locate and delete un-used indexes

The approach is quite simple Oracle9i has a tool that allows you to monitor index usage with an alter index command You can then query and find those indexes that are unused and drop them from the database

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database and then query the new v$object_usage view

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON  USED
----------------------  ----------  ---  ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES  NO

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

• Perform STARTUP and SHUTDOWN operations
• CREATE SPFILE
• ALTER DATABASE OPEN/MOUNT/BACKUP
• ALTER DATABASE ARCHIVELOG
• ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
• Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change of the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>')
  AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>')
  AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
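The sum above can be computed in one query. A sketch (v$controlfile exposes the file size as block_size * file_size_blks in 9i and later, and v$log.bytes is the size of each member, so it is multiplied by the member count):

```sql
select (select sum(bytes) from v$datafile)
     + (select sum(block_size * file_size_blks) from v$controlfile)
     + (select sum(bytes * members) from v$log) as db_size_bytes
from dual;
```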

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for log files. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF: During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF: When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF: As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace: In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management I truly think you can find something that you will like

• No worries, and no wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist or off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup: To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options: Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL and DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail: The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
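The fields above map directly onto columns of the DBA_AUDIT_TRAIL view, so a quick review query is a sketch like:

```sql
select username, terminal, timestamp, owner, obj_name, action_name
from dba_audit_trail
order by timestamp desc;
```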

MaintenanceThe audit trail must be deletedarchived on a regular basis to prevent the SYSAUD$ table growing to an unnacceptable size

Security: Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to those users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT', 'COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract:

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used by the prior version of this tool and by the current TKPROF.

Product Name: RDBMS; Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges:

To install, it requires connection as a user with the SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename in udump directory>');

General Information: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and a substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the trace file is named orabase_ora_1708.trc and is located in /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K (AIX)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If lowering this parameter helps the contention on your processors but you take an overall performance hit afterwards, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% of a CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

ltBug1559103gt QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
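The query works because addr is rendered as a hexadecimal string, and each hex character encodes 4 bits, so the string length times 4 gives the word size of the binary. A minimal Python sketch of the same arithmetic (the addresses below are made-up examples, not real v$process values):

```python
# Each hex character encodes 4 bits, so the rendered address length * 4
# gives the word size of the Oracle binary.
def word_length_bits(addr_hex: str) -> int:
    return len(addr_hex) * 4

# A 32-bit server renders addresses as 8 hex chars; a 64-bit one as 16.
print(word_length_bits("5B2D3A04"))          # 32
print(word_length_bits("000000005B2D3A04"))  # 64
```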

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate the changes into the central configuration repository:

Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE:

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4,016; Registered: 5/27/99)

Re: no of open cursor. Posted Aug 26, 2007 10:33 PM, in response to 174313.

> How to resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing you to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles open for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
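The logic of that V$OPEN_CURSOR query (group by session and SQL identity, keep groups with more than 2 handles) can be sketched in Python. The rows below are made-up sample data standing in for V$OPEN_CURSOR rows:

```python
from collections import Counter

# Mimic the V$OPEN_CURSOR query: count cursor handles per
# (sid, address, hash_value) and keep groups with more than `threshold` copies.
def cursor_copies(open_cursors, threshold=2):
    counts = Counter((sid, addr, hv) for sid, addr, hv in open_cursors)
    return {key: n for key, n in counts.items() if n > threshold}

# Made-up rows: session 101 has leaked 5 handles for the same statement.
rows = [(101, "3A0F", 777)] * 5 + [(102, "3A0F", 777), (101, "9C21", 555)]
print(cursor_copies(rows))  # {(101, '3A0F', 777): 5}
```

Only the (sid, address, hash_value) group with 5 copies survives the filter, which is exactly what the HAVING COUNT(*) > 2 clause does.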

Nagaraj: for performance tuning, you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
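The behavior described above can be demonstrated with Python's os module in a throwaway temp directory (file names here are arbitrary examples):

```python
import os
import tempfile

# Demonstrate the difference: a symlink is a pointer to a path,
# a hard link is another directory entry for the same file data.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "stuff")
    with open(original, "w") as f:
        f.write("data")

    soft = os.path.join(d, "soft")
    hard = os.path.join(d, "hard")
    os.symlink(original, soft)   # like: ln -s original soft
    os.link(original, hard)      # like: ln original hard

    print(os.path.islink(soft))  # True  (shows up as a link in ls -l)
    print(os.path.islink(hard))  # False (looks like a plain file)

    # Deleting the original breaks the symlink but not the hard link.
    os.remove(original)
    print(os.path.exists(soft))  # False (dangling symlink)
    print(open(hard).read())     # data
```

This matches the note above: the hard link keeps the data alive after the original is removed, while the symlink dangles.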

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot


If you want to move all the objects to another tablespace, just do the following:

SQL> spool <yourpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<yourpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.
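The spool trick above is "SQL generating SQL": one query emits an ALTER statement per segment. A minimal Python sketch of the same idea (the segment list is made-up sample data standing in for the dba_segments result):

```python
# Build one "ALTER ... MOVE" statement per segment, as the spooled
# query does. `segments` is a list of (segment_type, segment_name) pairs.
def move_statements(segments, target_ts="XYZ"):
    return [
        f"alter {seg_type} {seg_name} move tablespace {target_ts};"
        for seg_type, seg_name in segments
    ]

segs = [("TABLE", "EMP"), ("TABLE", "DEPT")]  # made-up segment names
for stmt in move_statements(segs):
    print(stmt)
# alter TABLE EMP move tablespace XYZ;
# alter TABLE DEPT move tablespace XYZ;
```

As the note says, indexes still need an ALTER INDEX ... REBUILD and the moved objects need fresh statistics afterwards.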

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<osuser>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as 'connect internal' was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
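The approximate per-session cap described above is easy to compute for a given pga_aggregate_target. A small sketch (the 5% figure is the approximation from the text, and the target value is the one from the v$pgastat output later in this section):

```python
# Approximate per-session PGA cap (~5% of pga_aggregate_target),
# expressed in kilobytes as _smm_max_size is.
def session_pga_cap_kb(pga_aggregate_target_bytes, pct=0.05):
    return int(pga_aggregate_target_bytes * pct / 1024)

# With a ~2.4 GB pga_aggregate_target:
print(session_pga_cap_kb(2516582400))  # 122880 (KB), i.e. 120 MB
```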

Also note that automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                         829440000 bytes
aggregate PGA target parameter                   2516582400 bytes
bytes processed                                  2492928000 bytes
cache hit percentage                                  86.31 percent
extra bytes read/written                          395366400 bytes
global memory bound                               125747200 bytes
maximum PGA allocated                            2666188800 bytes
maximum PGA used for auto workareas                17203200 bytes
maximum PGA used for manual workareas              52531200 bytes
over allocation count                                     0
PGA memory freed back to OS                       675020800 bytes
total freeable PGA memory                           6553600 bytes
total PGA allocated                              2395750400 bytes
total PGA inuse                                  1528320000 bytes
total PGA used for auto workareas                         0 bytes
total PGA used for manual workareas                       0 bytes

16 rows selected.

The statistic 'maximum PGA allocated' displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice:

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET        ESTIMATED         ESTIMATED EXTRA   ESTIMATED PGA   ESTIMATED OVER
FOR EST      FACTOR      ADV   BYTES PROCESSED   BYTES RW          CACHE HIT       ALLOC COUNT
----------   ---------   ---   ---------------   ---------------   -------------   --------------
  12582912          .5   ON           17250304                 0          100.00                3
  18874368         .75   ON           17250304                 0          100.00                3
  25165824         1.0   ON           17250304                 0          100.00                0
  30198784         1.2   ON           17250304                 0          100.00                0
  35231744         1.4   ON           17250304                 0          100.00                0
  40264704         1.6   ON           17250304                 0          100.00                0
  45297664         1.8   ON           17250304                 0          100.00                0
  50331648         2.0   ON           17250304                 0          100.00                0
  75497472         3.0   ON           17250304                 0          100.00                0
 100663296         4.0   ON           17250304                 0          100.00                0
 150994944         6.0   ON           17250304                 0          100.00                0
 201326592         8.0   ON           17250304                 0          100.00                0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
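Since pga_aggregate_target is a dynamic parameter, it can be raised online once the advice view suggests a value; the 25M below is just the figure from the example above:

```sql
-- Raise the target online; no instance restart needed
ALTER SYSTEM SET pga_aggregate_target = 25M;
```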

There are other views that are also useful for PGA memory management

v$process:

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of current PGA usage across all processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

   If the output is:

   NAME          TYPE      VALUE
   ------------  --------  ------
   audit_trail   string    DB

   then go to step 3, else:

   (a) shutdown immediate    [to enable the audit trail]
   (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
   (c) create spfile from pfile
   (d) startup

3. SQL> truncate table aud$;    --> to remove any audit trail data residing in the table
   SQL> audit table;            --> this starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost,
               to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
        from dba_audit_trail
        where action_name like 'DROP TABLE%';

   This query gives you the username, along with the userhost from where the user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.
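The distinction discussed above maps directly onto RMAN syntax; a sketch of the three commands (level-1 backups are differential by default):

```
RMAN> # base: level 0 backup of the whole database
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

RMAN> # differential level 1: blocks changed since the last level 1 or level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

RMAN> # cumulative level 1: blocks changed since the last level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```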

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

"No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This query is used to find out the object name and lock id:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

To apply patches for migration from one version to another:
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts: catpatch.sql, catcio.sql, utlrp.sql, catexp.sql
5. shutdown immediate

Find out the locked object and its SQL:

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file

SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)

hour (0-23)

day of the month (1-31)

month of the year (1-12)

day of the week (0-6 with 0=Sunday)
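An example entry using the five fields above followed by the command to run (the script path and schedule are illustrative):

```
0 2 * * 0 /home/oracle/scripts/full_backup.sh
```

This would run the (hypothetical) backup script at 02:00 every Sunday.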

select s.machine from v$process p, v$session s where s.paddr = p.paddr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------  ----------  --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO
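Once an index is confirmed unused, monitoring can be switched off and the index dropped; a sketch using the index from the listing above:

```sql
-- Stop collecting usage information, then remove the unused index
ALTER INDEX customer_last_name_idx NOMONITORING USAGE;
DROP INDEX customer_last_name_idx;
```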

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books) and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges:

- Perform STARTUP and SHUTDOWN operations
- CREATE SPFILE
- ALTER DATABASE OPEN/MOUNT/BACKUP
- ALTER DATABASE ARCHIVELOG
- ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
- Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user file
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

(The 2048 here is the database block size; substitute your db_block_size.)

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
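That sum can be approximated from the data dictionary; a hedged sketch (controlfiles are left out here since they only add a few MB, and their size columns vary by version):

```sql
-- Rough database size: datafiles + tempfiles + redo logs, in MB
SELECT ROUND((
         (SELECT SUM(bytes) FROM dba_data_files) +
         (SELECT SUM(bytes) FROM dba_temp_files) +
         (SELECT SUM(bytes * members) FROM v$log)
       ) / 1024 / 1024) AS approx_mb
FROM dual;
```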

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyze the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

- Automatic cleanup of the filesystem when database objects are dropped
- Standardized naming of database files
- Increased portability, since file specifications are not needed
- Simplified creation of test systems on differing operating systems
- No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

- Managing Controlfiles Using OMF
- Managing Redo Log Files Using OMF
- Managing Tablespaces Using OMF
- Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.

Hope this helps. Regards, Tim.

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1122006

Oracle has done it again. Venture with me down what seems like a small option, but in fact has major implications on what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED, to keep blocks off the freelist) against a high PCTUSED, to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

- No worries
- No wasted time searching for problems that don't exist
- No planning needed for storage parameters
- Out-of-the-box performance for created objects
- No need to monitor levels of insert/update/delete rates
- Improvement in space utilization
- Better performance than most can tune or plan for, with concurrent access to objects
- Avoidance of data fragmentation
- Minimal data dictionary access
- Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.
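The switch-over described in the article, creating the auto-managed tablespace and then migrating objects into it, can be sketched as follows; the table, index, and owner names are hypothetical:

```sql
-- Move a segment into the new auto-managed tablespace, then rebuild its
-- index (a moved table's indexes become UNUSABLE until rebuilt)
ALTER TABLE scott.emp MOVE TABLESPACE no_space_worries_ts;
ALTER INDEX scott.emp_pk REBUILD TABLESPACE no_space_worries_ts;
```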

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;
AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;
AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
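A sketch of a query over those columns of DBA_AUDIT_TRAIL, filtered to the fireid user from the example above:

```sql
-- Recent audited actions for the audited user
SELECT username, terminal, timestamp, owner, obj_name, action_name
FROM   dba_audit_trail
WHERE  username = 'FIREID'
ORDER  BY timestamp;
```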

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
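A sketch of the periodic cleanup mentioned under Maintenance; the 90-day retention is an arbitrary example, and you would archive the rows first if the history is needed:

```sql
-- Purge audit records older than 90 days
-- (the column is timestamp# in 9i; from 10g the populated column is ntimestamp#)
DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```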

2. sqlplus "/ as sysdba"   (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');
EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by the current TKPROF.

Product Name: RDBMS. Product Version: 9i (9.2), 10g or higher.

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer: MetaLink Note 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM:

   GRANT SELECT ON dba_indexes TO <schema_name>;
   GRANT SELECT ON dba_ind_columns TO <schema_name>;
   GRANT SELECT ON dba_objects TO <schema_name>;
   GRANT SELECT ON dba_tables TO <schema_name>;
   GRANT SELECT ON dba_temp_files TO <schema_name>;
   GRANT SELECT ON dba_users TO <schema_name>;
   GRANT SELECT ON v_$instance TO <schema_name>;
   GRANT SELECT ON v_$latchname TO <schema_name>;
   GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the trace file is named orabase_ora_1708.trc and is located in /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K  (AIX: show kernel bitness)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation; slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNPn processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see which job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:
<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process
-----------------------------------------
Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g., a value of 22 means 0.22 seconds in 8i.
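Since the statistic is reported in centiseconds, converting it to seconds is just a divide-by-100; a quick shell sketch of the arithmetic (the value 22 is the example above):

```shell
# Convert "CPU used by this session" (centiseconds) to seconds
value=22
printf '%d.%02d seconds\n' $((value / 100)) $((value % 100))   # prints "0.22 seconds"
```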

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check whether your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
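As an OS-side companion check (an addition, not from the source), the word size of the operating system and userland can be read with getconf on POSIX systems; note this reflects the OS, which is not necessarily the bitness of the Oracle binary itself:

```shell
# Word size of the OS/userland in bits (typically 32 or 64)
getconf LONG_BIT
```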

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> right click on the file
  -> select Properties
  -> click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland

First verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

OFFLINE TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

OFFLINE IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size:

prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4016, Registered: 5/27/99)

Re: no of open cursor. Posted Aug 26, 2007 10:33 PM in response to 174313.

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e., application code defining ref cursors and using ref cursors, but never closing them.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles open for the very same SQL. Typically one will see a cursor-leaking application with hundreds of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj, for performance tuning you may first start by checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the profile) is a configuration file containing the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. As opposed to a hard link, a symbolic link is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file that appears just like an ordinary file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file; the data is not removed until the last name referring to it is deleted, so removing the original still leaves the contents reachable through the hard link.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
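A quick way to see the difference in action (a self-contained demo in a scratch directory, not the paths above):

```shell
# Demonstrate symbolic vs hard links in a temp directory
tmp=$(mktemp -d)
echo "hello" > "$tmp/original.txt"
ln -s "$tmp/original.txt" "$tmp/soft.txt"   # symbolic link: a pointer to the name
ln "$tmp/original.txt" "$tmp/hard.txt"      # hard link: a second name for the same data
rm "$tmp/original.txt"
cat "$tmp/hard.txt"                                   # prints "hello": data survives via the hard link
cat "$tmp/soft.txt" 2>/dev/null || echo "dangling"    # prints "dangling": symlink target is gone
rm -rf "$tmp"
```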

The syntax for creating a hard link of a directory is the same; to create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most UNIX filesystems do not permit hard links to directories, so this command will typically fail.)

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log.

> @<yourpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.


How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions; it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g., a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes; e.g., a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
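As a back-of-the-envelope check (illustrative arithmetic only; the real per-session cap is internal to Oracle and version-dependent), the approximate work-area limit under the 5% rule can be computed in the shell:

```shell
# Approximate per-session work-area cap: ~5% of pga_aggregate_target
pga_target_kb=$(( 2400 * 1024 ))              # e.g. a 2400 MB pga_aggregate_target
per_session_kb=$(( pga_target_kb * 5 / 100 ))
echo "${per_session_kb} KB"                   # prints "122880 KB" (= 120 MB)
```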

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE        UNIT
---------------------------------------- ------------ ------------
aggregate PGA auto target                829440000    bytes
aggregate PGA target parameter           2516582400   bytes
bytes processed                          2492928000   bytes
cache hit percentage                     86.31        percent
extra bytes read/written                 395366400    bytes
global memory bound                      125747200    bytes
maximum PGA allocated                    2666188800   bytes
maximum PGA used for auto workareas      17203200     bytes
maximum PGA used for manual workareas    52531200     bytes
over allocation count                    0
PGA memory freed back to OS              675020800    bytes
total freeable PGA memory                6553600      bytes
total PGA allocated                      2395750400   bytes
total PGA inuse                          1528320000   bytes
total PGA used for auto workareas        0            bytes
total PGA used for manual workareas      0            bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTD EXTRA        ESTD PGA     ESTD OVER
FOR EST         FACTOR ADV  BYTES PROCESSED   CACHE HIT %  ALLOC COUNT
----------- ---------- ---  ----------------  -----------  -----------
   12582912        0.5  ON          17250304       100.00            3
   18874368       0.75  ON          17250304       100.00            3
   25165824        1.0  ON          17250304       100.00            0
   30198784        1.2  ON          17250304       100.00            0
   35231744        1.4  ON          17250304       100.00            0
   40264704        1.6  ON          17250304       100.00            0
   45297664        1.8  ON          17250304       100.00            0
   50331648        2.0  ON          17250304       100.00            0
   75497472        3.0  ON          17250304       100.00            0
  100663296        4.0  ON          17250304       100.00            0
  150994944        6.0  ON          17250304       100.00            0
  201326592        8.0  ON          17250304       100.00            0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18 MB PGA target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25 MB target this would not have happened.
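Each advisory row's first column is simply the current target scaled by the factor; a quick shell check of that relationship (numbers taken from the advisory output above):

```shell
# PGA TARGET FOR EST = pga_aggregate_target * factor
current_target=25165824        # the 100% (factor 1) row, in bytes
factor_pct=50                  # the 0.5 row, expressed as a percentage
echo $(( current_target * factor_pct / 100 ))   # prints 12582912
```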

Keep in mind that pga_aggregate_target is not set in stone; it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management.

v$process

This will show the maximum PGA usage per process:

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log into the db as sysdba.

2. SQL> show parameter audit_trail  -> checks if the audit trail is turned on.

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate  [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$  -> to remove any audit trail data residing in the table.

4. SQL> audit table  -> this starts auditing events pertaining to tables.

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  -> this query gives you the username along with the userhost from where that user is connected.

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

Dbspaces:
system
temp             1000 MB
iq_system_main   2000 MB
iq_system_main2  1000 MB
iq_system_main3  5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
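The two types map directly onto RMAN command options; as a quick sketch (standard RMAN syntax, not part of the original thread):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              # differential (the default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # cumulative
```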

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
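The settings above are for Solaris (/etc/system). As an aside (not from the original thread), on Linux the equivalent semaphore limits can be inspected with:

```shell
# Prints four values: SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem
```

If semget fails with "No space left on device" there, SEMMNI (the last value) is usually the limit to raise.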

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
              • A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
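The symbolic-link vs hard-link behaviour described above can be verified in a quick shell session (illustrative file names only):

```shell
# Demonstrate the difference between hard and symbolic links
cd "$(mktemp -d)"
echo "data" > original.txt
ln original.txt hardlink.txt     # hard link: a second name for the same inode
ln -s original.txt symlink.txt   # symbolic link: a pointer to the name
rm original.txt
cat hardlink.txt                 # the data survives: the inode still has a reference
cat symlink.txt 2>/dev/null || echo "dangling symlink"
```

Deleting original.txt leaves the hard link readable but breaks the symbolic link.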

crontab [file]
crontab -r
crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)
hour (0-23)
day of the month (1-31)
month of the year (1-12)
day of the week (0-6, with 0=Sunday)
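Putting the five time fields together with a command, a sample entry (the script path is hypothetical) that runs a backup script at 02:30 every Sunday:

```
30 2 * * 0 /u01/scripts/rman_backup.sh > /tmp/rman_backup.log 2>&1
```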

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach can make SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify the indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------- ----------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges:

Perform STARTUP and SHUTDOWN operations

CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch user
2. Check the cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>') AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>') AND table_name = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
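That sum can be computed from the dynamic performance views; a sketch (assumes 10g-era column names in v$controlfile, and ignores temp files, which could be added from v$tempfile):

```
SELECT (SELECT SUM(bytes) FROM v$datafile)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
     + (SELECT SUM(bytes * members) FROM v$log) AS total_db_bytes
FROM dual;
```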

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If you want to know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)
OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for log files. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='c:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF
During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF
When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the files and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF
As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a file of a specific size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files as well. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim.

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert). Posted 1122006. Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, then inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like.

No worries. No wasted time searching for problems that don't exist. No planning needed for storage parameters. Out-of-the-box performance for created objects. No need to monitor levels of insert/update/delete rates. Improvement in space utilization. Better performance than most can tune or plan for with concurrent access to objects. Avoidance of data fragmentation. Minimal data dictionary access. A better indicator of the state of a data block. Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE
  '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS, which will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE, which gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing
The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup
To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options
Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail
The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
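A minimal archive-and-purge sketch (the archive table name and tablespace are hypothetical; connect as SYS):

```
CREATE TABLE aud_archive TABLESPACE users AS SELECT * FROM sys.aud$;
TRUNCATE TABLE sys.aud$;
```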

Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in the prior version of this tool and in the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 243, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL Trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with the SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information
Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation; slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own, they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The CPU used by this session statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from CPU used by this session (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).
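Since v$sesstat and the 10046 trace report time in different units, it is easy to misread values by a factor of 10,000. A minimal sketch of the two conversions (plain Python for illustration, not an Oracle API):

```python
# v$sesstat "CPU used by this session" is reported in centiseconds (1/100 s),
# while 10046 trace timings in newer releases are in microseconds.

def centiseconds_to_seconds(cs: int) -> float:
    """Convert a v$sesstat CPU value (1/100ths of a second) to seconds."""
    return cs / 100.0

def microseconds_to_seconds(us: int) -> float:
    """Convert a 10046 trace timing (microseconds) to seconds."""
    return us / 1_000_000.0

# A value of 22 from v$sesstat means 0.22 seconds:
print(centiseconds_to_seconds(22))       # 0.22
# The same elapsed time in a 10046 trace would appear as 220000:
print(microseconds_to_seconds(220000))   # 0.22
```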

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in
      (select statistic# from v$statname
       where name = 'CPU used by this session')
  and se.sid = ss.sid
  and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
  and s.sid = &p
  and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive, and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

  (MS04-004: Cumulative security update for Internet Explorer)

  or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll:

-> Right click on the file
-> Select Properties
-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

#vvv Oracle Note 269980.1 vvvvvvv
#KeepAlive On
KeepAlive Off
#^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is: exporting operators, exporting referential integrity constraints, exporting triggers:

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1, then bash
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Re: no of open cursors (posted Aug 26, 2007):

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement for which the app creates so many handles, and then trace and fix the problem in the application.
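The grouping logic of that query can be sketched outside the database. The rows below are hypothetical stand-ins for v$open_cursor data; the same group-and-count idea flags a leaking session:

```python
# Group (sid, address, hash_value) tuples and keep groups with more than
# 2 handles -- the signature of a cursor leak described above.
from collections import Counter

open_cursors = [
    (101, "0x1A2B", 987654321),  # sid 101 opens the same SQL repeatedly
    (101, "0x1A2B", 987654321),
    (101, "0x1A2B", 987654321),
    (101, "0x1A2B", 987654321),
    (202, "0x3C4D", 123456789),  # sid 202 holds only one handle
]

copies = Counter(open_cursors)
leaks = {key: n for key, n in copies.items() if n > 2}
print(leaks)  # {(101, '0x1A2B', 987654321): 4}
```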

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from 'backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at the instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at the session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session, for example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
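As a rough illustration of that ~5% rule of thumb (the numbers are hypothetical; 2,516,582,400 bytes is simply a 2400 MB target, matching the v$pgastat listing shown later in this section):

```python
# Approximate per-session work-area cap implied by a pga_aggregate_target,
# using the ~5% rule of thumb described above.
# Note: _smm_max_size itself is expressed in KB.
pga_aggregate_target_bytes = 2_516_582_400   # e.g. a 2400 MB target

session_cap_bytes = int(pga_aggregate_target_bytes * 0.05)
session_cap_kb = session_cap_bytes // 1024   # the unit _smm_max_size uses

print(session_cap_bytes, session_cap_kb)  # 125829120 122880
```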

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_for_estimate view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                         829440000 bytes
aggregate PGA target parameter                   2516582400 bytes
bytes processed                                  2492928000 bytes
cache hit percentage                                  86.31 percent
extra bytes read/written                          395366400 bytes
global memory bound                               125747200 bytes
maximum PGA allocated                            2666188800 bytes
maximum PGA used for auto workareas                17203200 bytes
maximum PGA used for manual workareas              52531200 bytes
over allocation count                                     0
PGA memory freed back to OS                       675020800 bytes
total freeable PGA memory                           6553600 bytes
total PGA allocated                              2395750400 bytes
total PGA inuse                                  1528320000 bytes
total PGA used for auto workareas                         0 bytes
total PGA used for manual workareas                       0 bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
   FOR EST  FACTOR ADV    BYTES PROCESSED       BYTES RW    CACHE HIT      ALLOC COUNT
---------- ------- --- ------------------ -------------- -------------- --------------
  12582912     0.5 ON            17250304              0         100.00              3
  18874368    0.75 ON            17250304              0         100.00              3
  25165824     1.0 ON            17250304              0         100.00              0
  30198784     1.2 ON            17250304              0         100.00              0
  35231744     1.4 ON            17250304              0         100.00              0
  40264704     1.6 ON            17250304              0         100.00              0
  45297664     1.8 ON            17250304              0         100.00              0
  50331648     2.0 ON            17250304              0         100.00              0
  75497472     3.0 ON            17250304              0         100.00              0
 100663296     4.0 ON            17250304              0         100.00              0
 150994944     6.0 ON            17250304              0         100.00              0
 201326592     8.0 ON            17250304              0         100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
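One way to read the advice view, sketched with hypothetical rows mirroring the output above: take the smallest target whose estimated over-allocation count is zero:

```python
# Each tuple is (pga_target_for_estimate, estd_overalloc_count), as in
# v$pga_target_advice. The candidate targets are those Oracle estimates
# it would never have to exceed; the smallest of them is a sensible floor.
advice = [
    (12582912, 3),
    (18874368, 3),
    (25165824, 0),
    (30198784, 0),
    (50331648, 0),
]

candidates = [target for target, overalloc in advice if overalloc == 0]
recommended = min(candidates)
print(recommended)  # 25165824, i.e. the ~25M row referred to above
```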

There are other views that are also useful for PGA memory management.

v$process

select
   max(pga_used_mem)  max_pga_used_mem,
   max(pga_alloc_mem) max_pga_alloc_mem,
   max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
   sum(pga_used_mem)  sum_pga_used_mem,
   sum(pga_alloc_mem) sum_pga_alloc_mem,
   sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the DB as sysdba.

2. SQL> show parameter audit_trail  --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
2(a) shutdown immediate      [to enable the audit trail]
 (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
 (c) create spfile from pfile;
 (d) startup

3. truncate table aud$;  --> removes any audit trail data residing in the table
4. SQL> audit table;  --> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  --> this query gives you the username, along with the userhost from which that username is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
temporary size 1000
iq page size 65536

system
temp              1000 MB
iq_system_main    2000 MB
iq_system_main2   1000 MB
iq_system_main3   5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.
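The distinction can be sketched with a toy change log (the file names are hypothetical): a differential backup gathers everything since the last full backup, while an incremental gathers only what changed since the previous backup of any kind:

```python
# Files changed on each day after Sunday's full backup.
changed = {
    "Mon": {"a", "b"},
    "Tue": {"c"},
    "Wed": {"a", "d"},
}

# Differential (cumulative): union of all changes since the full backup.
differential_wed = changed["Mon"] | changed["Tue"] | changed["Wed"]
# Incremental: only what changed since the previous (Tuesday) backup.
incremental_wed = changed["Wed"]

print(sorted(differential_wed))  # ['a', 'b', 'c', 'd']
print(sorted(incremental_wed))   # ['a', 'd']
```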

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

> > ORA-27154: post/wait create failed
> > ORA-27300: OS system dependent operation: semget failed with status: 28
> > ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process

day of the month (1-31)

month of the year (1-12)

day of the week (0-6, with 0=Sunday)
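Combined with the leading minute and hour fields, a full crontab entry has five time fields followed by the command. For example (the script path is invented for illustration):

```
# min hour day-of-month month day-of-week  command
30   2    1             *     *            /home/oracle/scripts/backup.sh
```

This entry runs the script at 02:30 on the first day of every month.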

select s.machine from v$process p, v$session s where s.paddr = p.addr and p.spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple: Oracle9i has a tool that allows you to monitor index usage with an ALTER INDEX command. You can then query and find those indexes that are unused, and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system

set pages 999
set heading off
spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but the tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME      MON USED
----------------------- --------------- --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER        YES NO
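Once monitoring has been in place long enough to be representative, the same spool technique can generate the DROP statements. This is a sketch, not part of the original tip; review the spooled list by hand before running it, since v$object_usage only reflects usage since monitoring was enabled:

```sql
spool drop_unused.sql

select 'drop index '||index_name||';'
from v$object_usage
where used = 'NO';

spool off
```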

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

SYSOPER privileges:

Perform STARTUP and SHUTDOWN operations

CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS and UTF8:

Current Character Set  New Character Set  New Character Set is strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_NCHAR_character_set;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user file.
2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>')
AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>')
AND table_name = UPPER('<table_name>');

May 23: If you want to know about database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj http://dbataj.blogspot.com  Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.
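The rule of thumb above (datafiles + controlfiles + redo logs) can be expressed as one query. This is a sketch: the v$controlfile size columns (block_size, file_size_blks) assume 10g or later, and v$tempfile is included since temp files are not in v$datafile:

```sql
select round((
         (select sum(bytes) from v$datafile)
       + (select sum(bytes) from v$tempfile)
       + (select sum(bytes * members) from v$log)
       + (select sum(block_size * file_size_blks) from v$controlfile)
       ) / 1024 / 1024 / 1024, 2) "Total size (GB)"
from dual;
```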

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped.
• Standardized naming of database files.
• Increased portability, since file specifications are not needed.
• Simplified creation of test systems on differing operating systems.
• No unused files wasting disk space.

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format

Controlfiles         ora_%u.ctl

Redo Log Files       ora_%g_%u.log

Datafiles            ora_%t_%u.dbf

Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group, in the specified locations, when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation, or assigned afterwards:

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert) | Posted 1/12/2006 | Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this generate no redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps, for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist, until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED, to keep blocks off the freelist) against a high PCTUSED, to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks besides the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something here that you will like:

• No worries.
• No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out-of-the-box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• Better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.
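The contrast the article describes can be made concrete with a sketch (table and tablespace names are invented for illustration) of the storage clauses manual segment space management used to require, versus what ASSM leaves you to write:

```sql
-- Manual segment space management: every setting is a guess to revisit later
CREATE TABLE orders_manual (
  id      NUMBER,
  payload VARCHAR2(100)
)
PCTFREE 10 PCTUSED 40
STORAGE (FREELISTS 4 FREELIST GROUPS 2)
TABLESPACE manual_ts;

-- In an ASSM tablespace the storage clause disappears; bitmaps handle it
CREATE TABLE orders_auto (
  id      NUMBER,
  payload VARCHAR2(100)
)
TABLESPACE assm_ts;
```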

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, specify auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly, or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username.
• Terminal: machine that the user performed the action from.
• Timestamp: when the action occurred.
• Object Owner: the owner of the object that was interacted with.
• Object Name: the name of the object that was interacted with.
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).
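A quick way to pull exactly those columns is a query against the standard DBA_AUDIT_TRAIL view (a sketch; adjust the date window to taste):

```sql
select username, terminal, timestamp, owner, obj_name, action_name
from dba_audit_trail
where timestamp > sysdate - 1
order by timestamp;
```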

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12), and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher
Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: platform independent
Date Created: version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus
from the schema owning the transaction that generated the raw SQL trace.
For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege.
Once installed, it does not require special privileges and can be executed from
any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and a substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM:

   GRANT SELECT ON dba_indexes TO <schema_name>;
   GRANT SELECT ON dba_ind_columns TO <schema_name>;
   GRANT SELECT ON dba_objects TO <schema_name>;
   GRANT SELECT ON dba_tables TO <schema_name>;
   GRANT SELECT ON dba_temp_files TO <schema_name>;
   GRANT SELECT ON dba_users TO <schema_name>;
   GRANT SELECT ON v_$instance TO <schema_name>;
   GRANT SELECT ON v_$latchname TO <schema_name>;
   GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========

These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes, may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves, and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================

The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:

<Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ / QMN
========================================

The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU, when combined with replication

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing, and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
----------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds, in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home)
  available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
  the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available
  to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
  the request.
  MOD_OC4J_0207: In internal process table, failed to find an available
  oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE '%JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining plan stability for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can do this as the SYS user too; however, connecting to the database as the SYS user is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user: su - db2inst1; bash

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> How to resolve this if the no. of open cursors exceeds the value given in init.ora?

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified (using V$SESSION), you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
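The cursor-leak pattern described above is not Oracle-specific. A minimal Python sketch (using sqlite3 purely as a stand-in for any database API; the function names are made up for illustration) shows how handles pile up when each call opens a cursor but never closes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

leaked = []

def query_leaky():
    """Leaky pattern: a new cursor per call, never closed."""
    cur = conn.cursor()            # handle allocated
    cur.execute("SELECT x FROM t")
    leaked.append(cur)             # reference kept, cursor never closed

def query_clean():
    """Correct pattern: close the cursor when done."""
    cur = conn.cursor()
    try:
        cur.execute("SELECT x FROM t")
        return cur.fetchall()
    finally:
        cur.close()                # handle released immediately

for _ in range(100):
    query_leaky()                  # open handles accumulate, one per call
    query_clean()                  # never holds more than one extra handle

print(len(leaked))                 # -> 100 cursors still open for the very same SQL
```

This is exactly the signature the V$OPEN_CURSOR query above looks for: many handles for one SQL statement in one session.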

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database /sybdata1/syb126/IQ/cso_ot/cso_ot.db from /backup/sybase/ctsintco/cso6/csoase/cso_ot
rename IQ_SYSTEM_MAIN to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq
rename IQ_SYSTEM_MAIN1 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq
rename IQ_SYSTEM_MAIN2 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq
rename IQ_SYSTEM_TEMP to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp
rename IQ_SYSTEM_TEMP1 to /sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp
select * from sys.sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session (<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA. August 5, 2003. Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=<filename> password=<password> entries=<max_users>

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
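As a rough illustration (the exact internal formula is version-dependent and undocumented; the ~5% figure and the kilobyte unit are taken from the text above), the per-session work-area ceiling implied by a given pga_aggregate_target can be sketched as:

```python
def smm_max_size_kb(pga_aggregate_target_bytes, pct=0.05):
    """Approximate per-session work-area limit (_smm_max_size), in KB.

    Assumes the ~5% rule described above; real instances compute this
    internally and may differ by version.
    """
    return int(pga_aggregate_target_bytes * pct) // 1024

# A 2.5 GB pga_aggregate_target (2516582400 bytes, as in the v$pgastat
# listing later in this section):
print(smm_max_size_kb(2516582400))   # -> 122880, i.e. ~120 MB per session
```

The point is only the order of magnitude: a large target still caps any single session well below the total.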

Also note that automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic 'maximum PGA allocated' will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTD PGA   ESTD OVER
FOR EST     FACTOR     ADV  BYTES PROCESSED  BYTES RW   CACHE HIT  ALLOC COUNT
----------- ---------- ---  ---------------- ---------- ---------- -----------
   12582912        0.5 ON           17250304          0     100.00           3
   18874368       0.75 ON           17250304          0     100.00           3
   25165824        1.0 ON           17250304          0     100.00           0
   30198784        1.2 ON           17250304          0     100.00           0
   35231744        1.4 ON           17250304          0     100.00           0
   40264704        1.6 ON           17250304          0     100.00           0
   45297664        1.8 ON           17250304          0     100.00           0
   50331648        2.0 ON           17250304          0     100.00           0
   75497472        3.0 ON           17250304          0     100.00           0
  100663296        4.0 ON           17250304          0     100.00           0
  150994944        6.0 ON           17250304          0     100.00           0
  201326592        8.0 ON           17250304          0     100.00           0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
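One simple way to read rows like those above (illustrative only; the column names follow v$pga_target_advice) is to pick the smallest candidate target whose estimated over-allocation count is zero:

```python
# (pga_target_for_estimate in bytes, estd_overalloc_count),
# transcribed from the advice listing above.
advice = [
    (12582912, 3), (18874368, 3), (25165824, 0), (30198784, 0),
    (35231744, 0), (40264704, 0), (45297664, 0), (50331648, 0),
]

def recommend_target(rows):
    """Smallest advised pga_aggregate_target with no estimated over-allocation."""
    candidates = [size for size, overalloc in rows if overalloc == 0]
    return min(candidates) if candidates else None

print(recommend_target(advice) // (1024 * 1024), "MB")  # -> 24 MB
```

That matches the text's conclusion: 18M would have over-allocated three times, while ~25M (24 MB) would not.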

There are other views that are also useful for PGA memory management:

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

The following displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already-canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a drop table command in a database:

1. Login to the db as sysdba.

2. SQL> show parameter audit_trail  -> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

(a) shutdown immediate  [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$  -> to remove any audit trail data residing in the table

4. SQL> audit table  -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  -> this query gives you the username, along with the userhost from where the user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
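The distinction can be sketched mechanically (a hypothetical helper, not RMAN itself): given a backup history, a differential level 1 backs up blocks changed since the most recent level 1 or level 0, while a cumulative level 1 reaches back to the most recent level 0:

```python
# Each backup: (scn_when_taken, level). Blocks: {block_id: scn_of_last_change}.
history = [(100, 0), (200, 1), (300, 1)]   # level 0 at SCN 100, level 1s at 200 and 300
blocks  = {1: 50, 2: 150, 3: 250, 4: 350}  # block -> SCN of last change

def base_scn(history, cumulative):
    """SCN the next level-1 backup measures changes against."""
    if cumulative:
        return max(scn for scn, lvl in history if lvl == 0)  # most recent level 0
    return max(scn for scn, lvl in history)                  # most recent level 1 or 0

def blocks_to_back_up(blocks, history, cumulative=False):
    since = base_scn(history, cumulative)
    return sorted(b for b, changed in blocks.items() if changed > since)

print(blocks_to_back_up(blocks, history, cumulative=False))  # [4] — since SCN 300
print(blocks_to_back_up(blocks, history, cumulative=True))   # [2, 3, 4] — since SCN 100
```

The cumulative run re-copies blocks 2 and 3 even though earlier level 1s already had them, which is exactly the extra space/time the OTN text describes.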

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> 'No space left on device' sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


set pages 999
set heading off

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off

@run_monitor

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a single column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME              TABLE_NAME  MON USED
----------------------  ----------  --- ----
CUSTOMER_LAST_NAME_IDX  CUSTOMER    YES NO

If you like Oracle tuning, you might enjoy my latest book, "Oracle Tuning: The Definitive Reference" by Rampant TechPress (I don't think it is right to charge a fortune for books), and you can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

SYSOPER privileges:

Perform STARTUP and SHUTDOWN operations

CREATE SPFILE

ALTER DATABASE OPEN/MOUNT/BACKUP

ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set  New Character Set  Strict superset?
US7ASCII               WE8ISO8859P1       yes
US7ASCII               AL24UTFFSS         yes
US7ASCII               UTF8               yes
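The strict-superset condition (every source codepoint maps to the same codepoint value in the target) can be sketched for the single-byte/ASCII cases with Python's codecs; the character-set names are Oracle's, and the encodings below are their approximate IANA equivalents, which is an assumption for illustration only:

```python
def is_strict_superset(source_enc, target_enc, codepoints=range(128)):
    """True if every source byte value decodes to the same character
    under the target encoding, i.e. same codepoint value in both."""
    for cp in codepoints:
        if bytes([cp]).decode(source_enc) != bytes([cp]).decode(target_enc):
            return False
    return True

# US7ASCII ~ 'ascii', WE8ISO8859P1 ~ 'latin-1', UTF8 ~ 'utf-8'
print(is_strict_superset("ascii", "latin-1"))  # True
print(is_strict_superset("ascii", "utf-8"))    # True
```

This only checks the 7-bit range, which is why US7ASCII in particular migrates safely to all three targets in the table above.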

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file.
2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
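That rule of thumb is just a sum over the three file lists; a sketch of the arithmetic, with made-up byte counts standing in for what you would pull from v$datafile, v$controlfile, and v$log:

```python
# Hypothetical sizes in bytes (illustrative values, not from a real instance).
datafiles    = [104857600, 524288000]   # two datafiles: 100 MB + 500 MB
controlfiles = [10485760]               # one controlfile: 10 MB
redo_logs    = [52428800, 52428800]     # two redo log groups: 50 MB each

def db_size_mb(datafiles, controlfiles, redo_logs):
    """Datafile + controlfile + redo log sizes, reported in MB."""
    total = sum(datafiles) + sum(controlfiles) + sum(redo_logs)
    return total / (1024 * 1024)

print(db_size_mb(datafiles, controlfiles, redo_logs))  # -> 710.0
```

As the follow-up reply notes, this measures allocated size; actual used space is a different (dba_extents-based) question.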

Regards, Taj. http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter If it is defined on its own all files are placed in the same location If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined alternate locations and levels of multiplexing can be defined for Logfiles These parameters are dymanic and can be changed using the ALTER SYSTEM statement

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
-------------------  --------------
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1
  DEFAULT TEMPORARY TABLESPACE dts1
  TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online
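To check which tablespace is currently the database default temporary tablespace, you can query DATABASE_PROPERTIES (a quick sketch; the property name is as documented for 9i and later):

```sql
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```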

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but in fact has major implications on what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite awhile that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist, until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management I truly think you can find something that you will like

• No worries.
• No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out of the box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• Better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username.
• Terminal: Machine that the user performed the action from.
• Timestamp: When the action occurred.
• Object Owner: The owner of the object that was interacted with.
• Object Name: The name of the object that was interacted with.
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
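A minimal archive-then-purge sketch, assuming a 90-day retention; the audit_archive table name is hypothetical, and the TIMESTAMP# column of SYS.AUD$ applies to 9i/10g dictionaries:

```sql
-- run once: create an archive table with the same shape as SYS.AUD$
CREATE TABLE audit_archive AS SELECT * FROM sys.aud$ WHERE 1 = 0;

-- periodically: copy out, then delete, entries older than 90 days
INSERT INTO audit_archive SELECT * FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
DELETE FROM sys.aud$ WHERE timestamp# < SYSDATE - 90;
COMMIT;
```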

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');
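After a schema-wide compile it is worth checking what remains invalid; a simple sketch against the data dictionary:

```sql
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_name;
```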

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest the following is what I did to install TraceAnalyzer so that it would be owned by the SYSTEM schema

1. Created a directory named INSTALL.
2. Unzipped TRCA.zip into the INSTALL directory.
3. Created a directory under $ORACLE_HOME named TraceAnalyzer.
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory.
5. Logged onto Oracle as SYS:

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM.
8. Ran the installation script TRCACREA.sql.

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
----------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID Note2699801 Type PROBLEM

Last Revision

Date 08-FEB-2007

Status ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home)
                 available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
                 the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available
                 to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
                 the request.
  MOD_OC4J_0207: In internal process table, failed to find an available
                 oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
   for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First verify that this package exists with the following query

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user:
   su - db2inst1
2. Go to the sqllib directory:
   cd sqllib
3. Stop the instance:
   $ db2stop
4. Start the instance. As the instance owner on the host running db2, issue the following command:
   $ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
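To see how close sessions actually get to the limit, a commonly used sketch compares the per-session high-water mark against the parameter:

```sql
select max(a.value) as highest_open_cur, p.value as max_open_cur
from v$sesstat a, v$statname b, v$parameter p
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and p.name = 'open_cursors'
group by p.value;
```

If highest_open_cur keeps climbing toward max_open_cur, suspect a cursor leak rather than an undersized parameter.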

Werner

Billy Verreynne | Posts: 4016 | Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to 174313


> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj for performance tuning

you may first start checking the following views/tables:
DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMANORA) The Connection Manager configuration file (CMANORA) contains the parameters that specify preferences for using

Oracle Connection Manager CMANORA is located at $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows NT

Restore database /sybdata1/syb126/IQ/cso_ot/cso_ot.db from backup:

restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sys.sysiqfile
sp_iqstatus

stop_asiq
restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
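As a quick, generic UNIX sketch (the file names here are throwaway examples for the demo, not from the original tip), the following shows the practical difference: a hard link keeps the data reachable after the original name is removed, while a symbolic link is left dangling.

```shell
# work in a throwaway directory so nothing real is touched
demo_dir=$(mktemp -d)
echo "hello" > "$demo_dir/stuff"

# hard link: a second directory entry for the same inode
ln "$demo_dir/stuff" "$demo_dir/thing"

# symbolic link: just a pointer to the original path
ln -s "$demo_dir/stuff" "$demo_dir/archive"

# remove the original name: the hard link still reads, the symlink dangles
rm "$demo_dir/stuff"
cat "$demo_dir/thing"                                   # prints: hello
cat "$demo_dir/archive" 2>/dev/null || echo "dangling"  # prints: dangling
```

Running `ls -l` in the demo directory afterwards shows the symlink arrow notation and a link count of 1 on the surviving hard link.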

The syntax for creating a hard link of a directory is the same (though note that most filesystems disallow hard links to directories, or restrict them to the superuser). To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

SQL> spool <urpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes and gather statistics for those objects.

If you want to move all the objects to another tablespace, just repeat the same steps: spool the generated "alter ... move tablespace" statements to a log file, run the spool file, verify the objects in the new tablespace, rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);
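Once a trace has been captured, the raw trace file lands in the directory pointed to by the user_dump_dest parameter and is usually formatted with tkprof. A minimal sketch only; the directory and trace file name below are illustrative, not fixed:

```
# find where trace files land (shown by: show parameter user_dump_dest in SQL*Plus)
cd /u01/app/oracle/admin/ORCL/udump     # illustrative path
tkprof orcl_ora_12345.trc trace_report.txt sys=no sort=prsela,exeela,fchela
```

sys=no suppresses recursive SYS statements in the report; the sort options order statements by parse, execute, and fetch elapsed time so the most expensive SQL appears first.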

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=<filename> password=<password> entries=<max_users>

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
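As a rough illustration only (the 5% figure is the behavior described above, not a documented contract, so treat the arithmetic as a sketch), you can eyeball what that per-session cap would work out to for your current setting:

```sql
-- illustrative sketch: approximate per-session work area cap
-- implied by the ~5% rule described above
select to_number(value)               as pga_target_bytes,
       round(to_number(value) * 0.05) as approx_session_cap_bytes
from   v$parameter
where  name = 'pga_aggregate_target';
```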

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings, and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET        ESTD EXTRA       ESTD PGA    ESTD OVER
FOR EST        FACTOR  ADV  BYTES PROCESSED     BYTES RW   CACHE HIT  ALLOC COUNT
------------ ---------- --- ---------------- ------------ ---------- -----------
    12582912        0.5 ON          17250304            0     100.00           3
    18874368       0.75 ON          17250304            0     100.00           3
    25165824        1.0 ON          17250304            0     100.00           0
    30198784        1.2 ON          17250304            0     100.00           0
    35231744        1.4 ON          17250304            0     100.00           0
    40264704        1.6 ON          17250304            0     100.00           0
    45297664        1.8 ON          17250304            0     100.00           0
    50331648        2.0 ON          17250304            0     100.00           0
    75497472        3.0 ON          17250304            0     100.00           0
   100663296        4.0 ON          17250304            0     100.00           0
   150994944        6.0 ON          17250304            0     100.00           0
   201326592        8.0 ON          17250304            0     100.00           0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select max(pga_used_mem)  max_pga_used_mem,
       max(pga_alloc_mem) max_pga_alloc_mem,
       max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select sum(pga_used_mem)  sum_pga_used_mem,
       sum(pga_alloc_mem) sum_pga_alloc_mem,
       sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    - - -> checks if the audit trail is turned on

If the output is:

NAME          TYPE     VALUE
------------- -------- ------
audit_trail   string   DB

then go to step 3, else:
(a) shutdown immediate    - - - [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;    - - -> to remove any audit trail data residing in the table
   SQL> audit table;    - - -> this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    - - -> this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go


Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
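In RMAN terms, the two flavors map onto the INCREMENTAL syntax; a minimal sketch (level 1 backups are differential by default, and the CUMULATIVE keyword makes them cumulative):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;            -- base (level 0) backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;            -- differential incremental
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; -- cumulative incremental
```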

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj:

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


ALTER DATABASE ARCHIVELOG

ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)

Includes the RESTRICTED SESSION privilege

Changing the Character Set After Database Creation

In some cases you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance, the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command, since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set   New Character Set   New Character Set is strict superset?
US7ASCII                WE8ISO8859P1        yes
US7ASCII                AL24UTFFSS          yes
US7ASCII                UTF8                yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set>;
ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set>;

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file. 2. Check the cron.deny file also.
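A small shell sketch of the cron.deny check (the helper name and the sample deny file are made up for the demo; the real deny file lives under /etc or /etc/cron.d depending on the platform):

```shell
# return success if a user is listed in a cron.deny-style file
cron_denied() {
  user=$1
  deny=$2
  [ -f "$deny" ] && grep -q "^${user}\$" "$deny"
}

# demo against a throwaway file standing in for /etc/cron.deny
deny_file=$(mktemp)
printf 'guest\nnobody\n' > "$deny_file"

cron_denied guest  "$deny_file" && echo "guest denied"    # prints: guest denied
cron_denied oracle "$deny_file" || echo "oracle allowed"  # prints: oracle allowed
```

Point the helper at the platform's actual deny file (e.g. /etc/cron.deny) when troubleshooting a real account.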

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 Kb
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>') AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 Kb
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know about database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
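That back-of-the-envelope total can be pulled from the dynamic views in one statement; a sketch only (this assumes v$controlfile exposes block_size and file_size_blks, which it does in 9i and later, and multiplies redo size by the member count):

```sql
-- rough total database size: datafiles + tempfiles + redo (all members) + controlfiles
select (select sum(bytes) from v$datafile)
     + (select sum(bytes) from v$tempfile)
     + (select sum(bytes * members) from v$log)
     + (select sum(block_size * file_size_blks) from v$controlfile) as total_bytes
from dual;
```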

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)
OMF simplifies the creation of databases as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name.

File Type               Format
Controlfiles            ora_%u.ctl
Redo Log Files          ora_%g_%u.log
Datafiles               ora_%t_%u.dbf
Temporary Datafiles     ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF
During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF
When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the files and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at the operating system level.

Managing Tablespaces Using OMF
As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace
In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation, or assigned afterwards:

CREATE DATABASE TSH1
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1/12/2006. Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option, but in fact has major implications on what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite awhile that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries. No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out-of-the-box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• Better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS, which allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that reports how space is being used within the blocks under a segment's high-water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.
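For example, the database time zone can be checked alongside the session time zone (both functions are standard; output depends on your instance):

```sql
SELECT dbtimezone, sessiontimezone FROM dual;
```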

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

- Server Setup
- Audit Options
- View Audit Trail
- Maintenance
- Security

Server Setup

To allow auditing on the server, you must:

- Set audit_trail = true in the init.ora file
- Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
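As an illustration, the fields listed above can be pulled with a query along these lines (column names are per the standard DBA_AUDIT_TRAIL view; verify against your version):

```sql
SELECT username,
       terminal,
       timestamp,
       owner,
       obj_name,
       action_name
  FROM dba_audit_trail
 ORDER BY timestamp DESC;
```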

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
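One hedged sketch of such maintenance, archiving the records to a DBA-owned table before purging (the archive table name is illustrative, not from the original text):

```sql
-- Archive the current audit records (audit_archive is an assumed name)
CREATE TABLE audit_archive AS SELECT * FROM sys.aud$;

-- Then purge the live audit trail
DELETE FROM sys.aud$;
COMMIT;
```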

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus / as sysdba (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT', 'COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Doc ID: Note:224270.1    Type: DIAGNOSTIC TOOLS

Last Revision Date: 30-MAY-2007

Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary; call summary (parse, execute, fetch); identification of top SQL; row source plan; explain plan; CBO statistics; wait events; values of bind variables; I/O summary per schema object; latches; hot blocks; etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by current TKPROF.

Product Name: RDBMS    Product Version: 9i (9.2), 10g or higher

It can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

Installation requires connection as a user with the SYSDBA privilege. Once installed, the tool does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note:224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
 where hash_value = (select s.sql_hash_value
                       from v$process p, v$session s
                      where s.paddr = p.addr
                        and p.spid = 11270);

select sid, name, value
  from v$statname n, v$sesstat s
 where n.statistic# = s.statistic#
   and name like 'session%memory%'
 order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
  from v$session s, v$process p
 where p.spid = 17883
   and s.paddr = p.addr;

SELECT units
  FROM v$sql, v$session_longops
 WHERE sql_address = address
   AND sql_hash_value = hash_value
 ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation, so slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or the dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems. If lowering this parameter helps the contention on your processors but you take an overall performance hit afterwards, you may need to add CPU to your server before increasing the parameter back to what you had. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES
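As a quick check of which writer configuration is currently in effect, the two parameters can be inspected with a query like the following (a sketch against the standard v$parameter view, not part of the original note):

```sql
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_writer_processes', 'dbwr_io_slaves');
```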

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they sit in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different points of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).
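The unit conversion is trivial but easy to get wrong in reports; a small sketch:

```python
def cpu_centiseconds_to_seconds(value: int) -> float:
    """'CPU used by this session' is reported in 1/100ths of a second;
    convert a raw statistic value to seconds."""
    return value / 100.0

# A reported value of 22 corresponds to 0.22 seconds of CPU time.
print(cpu_centiseconds_to_seconds(22))   # 0.22
```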

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done by sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
  from v$sesstat ss, v$session se
 where ss.statistic# in (select statistic#
                           from v$statname
                          where name = 'CPU used by this session')
   and se.sid = ss.sid
   and ss.sid > 6
 order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
  from v$session_wait w, v$session s, v$process p, v$sqlarea q
 where s.paddr = p.addr
   and s.sid = &p
   and s.sql_address = q.address;

To check whether your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr) * 4 || '-bits' word_length
  FROM v$process
 WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1    Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home)
available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service
the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available
to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service
the request.
MOD_OC4J_0207: In internal process table, failed to find an available
oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
-> Right click on the file
-> Select Properties
-> Click on the Version tab

(See http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
for further details.)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note:269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note:269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
   propagate the changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The errors I got are:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
  FROM dba_objects;

SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is responsible for maintaining performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance stop/start:

1. Log in as the db2 user:  su - db2inst1
2. Go to the sqllib directory:  cd sqllib
3. Stop the instance:

   $ db2stop

4. Start the instance (as the instance owner on the host running DB2, issue):

   $ db2start

Dataflow error:

set serveroutput on size 1000000

The valid range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
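To compare each session's current open-cursor count against that limit, a sketch using the standard v$sesstat/v$statname views (verify the statistic name on your version):

```sql
SELECT s.sid,
       s.username,
       st.value AS open_cursors
  FROM v$session s,
       v$sesstat st,
       v$statname n
 WHERE st.sid = s.sid
   AND st.statistic# = n.statistic#
   AND n.name = 'opened cursors current'
 ORDER BY st.value DESC;
```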

Werner

Billy Verreynne    Posts: 4016    Registered: 5/27/99

Re: no. of open cursors    Posted: Aug 26, 2007 10:33 PM    (in response to: 174313)

Reply

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors and using ref cursors, but never closing them.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, while allowing yourself to run into it even faster.

The following SQL identifies cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
  from v$open_cursor c
 group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
 order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement for which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj: for performance tuning, you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a Statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file containing the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.
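For illustration, a minimal sqlnet.ora might look like the following (parameter names are standard; the values shown are examples only, not from the original text):

```
NAMES.DIRECTORY_PATH = (TNSNAMES, ONAMES, HOSTNAME)
NAMES.DEFAULT_DOMAIN = example.com
SQLNET.EXPIRE_TIME = 10
TRACE_LEVEL_CLIENT = OFF
```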

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. As opposed to a hard link, a symbolic link is required when linking from one filesystem to another, and it can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file that appears just like an ordinary file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of it. If the original file is deleted, the data remains reachable through the hard link, and is only removed once the last remaining link is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
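The difference in behavior is easy to demonstrate in a scratch directory (a self-contained sketch, not tied to the paths above):

```shell
# A hard link keeps the data alive after the original name is removed,
# while a symbolic link is left dangling.
set -e
demo=$(mktemp -d)
cd "$demo"
echo "hello" > original.txt
ln original.txt hard.txt       # hard link: a second name for the same inode
ln -s original.txt soft.txt    # symbolic link: a pointer to the name
rm original.txt
cat hard.txt                   # prints: hello (data still reachable)
[ -e soft.txt ] || echo "soft.txt is now a dangling symlink"
cd / && rm -rf "$demo"
```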

Note that hard links to directories are generally not permitted (most filesystems reserve them for the superuser, if they allow them at all), so to make /var/www/html reachable as /var/www/webroot, use a symbolic link instead:

ln -s /var/www/html /var/www/webroot
If you want to move all the objects to another tablespace, do the following:

> spool <urpath>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session(sid, serial#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(sid, serial#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(sid, serial#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(sid, serial#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount; it allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users value is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus CONNECT command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared, and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.
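As a sketch of that last point (the trigger name and the SCOTT account are hypothetical, and the sizes are illustrative only; this needs a live Oracle instance to run):

```sql
-- Switch a specific import account to manual PGA management at logon
CREATE OR REPLACE TRIGGER imp_workarea_trg
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'SCOTT' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE
      'ALTER SESSION SET sort_area_size = 104857600';  -- 100M
  END IF;
END;
/
```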

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000K. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
  from v$pgastat
 order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic 'maximum PGA allocated' displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA   ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR     ADV  BYTES PROCESSED   BYTES RW  CACHE HIT      ALLOC COUNT
----------- ---------- ---  ----------------  --------  -------------  --------------
   12582912         .5 ON           17250304         0            100               3
   18874368        .75 ON           17250304         0            100               3
   25165824          1 ON           17250304         0            100               0
   30198784        1.2 ON           17250304         0            100               0
   35231744        1.4 ON           17250304         0            100               0
   40264704        1.6 ON           17250304         0            100               0
   45297664        1.8 ON           17250304         0            100               0
   50331648          2 ON           17250304         0            100               0
   75497472          3 ON           17250304         0            100               0
  100663296          4 ON           17250304         0            100               0
  150994944          6 ON           17250304         0            100               0
  201326592          8 ON           17250304         0            100               0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M target this would not have happened.
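Building on that, a minimal sketch of a query that picks the smallest advised target with no estimated over-allocations (run against a live 9i/10g instance; estd_overalloc_count is the advice view's over-allocation column):

```sql
-- Smallest pga_aggregate_target the advisor expects to honor
-- without over-allocating
SELECT MIN(pga_target_for_estimate) AS suggested_target
  FROM v$pga_target_advice
 WHERE estd_overalloc_count = 0;
```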

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.


Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail   --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:

2(a) shutdown immediate   -- [to enable the audit trail]
2(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
2(c) create spfile from pfile
2(d) startup

3. SQL> truncate table aud$;   --> removes any audit trail data residing in the table
   SQL> audit table;           --> starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy:hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from where that user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
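As a sketch, the two flavors map onto RMAN syntax like this (run from the RMAN prompt against a configured target; a level 0 base backup is assumed to already exist):

```sql
-- Differential incremental: blocks changed since the most recent
-- level 1 or level 0 backup (the default behavior)
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative incremental: blocks changed since the most recent
-- level 0 backup
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```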

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
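Before raising these limits it is worth checking what the box is currently using. Something like the following works on Solaris (standard OS tools; output varies per host):

```shell
# Active semaphore sets (each Oracle instance holds some)
ipcs -s

# Kernel semaphore limits currently in effect (Solaris)
sysdef | grep -i sem
```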

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET <new_character_set>;
SQL> SHUTDOWN IMMEDIATE;   -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.
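The current settings can be confirmed from the data dictionary before and after the change, e.g. with this standard query (needs a live instance):

```sql
-- Database and national character sets
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```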

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch the user's crontab file. 2. Check the cron.deny file also.

How to calculate the database size?

SELECT segment_type, segment_name, blocks*2048/1024 "Kb"
FROM dba_segments
WHERE owner = UPPER('<owner>')
AND segment_name = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT table_name, empty_blocks*2048/1024 "Kb"
FROM dba_tables
WHERE owner = UPPER('<owner>')
AND table_name = UPPER('<table_name>');

May 23: If you want to know the overall database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE
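A minimal sketch of that sum in one query (assuming 10g, where v$controlfile exposes block_size and file_size_blks; v$log.bytes is per group, so it is multiplied by members for the on-disk total):

```sql
SELECT (SELECT SUM(bytes) FROM dba_data_files)                      -- datafiles
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile) -- controlfiles
     + (SELECT SUM(bytes * members) FROM v$log)                     -- redo logs
       AS total_db_bytes
  FROM dual;
```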

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped.
• Standardized naming of database files.
• Increased portability, since file specifications are not needed.
• Simplified creation of test systems on differing operating systems.
• No unused files wasting disk space.

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name.

File Type            Format
-------------------  -------------
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued Oracle will name the file and increment the group number if they are not specified

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped Oracle will remove the OS files also For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps Regards Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert) | Posted 1122006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the load on the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist and inserting into it until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries.
• No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out-of-the-box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• Better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple. Include the SEGMENT SPACE MANAGEMENT AUTO clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that states auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states This procedure will recalculate the bitmap states based on either block contents or a specified value

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username.
• Terminal: machine that the user performed the action from.
• Timestamp: when the action occurred.
• Object Owner: the owner of the object that was interacted with.
• Object Name: the name of the object that was interacted with.
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
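As a sketch of such housekeeping (the 90-day retention and archive table name are arbitrary examples; the ntimestamp# column is the 10g name, older releases use timestamp#):

```sql
-- Copy old audit rows to an archive table, then trim SYS.AUD$
CREATE TABLE audit_archive AS
  SELECT * FROM sys.aud$ WHERE ntimestamp# < SYSDATE - 90;

DELETE FROM sys.aud$ WHERE ntimestamp# < SYSDATE - 90;
COMMIT;
```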

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS

2. sqlplus '/ as sysdba'   (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12), and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by current TKPROF.

Product Name Product Version

RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1:

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONSTXT to install the product

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL.
2. Unzipped TRCA.zip into the INSTALL directory.
3. Created a directory under $ORACLE_HOME named TraceAnalyzer.
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory.
5. Logged onto Oracle as SYS:

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM.
8. Ran the installation script TRCACREA.sql.

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr
                    and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNPn processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process (Back)-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).
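Since the statistic is in centiseconds, converting it to seconds is just a divide-by-100; a quick shell sketch using the example value 22 from the note above:

```shell
#!/bin/sh
# Convert the "CPU used by this session" statistic (centiseconds) to seconds.
# 22 is the example value quoted in the note above.
centiseconds=22
awk -v cs="$centiseconds" 'BEGIN { printf "%.2f seconds\n", cs / 100 }'
# prints: 0.22 seconds
```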

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1 Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet

Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer

browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the

Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145 There is no oc4j process (for destination home)

available to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013: Failed to call destination: home's service() to service

the request

MOD_OC4J_0145 There is no oc4j process (for destination home) available

to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013: Failed to call destination: home's service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer

5.x and 6.x.

- The client machines will have a wininet.dll with a version number of

6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll

-> Right click on the file

-> Select Properties

-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are

resubmitted, which can occur when the HTTP server terminates the browser

client's open connections that exceeded their allowed HTTP 1.1 KeepAlive

idle time. In these cases the requests are resubmitted by the browser without

the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to

the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive

timeout by restarting the HTTP Server component after making the following

configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv

# KeepAlive On

KeepAlive Off

# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to

propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is: exporting operators, exporting referential integrity constraints, exporting triggers...

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne Posts 4016 Registered 52799

Re: no of open cursor    Posted: Aug 26, 2007 10:33 PM   in response to: 174313


> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj: for performance tuning, you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
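The difference between the two link types is easy to demonstrate with scratch files (these paths are throwaway temp files, not the /export/home examples above):

```shell
#!/bin/sh
# Sketch: hard link vs symbolic link behavior when the original is deleted.
set -e
dir=$(mktemp -d)
echo "data" > "$dir/original"
ln "$dir/original" "$dir/hardlink"     # hard link: another name for the same inode
ln -s "$dir/original" "$dir/symlink"   # symbolic link: a pointer by pathname
rm "$dir/original"
cat "$dir/hardlink"                    # still prints the contents; the inode survives
cat "$dir/symlink" 2>/dev/null || echo "dangling"   # the symlink now points nowhere
rm -rf "$dir"
```

This is exactly the "information will be lost" caveat above in reverse: the hard link preserves the data, while the symbolic link dangles.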

The syntax for creating a hard link of a directory is the same. To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, spool the output of:

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log:

> @<your path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes

and gather statistics for those objects.
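The spool-then-execute pattern used above (generate commands into a file, review it, then run it) is generic; here is a minimal shell sketch of the same idea, with placeholder object names standing in for the segment list:

```shell
#!/bin/sh
# Sketch of the generate-review-execute pattern: build a command file
# (like spooling ALTER ... MOVE statements), inspect it, then run it.
set -e
cmdfile=$(mktemp)
for obj in emp dept bonus; do            # stand-ins for real segment names
    echo "echo altering $obj" >> "$cmdfile"
done
cat "$cmdfile"   # review the generated commands, like checking the spool file
sh "$cmdfile"    # execute them
rm -f "$cmdfile"
```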


How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
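As a rough illustration of that ~5% rule of thumb (not the exact _smm_max_size algorithm), applying 5% to the pga_aggregate_target value shown in the v$pgastat listing later in this section:

```shell
#!/bin/sh
# Back-of-envelope sketch: ~5% of pga_aggregate_target as a per-session cap.
# 2516582400 is the "aggregate PGA target parameter" value from the
# v$pgastat output in this section; the 5% figure is the rule of thumb above.
target=2516582400
awk -v t="$target" 'BEGIN { printf "per-session cap ~ %d bytes\n", t * 0.05 }'
# prints: per-session cap ~ 125829120 bytes
```

Note how close this lands to the "global memory bound" value (125747200 bytes) in the same v$pgastat listing.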

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE          UNIT
---------------------------------------- -------------- ------------
aggregate PGA auto target                829440000      bytes
aggregate PGA target parameter           2516582400     bytes
bytes processed                          2492928000     bytes
cache hit percentage                     86.31          percent
extra bytes read/written                 395366400      bytes
global memory bound                      125747200      bytes
maximum PGA allocated                    2666188800     bytes
maximum PGA used for auto workareas      17203200       bytes
maximum PGA used for manual workareas    52531200       bytes
over allocation count                    0
PGA memory freed back to OS              675020800      bytes
total freeable PGA memory                6553600        bytes
total PGA allocated                      2395750400     bytes
total PGA inuse                          1528320000     bytes
total PGA used for auto workareas        0              bytes
total PGA used for manual workareas      0              bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED        ESTIMATED EXTRA ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR      ADV BYTES PROCESSED  BYTES R/W       CACHE HIT %    ALLOC COUNT
----------- ---------- ---- ---------------- --------------- -------------- --------------
   12582912        .50 ON           17250304               0         100.00              3
   18874368        .75 ON           17250304               0         100.00              3
   25165824       1.00 ON           17250304               0         100.00              0
   30198784       1.20 ON           17250304               0         100.00              0
   35231744       1.40 ON           17250304               0         100.00              0
   40264704       1.60 ON           17250304               0         100.00              0
   45297664       1.80 ON           17250304               0         100.00              0
   50331648       2.00 ON           17250304               0         100.00              0
   75497472       3.00 ON           17250304               0         100.00              0
  100663296       4.00 ON           17250304               0         100.00              0
  150994944       6.00 ON           17250304               0         100.00              0
  201326592       8.00 ON           17250304               0         100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues a drop table command in a database:

1. Login to the db as sysdba.

2. SQL> show parameter audit_trail    -> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:
2(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;    -> to remove any audit trail data residing in the table

4. SQL> audit table;    -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    -> this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremantal RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremantal RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

WHERE OWNER = UPPER('<owner>') AND TABLE_NAME = UPPER('<table_name>');

May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj
http://dbataj.blogspot.com

Jun 1 (13 hours ago): babu is correct, but analyse the indexes also. If u wanna know the actual used space, use dba_extents instead of dba_segments.
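That sum can be pulled straight from the data dictionary; a minimal sketch (FILE_SIZE_BLKS on V$CONTROLFILE exists from 10g onward; on older releases, size the controlfiles at the OS level):

```sql
-- Database size: datafiles + controlfiles + redo logs (run as a DBA)
SELECT (SELECT SUM(bytes) FROM dba_data_files)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
     + (SELECT SUM(bytes * members) FROM v$log) AS total_bytes
FROM dual;
```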

Oracle Managed Files (OMF)

OMF simplifies the creation of databases, as Oracle does all OS operations and file naming. It has several advantages, including:

• Automatic cleanup of the filesystem when database objects are dropped
• Standardized naming of database files
• Increased portability, since file specifications are not needed
• Simplified creation of test systems on differing operating systems
• No unused files wasting disk space

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific file size, use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace, use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1
  DEFAULT TEMPORARY TABLESPACE dts1
  TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), Posted 1122006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite awhile that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist, until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist" or "off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management by an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.
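The DBMS_SPACE.SPACE_USAGE procedure mentioned above can be tried from an anonymous block; a minimal sketch (SCOTT.EMP is a placeholder, and the call only works against segments in ASSM tablespaces):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_unf  NUMBER; l_unf_b  NUMBER;   -- unformatted blocks/bytes
  l_fs1  NUMBER; l_fs1_b  NUMBER;   -- blocks 0-25% free
  l_fs2  NUMBER; l_fs2_b  NUMBER;   -- blocks 25-50% free
  l_fs3  NUMBER; l_fs3_b  NUMBER;   -- blocks 50-75% free
  l_fs4  NUMBER; l_fs4_b  NUMBER;   -- blocks 75-100% free
  l_full NUMBER; l_full_b NUMBER;   -- full blocks/bytes
BEGIN
  DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
                         l_unf, l_unf_b,
                         l_fs1, l_fs1_b, l_fs2, l_fs2_b,
                         l_fs3, l_fs3_b, l_fs4, l_fs4_b,
                         l_full, l_full_b);
  DBMS_OUTPUT.PUT_LINE('Full blocks below HWM: ' || l_full);
  DBMS_OUTPUT.PUT_LINE('Mostly empty (75-100% free): ' || l_fs4);
END;
/
```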

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.
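For comparison, the session time zone can be queried the same way:

```sql
-- DBTIMEZONE: database time zone; SESSIONTIMEZONE: this session's zone
SELECT DBTIMEZONE, SESSIONTIMEZONE FROM dual;
```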

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: Machine that the user performed the action from
• Timestamp: When the action occurred
• Object Owner: The owner of the object that was interacted with
• Object Name: The name of the object that was interacted with
• Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
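The periodic delete/archive of the trail can be scripted; a minimal sketch, assuming records are first copied to a DBA-owned archive table (audit_archive is an illustrative name):

```sql
-- Archive then purge the audit trail (run as a DBA)
CREATE TABLE audit_archive AS SELECT * FROM sys.aud$;
DELETE FROM sys.aud$;
COMMIT;
```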

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12), and generates a comprehensive HTML report with performance related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent
Date Created: Version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL Trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS:

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box. Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr
                    and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========

These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================

The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own, they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:

<Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================

The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds, in 8i.
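Since the counter is in hundredths of a second, per-session CPU consumption in seconds can be read with a query like this (a minimal sketch):

```sql
-- Per-session CPU in seconds (the statistic is in centiseconds)
SELECT s.sid, st.value / 100 AS cpu_seconds
FROM v$sesstat st, v$statname n, v$session s
WHERE st.statistic# = n.statistic#
  AND n.name = 'CPU used by this session'
  AND s.sid = st.sid
ORDER BY cpu_seconds DESC;
```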

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note 215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length
FROM v$process
WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive, and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance: $ db2stop
4. Start an instance. As an instance owner on the host running db2, issue the following command: $ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
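To see how close sessions actually get to that limit, the "opened cursors current" statistic can be compared with the parameter; a minimal sketch:

```sql
-- Open cursors per session vs. the OPEN_CURSORS limit
SELECT s.sid, st.value AS open_cursors,
       (SELECT value FROM v$parameter
        WHERE name = 'open_cursors') AS cursor_limit
FROM v$sesstat st, v$statname n, v$session s
WHERE st.statistic# = n.statistic#
  AND n.name = 'opened cursors current'
  AND s.sid = st.sid
ORDER BY st.value DESC;
```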

Werner

Billy Verreynne, re: no. of open cursors (posted Aug 26, 2007):

> How to resolve this if the no. of open cursors exceeds the value given in init.ora?

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning

you may first start by checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, V$SESSION_WAIT & V$SYSTEM_EVENT

If you have a Statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters which specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network and, once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA) 2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from backup
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the data remains accessible through the hard link; it is only lost when the last remaining link is removed.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
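The difference can be seen in a minimal shell session (scratch paths under /tmp, created just for this demo, not the example paths above):

```shell
#!/bin/sh
set -e
demo=/tmp/linkdemo                 # scratch directory for the demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

echo "original data" > original.txt

ln -s original.txt soft.txt        # symbolic link: a named pointer to the file
ln original.txt hard.txt           # hard link: a second name for the same inode

ls -l soft.txt                     # long listing shows "soft.txt -> original.txt"

rm original.txt                    # delete the original name

cat hard.txt                       # hard link still reaches the data
cat soft.txt 2>/dev/null || echo "soft link is now dangling"
```

After the original name is removed, the hard link still prints "original data" while the symbolic link dangles.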

The syntax for creating a hard link of a directory is the same. To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note: most filesystems do not actually permit hard links to directories; the command above will normally fail with "operation not permitted", so use a symbolic link for directories instead.)

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';


If you want to move all the objects to another tablespace, just do the following:

SQL> spool <your_path>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Then rebuild the indexes

and gather statistics for those objects.

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<osuser>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003, Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges.

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
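Such a logon trigger could look roughly like this (a sketch; the IMPORT_USER account name and the 100 MB sort area are hypothetical values chosen for illustration, not from the original note):

```sql
-- Hypothetical example: force manual PGA management for one import account only
CREATE OR REPLACE TRIGGER import_user_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMPORT_USER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE
      'ALTER SESSION SET sort_area_size = 104857600';  -- 100 MB
  END IF;
END;
/
```

All other sessions remain under automatic PGA management; only the named account gets the large manual sort area.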

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that automatic PGA management can only be used for dedicated server sessions.

For some good reading on automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *

from v$pgastat

order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistics maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *

from v$pga_target_advice

order by pga_target_for_estimate;

      PGA TARGET  PGA TARGET      ESTIMATED        ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
         FOR EST      FACTOR  ADV BYTES PROCESSED  BYTES RW         CACHE HIT      ALLOC COUNT
---------------- ----------- ---- ---------------- ---------------- -------------- --------------
        12582912         0.5  ON          17250304                0         100.00              3
        18874368        0.75  ON          17250304                0         100.00              3
        25165824         1.0  ON          17250304                0         100.00              0
        30198784         1.2  ON          17250304                0         100.00              0
        35231744         1.4  ON          17250304                0         100.00              0
        40264704         1.6  ON          17250304                0         100.00              0
        45297664         1.8  ON          17250304                0         100.00              0
        50331648         2.0  ON          17250304                0         100.00              0
        75497472         3.0  ON          17250304                0         100.00              0
       100663296         4.0  ON          17250304                0         100.00              0
       150994944         6.0  ON          17250304                0         100.00              0
       201326592         8.0  ON          17250304                0         100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
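Applying a chosen value is a one-liner, since the parameter is dynamic (a sketch; the 25M figure is simply the value suggested by the advice output above):

```sql
-- pga_aggregate_target can be changed online; no instance restart is needed
ALTER SYSTEM SET pga_aggregate_target = 25M;
```

After the change, v$pgastat and v$pga_target_advice can be re-checked to confirm the new target behaves as expected.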

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem,

max(pga_alloc_mem) max_pga_alloc_mem,

max(pga_max_mem) max_pga_max_mem

from v$process;

This will show the maximum PGA usage per process


This displays the sum of PGA usage across all current processes:

select

sum(pga_used_mem) sum_pga_used_mem,

sum(pga_alloc_mem) sum_pga_alloc_mem,

sum(pga_max_mem) sum_pga_max_mem

from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to identify the user who issued a DROP TABLE command in a database.

1. Log in to the DB as sysdba.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

if the output is

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
2(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;    --> removes any audit-trail data residing in the table
4. SQL> audit table;    --> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    --> this query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db' iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000 message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg' temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000 iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups. 1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
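In RMAN syntax the two flavours differ by a single keyword (a sketch; both commands assume a level 0 base backup already exists):

```
# Base level 0 backup that later incrementals build on
BACKUP INCREMENTAL LEVEL 0 DATABASE;

# Differential incremental (the default): blocks changed since the most
# recent level 1 or level 0 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;

# Cumulative incremental: blocks changed since the most recent level 0 backup
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```

These are run from the RMAN prompt while connected to the target database.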

If you would like to read the entire document (it's a short one) you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things to you in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


• Managing Controlfiles Using OMF
• Managing Redo Log Files Using OMF
• Managing Tablespaces Using OMF
• Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation, the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete, the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE statement is issued. Oracle will name the files and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3 statement will remove the group and its members from the database, and delete the files at the operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED For a specific size file use

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M

To add a datafile to a tablespace use

ALTER TABLESPACE tsh_data ADD DATAFILE

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature, this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user, the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1

DEFAULT TEMPORARY TABLESPACE dts1

TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M

EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online

Hope this helps. Regards, Tim.

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is also reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps

for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management I truly think you can find something that you will like

No worries No wasted time searching for problems that dont exist No planning needed for storage parameters Out of the box performance for created objects No need to monitor levels of insertupdatedelete rates Improvement in space utilization Better performance than most can tune or plan for with concurrent access to objects Avoidance of data fragmentation Minimal data dictionary access Better indicator of the state of a data block Further more the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on the freelist or off the freelist scenario

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple. Include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition query the data dictionary

select tablespace_name, contents, extent_management, allocation_type, segment_space_management from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management by an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit OptionsAssuming that the fireid user is to be audited

CONNECT syspassword AS SYSDBA

AUDIT ALL BY fireid BY ACCESS

AUDIT SELECT TABLE UPDATE TABLE INSERT TABLE DELETE TABLE BY fireid BY ACCESS

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events.

- DDL (CREATE, ALTER & DROP of objects)
- DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
- SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema.

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS (conn / as sysdba)

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQLPlus

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQLPlus

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K    (AIX: show kernel bitness)
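Mismatches like the /var/tmp/.oracle case above are easy to spot with a small check script. This is my own sketch, not from the original post: the `oracle:dba` owner and the default path are assumptions taken from the reply, and `stat -c` is GNU coreutils syntax (on Solaris/AIX use `ls -ld` instead).

```shell
# Sketch: report whether a directory's owner:group matches what we expect.
# Default path and expected owner are assumptions from the post above.
dir="${1:-/var/tmp/.oracle}"
expected="oracle:dba"
if [ -d "$dir" ]; then
  actual="$(stat -c '%U:%G' "$dir")"    # GNU stat: print owner:group
  if [ "$actual" = "$expected" ]; then
    echo "OK: $dir owned by $actual"
  else
    echo "MISMATCH: $dir owned by $actual (fix: chown -R $expected $dir)"
  fi
else
  echo "missing: $dir"
fi
```

Run it on both boxes and compare the output before reaching for chown.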

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ or QMN
=========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process (Back)-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistic. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds, in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what sql the problem session(s) are executing run the following query

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;

FILE_NAME                                    AUT
-------------------------------------------- ---
/oradata1/CDOi1/data/ULOG_TS.dbf             YES
/oracle/CDOi1/data/users02.dbf               YES

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------
Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----
This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------
- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------
- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
    or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> right click on the file
  -> select Properties
  -> click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----
This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---
It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^
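If the same edit has to be made on several servers, it can be scripted. The sketch below is my own illustration, not part of the note: it works on a throwaway sample file, assumes GNU sed (`-i`, `\n` in the replacement), and assumes the directive appears exactly as "KeepAlive On". Point `conf` at the real httpd.conf to use it for real, and keep the backup.

```shell
# Sketch: comment out "KeepAlive On" and substitute "KeepAlive Off",
# mirroring the manual httpd.conf edit above. Sample file for illustration.
conf="$(mktemp)"
printf 'Timeout 300\nKeepAlive On\nMaxKeepAliveRequests 100\n' > "$conf"
cp "$conf" "$conf.bak"                                # always back up first
sed -i 's/^KeepAlive On$/# KeepAlive On\nKeepAlive Off/' "$conf"
grep '^KeepAlive ' "$conf"                            # prints: KeepAlive Off
```

Remember that on a real install you still need to restart the HTTP Server and, per step 3 of the note, run dcmctl updateConfig if you edit the file by hand.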

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------
http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First verify that this package exists with the following query

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 quick notes:
1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance: $ db2stop
4. Start the instance: as the instance owner on the host running db2, issue $ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4016, Registered: 5/27/99)
Re: no of open cursor - Posted: Aug 26, 2007 10:33 PM, in response to 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the session has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj for performance tuning

You may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file, but if the original file is deleted the information will not be lost while a hard link to it remains.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note: most filesystems refuse hard links to directories; expect this form to fail on modern systems, and use a symbolic link instead.)
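The difference between the two link types is easy to verify in a scratch directory. This is my own demonstration (file names are made up), not from the original notes:

```shell
# Demonstrate hard vs symbolic links on ordinary files.
dir="$(mktemp -d)"
echo "hello" > "$dir/stuff"
ln "$dir/stuff" "$dir/thing"      # hard link: second name for the same inode
ln -s "$dir/stuff" "$dir/alias"   # symlink: a pointer, shown with -> in ls -l
cat "$dir/thing"                  # prints: hello
rm "$dir/stuff"                   # remove the original name
cat "$dir/thing"                  # still prints: hello (data survives via hard link)
cat "$dir/alias" 2>/dev/null || echo "dangling symlink"   # symlink is now broken
```

Note how the hard link keeps the data alive after the original name is removed, while the symlink dangles.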


If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

gtspool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that automatic PGA management can only be used for dedicated server sessions.

For some good reading on automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                          VALUE  UNIT
---------------------------------------- ----------  ------------
aggregate PGA auto target                 829440000  bytes
aggregate PGA target parameter           2516582400  bytes
bytes processed                          2492928000  bytes
cache hit percentage                          86.31  percent
extra bytes read/written                  395366400  bytes
global memory bound                       125747200  bytes
maximum PGA allocated                    2666188800  bytes
maximum PGA used for auto workareas        17203200  bytes
maximum PGA used for manual workareas      52531200  bytes
over allocation count                             0
PGA memory freed back to OS               675020800  bytes
total freeable PGA memory                   6553600  bytes
total PGA allocated                      2395750400  bytes
total PGA inuse                          1528320000  bytes
total PGA used for auto workareas                 0  bytes
total PGA used for manual workareas               0  bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET       ESTIMATED        EXTRA EST.  ESTIMATED PGA  EST. OVER
FOR EST     FACTOR      ADV  BYTES PROCESSED  BYTES RW    CACHE HIT      ALLOC COUNT
----------  ----------  ---  ---------------  ----------  -------------  -----------
  12582912         .50  ON          17250304           0         100.00            3
  18874368         .75  ON          17250304           0         100.00            3
  25165824        1.00  ON          17250304           0         100.00            0
  30198784        1.20  ON          17250304           0         100.00            0
  35231744        1.40  ON          17250304           0         100.00            0
  40264704        1.60  ON          17250304           0         100.00            0
  45297664        1.80  ON          17250304           0         100.00            0
  50331648        2.00  ON          17250304           0         100.00            0
  75497472        3.00  ON          17250304           0         100.00            0
 100663296        4.00  ON          17250304           0         100.00            0
 150994944        6.00  ON          17250304           0         100.00            0
 201326592        8.00  ON          17250304           0         100.00            0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
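Once a value has been chosen from the advice view, adjusting the target online is a one-liner (a sketch; the 256M value is illustrative, and SCOPE=BOTH assumes the instance runs from an spfile):

```sql
-- Illustrative value; size it from your own v$pga_target_advice results.
ALTER SYSTEM SET pga_aggregate_target = 256M SCOPE = BOTH;

-- Verify the new setting:
SELECT name, value
FROM v$pgastat
WHERE name = 'aggregate PGA target parameter';
```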

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

This displays the sum of current PGA usage across all processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail     -- checks whether the audit trail is turned on

   If the output is:

   NAME         TYPE    VALUE
   ------------ ------- ------
   audit_trail  string  DB

   then go to step 3. Otherwise:

   (a) shutdown immediate              -- to enable the audit trail
   (b) edit init.ora (in $ORACLE_HOME/admin/pfile) to add the entry audit_trail=db
   (c) create spfile from pfile
   (d) startup

3. SQL> truncate table aud$;           -- removes any audit trail data residing in the table
   SQL> audit table;                   -- starts auditing events pertaining to tables

4. select action_name, username, userhost,
          to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';

   This query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
  iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
  message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
  temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
  iq page size 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on, until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
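In RMAN syntax the two flavors differ only by the CUMULATIVE keyword; a sketch (commands are standard RMAN, run against a configured target database):

```
-- Base for both flavors: a level 0 incremental backup.
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Differential level 1 (the default): blocks changed since the most
-- recent level 1 or level 0 backup.
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative level 1: blocks changed since the most recent level 0 backup.
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```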

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj
RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

A default temporary tablespace can be created during database creation or assigned afterwards

CREATE DATABASE TSH1
  DEFAULT TEMPORARY TABLESPACE dts1
  TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.
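To see which tablespace currently holds that role, a quick check against the data dictionary works:

```sql
-- Shows the database's current default temporary tablespace.
SELECT property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```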

Hope this helps. Regards, Tim

Oracle9i's Auto Segment Space Management Option, James F. Koopmann (Database Expert). Posted 1122006. Comments (3) | Trackbacks (0)

Oracle has done it again. Venture with me down what seems like a small option, but one that in fact has major implications for what we as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while about whether DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It?

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, pressure on the data dictionary is relieved. Not only does this generate no redo; contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS and PCTUSED. That means that Oracle will track and manage the used and free space in data blocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELISTS, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELISTS, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELISTS

This is a list of blocks, kept in the segment header, that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist, and inserting into it, until it is full. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist and made available for inserts. The issue with choosing a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED, to keep blocks off the freelist) against a high PCTUSED, to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to blocks other than the segment header, and thus give some relief from segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries
• No wasted time searching for problems that don't exist
• No planning needed for storage parameters
• Out-of-the-box performance for created objects
• No need to monitor levels of insert/update/delete rates
• Improvement in space utilization
• Better performance than most can tune or plan for with concurrent access to objects
• Avoidance of data fragmentation
• Minimal data dictionary access
• Better indicator of the state of a data block

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old on-the-freelist / off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
  DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined
To determine your current tablespace definitions, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management, and then migrate the objects.

Optional Procedures
Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over
Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.
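The session time zone can be checked the same way, which is handy when the two disagree (a quick sketch):

```sql
-- Database time zone vs. the current session's time zone.
SELECT dbtimezone, sessiontimezone FROM dual;
```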

Auditing
The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup
To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options
Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;
AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;
AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL and DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail
The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
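A minimal archive-then-trim sketch, run as a privileged user (the archive table name is illustrative):

```sql
-- Copy the audit rows somewhere safe, then empty the trail.
CREATE TABLE aud_archive AS SELECT * FROM sys.aud$;
TRUNCATE TABLE sys.aud$;
```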

Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');
EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1    Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046.

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name / Version: RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment:
Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:
Installation requires connection as a user with the SYSDBA privilege. Once installed, it does not require special privileges, and it can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information
Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer:
MetaLink Note 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer
Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS:

   conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly, as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K     (AIX)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or the dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:

<Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An oracle (user) process (Back)
-------------------------------
Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds in 8i.
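Converting the raw statistic to seconds is therefore a simple division by 100 (a sketch; the descending sort simply puts the busiest sessions first):

```sql
-- value is in centiseconds; divide by 100 for seconds of CPU per session.
select ss.sid, ss.value / 100 AS cpu_seconds
from v$sesstat ss, v$statname sn
where sn.name = 'CPU used by this session'
and ss.statistic# = sn.statistic#
order by cpu_seconds desc;
```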

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note 215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length
FROM v$process
WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1    Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive, and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home)
                 available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
                 the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available
                 to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service
                 the request.
  MOD_OC4J_0207: In internal process table, failed to find an available
                 oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right-click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases, the requests are resubmitted by the browser
without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: ORACLE_HOME\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

vvv Oracle Note 269980.1 vvvvvvv
KeepAlive Off
^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate the change into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: ORACLE_HOME\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is: exporting operators, exporting referential integrity constraints, exporting triggers:

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

OFFLINE TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

OFFLINE IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user: su - db2inst1; bash

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4016 | Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to: 174313


> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORSE thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically, one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning:

You may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google:

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db' from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
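The "information will be lost" behaviour described above is easy to check; here is a minimal shell sketch using throwaway files in a temporary directory (all file names are illustrative):

```shell
set -e
cd "$(mktemp -d)"                 # scratch directory
echo "hello" > original.txt
ln original.txt hard.txt          # hard link: a second name for the same data
ln -s original.txt soft.txt       # symbolic link: a pointer to the name
rm original.txt                   # delete the original name
cat hard.txt                      # prints "hello": the data survives via the hard link
cat soft.txt 2>/dev/null || echo "dangling"   # the symlink now points at nothing
```

In a long listing (ls -l), soft.txt still shows the "-> original.txt" reference described above, while hard.txt looks like an ordinary file.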

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most modern filesystems refuse hard links to directories; use a symbolic link for directories instead.)

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
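That 5% rule of thumb is easy to sanity-check with shell arithmetic. Using the pga_aggregate_target value that appears in the v$pgastat sample output later in this section (2516582400 bytes):

```shell
pga_target=2516582400                      # pga_aggregate_target, in bytes
per_session=$(( pga_target * 5 / 100 ))    # ~5% cap for a single session
echo "$per_session"                        # prints 125829120 (120 MB)
```

This lines up closely with the "global memory bound" statistic (125747200 bytes) reported in that same v$pgastat output.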

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
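The "cache hit percentage" row above is derived from the other rows: bytes processed divided by bytes processed plus the extra bytes read/written. Checking the sample figures (awk is used here only for the arithmetic):

```shell
# cache hit % = bytes processed / (bytes processed + extra bytes read/written)
awk 'BEGIN {
  processed = 2492928000    # "bytes processed" from the v$pgastat output above
  extra     = 395366400     # "extra bytes read/written"
  printf "%.2f\n", 100 * processed / (processed + extra)
}'
```

which reproduces the reported 86.31 percent.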

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET       PGA TARGET     ESTIMATED EXTRA  ESTIMATED PGA ESTIMATED OVER
FOR EST          FACTOR ADV BYTES PROCESSED  BYTES RW     CACHE HIT     ALLOC COUNT
---------------- ---------- --- ---------------- ---------------- ------------- --------------
        12582912        .50 ON          17250304                0        100.00              3
        18874368        .75 ON          17250304                0        100.00              3
        25165824       1.00 ON          17250304                0        100.00              0
        30198784       1.20 ON          17250304                0        100.00              0
        35231744       1.40 ON          17250304                0        100.00              0
        40264704       1.60 ON          17250304                0        100.00              0
        45297664       1.80 ON          17250304                0        100.00              0
        50331648       2.00 ON          17250304                0        100.00              0
        75497472       3.00 ON          17250304                0        100.00              0
       100663296       4.00 ON          17250304                0        100.00              0
       150994944       6.00 ON          17250304                0        100.00              0
       201326592       8.00 ON          17250304                0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Login to the db as sysdba.

2. SQL> show parameter audit_trail   -> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$   -> to remove any audit trail data residing in the table

4. SQL> audit table;   -> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';   -> this query gives you the username, along with the userhost from where that user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
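Outside RMAN, the same bookkeeping can be mimicked with ordinary files and two marker files; this is only an illustrative sketch (LEVEL0 and LAST are hypothetical markers recording when the level 0 backup and the most recent backup ran), not how RMAN tracks changed blocks:

```shell
set -e
cd "$(mktemp -d)"
touch a b && touch LEVEL0   # Sunday: level 0 (full) backup covers a and b
sleep 1
touch b c && touch LAST     # Monday: level 1 backup picks up b and c
sleep 1
touch c d                   # Tuesday: c and d have changed since Monday
# differential incremental: only what changed since the LAST backup (any level)
find . -type f -newer LAST ! -name LEVEL0 ! -name LAST | sort
# cumulative incremental: everything changed since the level 0 backup
find . -type f -newer LEVEL0 ! -name LEVEL0 ! -name LAST | sort
```

The first find prints ./c and ./d; the second also prints ./b, mirroring how a cumulative backup repeats work that Monday's level 1 backup already did.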

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
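Those "set semsys:..." lines are Solaris /etc/system syntax. For comparison, on Linux the equivalent System V semaphore limits are exposed in a single procfs file; a small sketch for inspecting them (the values shown will differ per machine):

```shell
# The four fields are SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem
# Label the fields for readability
awk '{ printf "SEMMSL=%s SEMMNS=%s SEMOPM=%s SEMMNI=%s\n", $1, $2, $3, $4 }' /proc/sys/kernel/sem
```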


for all objects defined in the tablespace for which it has been defined.

How It Used to Be

In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS and PCTUSED an ordeal. Typically you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist, until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the used percentage of a block falls below PCTUSED, that block should be placed back on the freelist, available for inserts. The issue with using a value for PCTUSED was that you had to balance the need for performance (a low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header, and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good?

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like:

• No worries.
• No wasted time searching for problems that don't exist.
• No planning needed for storage parameters.
• Out-of-the-box performance for created objects.
• No need to monitor levels of insert/update/delete rates.
• Improvement in space utilization.
• Better performance than most can tune or plan for with concurrent access to objects.
• Avoidance of data fragmentation.
• Minimal data dictionary access.
• A better indicator of the state of a data block.

Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old "on the freelist or off the freelist" scenario.

Create a Tablespace for Auto Segment Space Management

Creating a tablespace for Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch to Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, specify auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks as a DBA, that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file.
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

- DBA_AUDIT_EXISTS
- DBA_AUDIT_OBJECT
- DBA_AUDIT_SESSION
- DBA_AUDIT_STATEMENT
- DBA_AUDIT_TRAIL
- DBA_OBJ_AUDIT_OPTS
- DBA_PRIV_AUDIT_OPTS
- DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following columns are most likely to be of interest:

- Username: Oracle username
- Terminal: machine that the user performed the action from
- Timestamp: when the action occurred
- Object Owner: the owner of the object that was interacted with
- Object Name: the name of the object that was interacted with
- Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, it can be granted to the relevant users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Doc ID: Note 224270.1   Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds andor Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by current TKPROF.

Product Name / Product Version: RDBMS 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly, as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K (AIX: kernel details)

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does ease the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ / QMN
========================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).
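Since the statistic is reported in 1/100ths of a second, converting to seconds is just a divide by 100; a trivial helper (the function name is mine) makes the arithmetic explicit:

```python
def cpu_centiseconds_to_seconds(value: int) -> float:
    """'CPU used by this session' is reported in 1/100ths of a second."""
    return value / 100.0

print(cpu_centiseconds_to_seconds(22))   # → 0.22
```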

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
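The trick in this query is that ADDR is a RAW value rendered as hex, and each hex digit encodes 4 bits, so the string length times 4 gives the word size. The same arithmetic, sketched outside the database (the sample addresses are made up):

```python
def word_length_bits(addr_hex: str) -> str:
    """Mirror LENGTH(addr)*4: each hex digit of the RAW address encodes 4 bits."""
    return f"{len(addr_hex) * 4}-bits"

print(word_length_bits("00E8A2F4"))           # 8 hex digits  → 32-bits
print(word_length_bits("0000000087A41E50"))   # 16 hex digits → 64-bits
```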

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used
Doc ID: Note 269980.1   Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
    MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078
    MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
    MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
    MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at \WINNT\system32\wininet.dll
  -> Right-click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE '%JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

OFFLINE TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

OFFLINE IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1 (bash)
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:
   $ db2stop
4. Start the instance. As the instance owner on the host running db2, issue the following command:
   $ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4,016, Registered: 5/27/99)
Re: no of open cursor - Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
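The detection rule in that query (group handles by session and statement, flag groups with more than two copies) can be mirrored in a few lines over sample data; the rows and threshold below are illustrative, not from a real V$OPEN_CURSOR:

```python
from collections import Counter

# One tuple per open cursor handle: (sid, sql_address, hash_value),
# mimicking rows of V$OPEN_CURSOR. Session 17 leaks handles here.
open_cursors = [
    (17, "0x1A2B", 991), (17, "0x1A2B", 991), (17, "0x1A2B", 991),
    (17, "0x1A2B", 991), (23, "0x3C4D", 774), (23, "0x9F00", 201),
]

def leaking_cursors(handles, threshold=2):
    """Return {(sid, address, hash_value): copies} for groups above threshold."""
    counts = Counter(handles)
    return {key: n for key, n in counts.items() if n > threshold}

print(leaking_cursors(open_cursors))   # session 17 holds 4 copies of one cursor
```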

Nagaraj, for performance tuning you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus
stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
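The difference between the two link types can be demonstrated with a short script (the paths are scratch files in a temporary directory, not the examples above):

```python
import os
import tempfile

# Scratch directory so nothing outside it is touched.
d = tempfile.mkdtemp()
orig = os.path.join(d, "stuff")
hard = os.path.join(d, "thing")
sym = os.path.join(d, "stuff.sym")

with open(orig, "w") as f:
    f.write("hello")

os.link(orig, hard)     # hard link: a second directory entry, same inode
os.symlink(orig, sym)   # symbolic link: a pointer that stores the path

# Both names reference the same inode.
assert os.stat(orig).st_ino == os.stat(hard).st_ino

# Delete the original: the hard link keeps the data alive,
# while the symbolic link is left dangling.
os.remove(orig)
print(open(hard).read())                           # the data survives
print(os.path.lexists(sym), os.path.exists(sym))   # link exists, target does not
```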

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log. Run it:

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.


How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
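As a back-of-envelope illustration of the roughly-5% rule (the exact behavior is version dependent and _smm_max_size is undocumented, so treat this as an approximation, not a formula Oracle guarantees):

```python
def per_session_pga_cap(pga_aggregate_target_bytes: int) -> int:
    """Approximate per-session work-area cap: ~5% of PGA_AGGREGATE_TARGET."""
    return pga_aggregate_target_bytes * 5 // 100

# With a 2400 MB pga_aggregate_target (the value shown in the v$pgastat
# listing below), one session would be capped at roughly 120 MB.
target = 2400 * 1024 * 1024   # 2,516,582,400 bytes
print(per_session_pga_cap(target) // (1024 * 1024))   # → 120
```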

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 8631 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED        ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
   FOR EST     FACTOR ADV BYTES PROCESSED  BYTES RW        CACHE HIT     ALLOC COUNT
---------- ---------- --- ---------------- --------------- ------------- --------------
  12582912         .5 ON          17250304               0        100.00              3
  18874368        .75 ON          17250304               0        100.00              3
  25165824          1 ON          17250304               0        100.00              0
  30198784        1.2 ON          17250304               0        100.00              0
  35231744        1.4 ON          17250304               0        100.00              0
  40264704        1.6 ON          17250304               0        100.00              0
  45297664        1.8 ON          17250304               0        100.00              0
  50331648          2 ON          17250304               0        100.00              0
  75497472          3 ON          17250304               0        100.00              0
 100663296          4 ON          17250304               0        100.00              0
 150994944          6 ON          17250304               0        100.00              0
 201326592          8 ON          17250304               0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.
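That selection rule (the smallest estimated target with no predicted over-allocations) can be sketched as follows; the (target, over-allocation count) pairs are taken from the example listing above:

```python
# Sketch: choose pga_aggregate_target from v$pga_target_advice-style rows.
# Each tuple is (pga_target_for_estimate in bytes, estimated_overalloc_count).
advice = [
    (12582912, 3), (18874368, 3), (25165824, 0),
    (30198784, 0), (50331648, 0), (201326592, 0),
]

def pick_target(rows):
    """Smallest estimated target that Oracle predicts it would not exceed."""
    ok = [target for target, overalloc in rows if overalloc == 0]
    return min(ok) if ok else None

print(pick_target(advice))  # -> 25165824 (24 MB, the "25M" case above)
```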

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management.

v$process

select
   max(pga_used_mem)  max_pga_used_mem,
   max(pga_alloc_mem) max_pga_alloc_mem,
   max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

This displays the sum of current PGA usage across all processes:

select
   sum(pga_used_mem)  sum_pga_used_mem,
   sum(pga_alloc_mem) sum_pga_alloc_mem,
   sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to identify the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   --> checks if the audit trail is turned on

If the output is

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put in the entry audit_trail=db
(c) create spfile from pfile;
(d) startup

3. truncate table aud$;   --> removes any audit trail data residing in the table
   SQL> audit table;      --> starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';   --> this query gives you the username, along with the userhost from which that user was connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp            1000MB
iq_system_main  2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
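As a rough sanity check against those limits (rule of thumb only: each instance needs roughly its PROCESSES parameter worth of semaphores; the instance names and values here are hypothetical):

```python
# Hypothetical instances and their PROCESSES init parameters.
processes = {"PROD": 300, "TEST": 150, "DEV": 100}

SEMMNS = 1024  # system-wide limit from the /etc/system settings above

required = sum(processes.values())
print(required, "of", SEMMNS, "semaphores needed")  # -> 550 of 1024
print("fits:", required <= SEMMNS)
```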

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
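The difference is easy to demonstrate in a scratch directory (filenames here are arbitrary):

```shell
# Demonstrate hard vs. symbolic links in a throw-away directory.
cd "$(mktemp -d)"
echo "hello" > original.txt
ln original.txt hardlink.txt     # hard link: another name for the same inode
ln -s original.txt symlink.txt   # symbolic link: a pointer to the name
rm original.txt
cat hardlink.txt                 # prints "hello": the data survives
cat symlink.txt || echo dangling # fails: the symlink now points at nothing
```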

Creating a tablespace with Auto Segment Space Management is quite simple: include the clause at the end of the CREATE TABLESPACE statement. Here is an example:

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary:

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management?

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace that specifies auto segment space management, and then migrate the objects.

Optional Procedures
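A minimal sketch of that migration, assuming a hypothetical table scott.emp_hist with an index scott.emp_hist_pk (both names invented) and the ASSM tablespace from the example above:

```sql
-- Hypothetical objects; adjust owner/segment names to your system.
ALTER TABLE scott.emp_hist MOVE TABLESPACE no_space_worries_ts;

-- A moved table's indexes are left UNUSABLE and must be rebuilt.
ALTER INDEX scott.emp_hist_pk REBUILD;

-- Verify the segment's new home.
SELECT segment_name, tablespace_name
  FROM dba_segments
 WHERE owner = 'SCOTT' AND segment_name = 'EMP_HIST';
```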

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that allows you to fix corruption of the bitmap states. This procedure recalculates the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age, or years of doing the mundane tasks of a DBA, that makes me want to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it, and it will probably be gone in the next release anyway.

SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user.

• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF, etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus "/ as sysdba" (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: Platform independent

Date Created: Version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>
exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer:

MetaLink Note: 224270.1
http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc and the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus. The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info: the problem was with /var/tmp/.oracle. This directory had root:root as owner on one Linux box and oracle:dba on the working Linux box. Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
  and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
  AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

It is best to keep the version and patches up to date.

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds in 8i.
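The unit conversion is trivial but worth pinning down:

```python
# "CPU used by this session" is reported in centiseconds (1/100 s).
def cpu_seconds(stat_value):
    return stat_value / 100.0

print(cpu_seconds(22))  # -> 0.22 (the example above)
```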

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
  and se.sid = ss.sid
  and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
  and s.sid = &p
  and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this, use Windows Explorer to locate the file at WINNT\system32\wininet.dll:

  -> right click on the file
  -> select Properties
  -> click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   #vvv Oracle Note 269980.1 vvvvvvv
   #KeepAlive On
   KeepAlive Off
   #^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON 23 SEP 2004
QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:
   $ db2stop
4. Start an instance. As the instance owner on the host running db2, issue the following command:
   $ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne   Posts: 4,016   Registered: 5/27/99

Re: no of open cursor   Posted: Aug 26, 2007 10:33 PM   in response to: 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors, i.e. application code defining ref cursors, using ref cursors, but never closing ref cursors. I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj for performance tuning

you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, V$SESSION_WAIT & V$SYSTEM_EVENT

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network and, once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from backupsybasectsintcocso6csoasecso_ot
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'

rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot
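The behavioural difference between the two link types can be demonstrated with a short Python sketch (using a temporary directory rather than the paths above): deleting the original file leaves a symbolic link dangling, while a hard link still reaches the data.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
orig = os.path.join(tmp, "stuff")
with open(orig, "w") as f:
    f.write("important data")

sym = os.path.join(tmp, "sym")
hard = os.path.join(tmp, "hard")
os.symlink(orig, sym)   # like: ln -s stuff sym
os.link(orig, hard)     # like: ln stuff hard

print(os.path.islink(sym))     # True: the symlink is a distinct pointer file
print(os.stat(hard).st_nlink)  # 2: original and hard link share one inode

os.remove(orig)
print(os.path.exists(sym))     # False: the symlink now dangles
with open(hard) as f:          # the hard link still reaches the data
    print(f.read())            # important data
```

Note that most filesystems refuse hard links to directories; the `ln` of a directory shown above generally requires `ln -s` on Linux.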


If you want to move all the objects to another tablespace, just do the following:

SQL> spool <your_path>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes

and gather statistics for those objects.

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                      VALUE        UNIT
----------------------------------------  -----------  ------------
aggregate PGA auto target                 829440000    bytes
aggregate PGA target parameter            2516582400   bytes
bytes processed                           2492928000   bytes
cache hit percentage                      86.31        percent
extra bytes read/written                  395366400    bytes
global memory bound                       125747200    bytes
maximum PGA allocated                     2666188800   bytes
maximum PGA used for auto workareas       17203200     bytes
maximum PGA used for manual workareas     52531200     bytes
over allocation count                     0
PGA memory freed back to OS               675020800    bytes
total freeable PGA memory                 6553600      bytes
total PGA allocated                       2395750400   bytes
total PGA inuse                           1528320000   bytes
total PGA used for auto workareas         0            bytes
total PGA used for manual workareas       0            bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
   FOR EST  FACTOR     ADV  BYTES PROCESSED  BYTES RW       CACHE HIT  ALLOC COUNT
----------  ------     ---  ---------------  -------------  ---------  -----------
  12582912     .50     ON          17250304              0     100.00            3
  18874368     .75     ON          17250304              0     100.00            3
  25165824    1.00     ON          17250304              0     100.00            0
  30198784    1.20     ON          17250304              0     100.00            0
  35231744    1.40     ON          17250304              0     100.00            0
  40264704    1.60     ON          17250304              0     100.00            0
  45297664    1.80     ON          17250304              0     100.00            0
  50331648    2.00     ON          17250304              0     100.00            0
  75497472    3.00     ON          17250304              0     100.00            0
 100663296    4.00     ON          17250304              0     100.00            0
 150994944    6.00     ON          17250304              0     100.00            0
 201326592    8.00     ON          17250304              0     100.00            0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA target this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary
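The selection rule just described can be sketched in a few lines of Python: pick the smallest candidate target whose estimated over-allocation count is zero (the rows below mirror the sample v$pga_target_advice output above; this is an illustration, not an Oracle API).

```python
# Each row: (pga_target_for_estimate in bytes, estd_overalloc_count),
# taken from the sample v$pga_target_advice output above.
advice = [
    (12582912, 3),
    (18874368, 3),
    (25165824, 0),
    (30198784, 0),
    (35231744, 0),
]

# Smallest target Oracle estimates it would never have had to exceed.
candidates = [target for target, overalloc in advice if overalloc == 0]
recommended = min(candidates)
print(recommended, "bytes =", recommended // (1024 * 1024), "MB")
```

With the sample data this picks 25165824 bytes (24 MB, the "25M" row), the first row with a zero over-allocation count.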

There are other views that are also useful for PGA memory management

v$process

select
    max(pga_used_mem)  max_pga_used_mem,
    max(pga_alloc_mem) max_pga_alloc_mem,
    max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

The following displays the sum of current PGA usage across all processes:

select
    sum(pga_used_mem)  sum_pga_used_mem,
    sum(pga_alloc_mem) sum_pga_alloc_mem,
    sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

2(a) shutdown immediate              -- to enable the audit trail
 (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
 (c) create spfile from pfile;
 (d) startup

3. truncate table sys.aud$;          -- removes any audit trail data residing in the table

SQL> audit table;                    -- this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
temporary size 1000
iq page size 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups. A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
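The two incremental strategies can be made concrete with a small Python sketch (the changed-block sets per day are made up, not RMAN output): a differential level 1 copies blocks changed since the most recent backup, while a cumulative level 1 copies everything changed since the level 0 base.

```python
# Blocks changed on each day after the day-0 level 0 (base) backup.
changed = {1: {"a", "b"}, 2: {"c"}, 3: {"d", "e"}}

def differential_level1(day):
    # Blocks changed since the previous backup (a level 1 ran every day).
    return changed[day]

def cumulative_level1(day):
    # Blocks changed since the level 0 base backup on day 0.
    blocks = set()
    for d in range(1, day + 1):
        blocks |= changed[d]
    return blocks

print(sorted(differential_level1(3)))  # ['d', 'e']
print(sorted(cumulative_level1(3)))    # ['a', 'b', 'c', 'd', 'e']
```

The cumulative backup on day 3 re-copies the blocks already captured on days 1 and 2 (the duplicated work mentioned above), but a restore then needs only the level 0 plus that one cumulative backup.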

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things in a very simple way; I am not able to find anything I am missing.

If yes please let me know

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


• Audit Options • View Audit Trail • Maintenance • Security

Server Setup

To allow auditing on the server, you must:

• Set audit_trail = true in the init.ora file
• Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS

Audit Options

Assuming that the fireid user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by fireid, along with some system events:

• DDL (CREATE, ALTER & DROP of objects)
• DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
• SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest:

• Username: Oracle username
• Terminal: machine that the user performed the action from
• Timestamp: when the action occurred
• Object Owner: the owner of the object that was interacted with
• Object Name: the name of the object that was interacted with
• Action Name: the action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE)

Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT', 'COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1    Type: DIAGNOSTIC TOOLS

Last Revision Date: 30-MAY-2007    Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046.

Reads a raw SQL trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in prior versions of this tool and in the current TKPROF.

Product Name: RDBMS. Product Version: 9i (9.2), 10g or higher.

Can be used for Oracle Apps 11i or higher or for any other application running on top of an Oracle database

Platform Platform independent

Date Created: Version 2.4.3, May 2007

Author Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from

the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with SYSDBA privilege.

Once installed, it does not require special privileges and it can be executed from

any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information. Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing

this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note 97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug 2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug 1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug 1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug 1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug 1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
------------------------
Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note 215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note 39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
  from v$sesstat ss, v$session se
 where ss.statistic# in (select statistic# from v$statname
                          where name = 'CPU used by this session')
   and se.sid = ss.sid
   and ss.sid > 6
 order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
  from v$session_wait w, v$session s, v$process p, v$sqlarea q
 where s.paddr = p.addr
   and s.sid = &p
   and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;

Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

  (b) MOD_OC4J_0015 MOD_OC4J_0078
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

  (c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

  (d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is: exporting operators, exporting referential integrity constraints, exporting triggers:

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
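The three offline variants can be sketched as follows (the tablespace name is hypothetical):

```sql
-- Sketch: the offline options described above, against a hypothetical tablespace
alter tablespace app_data offline normal;     -- checkpoint; no recovery needed later
alter tablespace app_data offline temporary;  -- offline files may need recovery
alter tablespace app_data offline immediate;  -- media recovery required before online
alter tablespace app_data online;
```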

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 instance restart:

1. Log in as the db2 user:  su - db2inst1
2. Go to the sqllib directory:  cd sqllib
3. Stop the instance:

   $ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

   $ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
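A quick way to see how close sessions are to the limit; a sketch, with the statistic name as it appears in 9i/10g:

```sql
-- Sketch: current open cursor count per session vs. the OPEN_CURSORS limit
select s.sid, s.value as open_cur,
       (select p.value from v$parameter p
         where p.name = 'open_cursors') as max_allowed
  from v$sesstat s, v$statname n
 where n.name = 'opened cursors current'
   and s.statistic# = n.statistic#
 order by s.value desc;
```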

Werner

Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no of open cursor    Posted: Aug 26, 2007 10:33 PM    in response to: 174313


> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
  from v$open_cursor c
 group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
 order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, V$SYSTEM_EVENT & V$SESSION_WAIT.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same (note that most filesystems do not actually permit hard links to directories). To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <yourpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<yourpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.
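The index rebuild step exists because moving a table marks its indexes UNUSABLE. A sketch of generating the follow-up statements (schema name taken from the example above):

```sql
-- Sketch: generate rebuilds for indexes invalidated by the moves, then gather stats
select 'alter index ' || owner || '.' || index_name || ' rebuild;'
  from dba_indexes
 where status = 'UNUSABLE';

exec dbms_stats.gather_schema_stats('RAKESH');
```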

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);
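Once tracing is started, the trace file lands in the directory pointed to by user_dump_dest. A sketch of locating it (the file name pattern varies by platform and version):

```sql
-- Sketch: find the directory where the trace file is written
show parameter user_dump_dest
-- The file is typically named <instance>_ora_<ospid>.trc and can then be
-- formatted with tkprof, e.g.:  tkprof mydb_ora_12345.trc out.txt
```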

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE.

When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
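A minimal sketch of such a logon trigger (the account name and sort size are hypothetical):

```sql
-- Sketch: force manual workarea management for a bulk-load account
create or replace trigger imp_user_logon
after logon on database
begin
  if user = 'IMP_USER' then  -- hypothetical import account
    execute immediate 'alter session set workarea_size_policy = manual';
    execute immediate 'alter session set sort_area_size = 104857600';  -- 100MB
  end if;
end;
/
```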

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_for_estimate view.

v$pgastat

select *
  from v$pgastat
 order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic 'maximum PGA allocated' displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
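Adjusting the target after consulting the advice view can be sketched as follows (the value is illustrative only):

```sql
-- Sketch: raise pga_aggregate_target based on v$pga_target_advice findings
alter system set pga_aggregate_target = 25M scope = both;  -- assumes an spfile is in use
show parameter pga_aggregate_target
```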

There are other views that are also useful for PGA memory management

v$process

select
    max(pga_used_mem)  max_pga_used_mem,
    max(pga_alloc_mem) max_pga_alloc_mem,
    max(pga_max_mem)   max_pga_max_mem
  from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage across processes:

select
    sum(pga_used_mem)  sum_pga_used_mem,
    sum(pga_alloc_mem) sum_pga_alloc_mem,
    sum(pga_max_mem)   sum_pga_max_mem
  from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

   If the output is:

   NAME            TYPE        VALUE
   --------------- ----------- ------
   audit_trail     string      DB

   then go to step 3, else:

   (a) shutdown immediate        -- to enable the audit trail
   (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
   (c) create spfile from pfile;
   (d) startup

3. truncate table aud$;    --> to remove any audit trail data residing in the table

4. SQL> audit table;    --> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';

   --> this query gives you the username, along with the userhost from where that username is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp            1000MB
iq_system_main  2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, whether full or incremental. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
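In RMAN syntax the two incremental types discussed above differ only by the CUMULATIVE keyword; a sketch (run from the RMAN prompt, not SQL*Plus):

```sql
-- Sketch: RMAN incremental backup variants
BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- base level 0
BACKUP INCREMENTAL LEVEL 1 DATABASE;             -- differential incremental (default)
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- cumulative incremental
```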

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If I am, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> 'No space left on device' sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


- Object Name: The name of the object that was interacted with.
- Action Name: The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).

Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.

Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE');

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note 224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for transaction performance analysis. The generated report is more readable and extensive than the text format used by prior versions of this tool and by current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher
Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.
Platform: platform independent
Date Created: version 2.4.3, May 2007
Author: Carlos Sierra

Instructions

Execution Environment:
Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL Trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:
To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges and can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename in udump directory>');

General Information. Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL.
2. Unzipped TRCA.zip into the INSTALL directory.
3. Created a directory under $ORACLE_HOME named TraceAnalyzer.
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory.
5. Logged onto Oracle as SYS:

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM.
8. Ran the installation script TRCACREA.sql.

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc, and that the trace file is located at c:\oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error. Reply from eltorio on 3/14/2005 7:08:00 AM:

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box, and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where hash_value = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation; slowness or failures in async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU-consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).
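The centisecond unit converts to seconds by simple division; a small Python helper illustrating the arithmetic (the helper name is ours, not an Oracle API):

```python
# "CPU used by this session" is reported in centiseconds (1/100ths of a second).
def cpu_centiseconds_to_seconds(value: int) -> float:
    """Convert a v$sesstat 'CPU used by this session' value to seconds."""
    return value / 100.0

# The example from the text: a raw value of 22 means 0.22 seconds of CPU.
print(cpu_centiseconds_to_seconds(22))   # 0.22
print(cpu_centiseconds_to_seconds(100))  # 1.0
```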

Other statistics can be found via the CONSUMED_CPU_TIME column of view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note:215848.1>). Also do not confuse this time with the timing done in sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT LENGTH(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
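The query works because v$process.addr is a raw address printed in hex, and each hex digit encodes 4 bits. The same arithmetic sketched in Python (the sample addresses are illustrative, not from a real instance):

```python
def word_length_bits(addr_hex: str) -> int:
    """Mirror LENGTH(addr)*4: each hex digit of the raw address encodes 4 bits."""
    return len(addr_hex) * 4

# A 16-digit address (as printed by a 64-bit Oracle binary) yields 64;
# an 8-digit address (32-bit binary) yields 32.
print(word_length_bits("00000000862A5C88"))  # 64
print(word_length_bits("862A5C88"))          # 32
```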

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;

FILE_NAME                                    AUT
-------------------------------------------- ---
/oradata1/CDOi1/data/ULOG_TS.dbf             YES
/oracle/CDOi1/data/users02.dbf               YES

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1
Type: PROBLEM
Last Revision Date: 08-FEB-2007
Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

  (b) MOD_OC4J_0015 MOD_OC4J_0078
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

  (c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013
      MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

  (d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

  The above list is not definitive and other sequences may be possible.

  The following is one example sequence as seen in a log file:

  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
  MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
  MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
  MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE '%JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written; any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint; you must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size: prtconf

DB2 - restarting an instance:
1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Re: no of open cursor, posted Aug 26, 2007 10:33 PM in response to 174313:

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 desc;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application
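The grouping logic of that V$OPEN_CURSOR query can be sketched offline in Python; the handle rows below are invented for illustration, with one session (sid 101) leaking handles for a single statement:

```python
from collections import Counter

# Each tuple mimics a row of v$open_cursor: (sid, address, hash_value).
# A leaking session keeps many handles open for the very same SQL.
open_cursors = [
    (101, "0x1A", 555), (101, "0x1A", 555), (101, "0x1A", 555),
    (101, "0x1A", 555), (102, "0x2B", 777), (103, "0x3C", 888),
]

def suspected_leaks(rows, threshold=2):
    """Return (sid, address, hash_value) keys with more than `threshold` handles,
    mirroring the GROUP BY ... HAVING COUNT(*) > 2 in the query above."""
    counts = Counter(rows)
    return {key: n for key, n in counts.items() if n > threshold}

print(suspected_leaks(open_cursors))  # {(101, '0x1A', 555): 4}
```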

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from '/backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file that appears just like an ordinary file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of it. If the original file is deleted, the data remains accessible through the hard link; it is lost only when the last hard link is removed.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
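The difference between the two link types can be demonstrated with a small Python sketch using scratch files (temporary paths, not the examples above):

```python
import os
import tempfile

# Create a scratch file to link against.
tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "original")
with open(original, "w") as f:
    f.write("hello")

os.symlink(original, os.path.join(tmp, "symlink"))  # like: ln -s original symlink
os.link(original, os.path.join(tmp, "hardlink"))    # like: ln original hardlink

# The hard link shares the original's inode; the symlink is just a path pointer.
assert os.stat(original).st_ino == os.stat(os.path.join(tmp, "hardlink")).st_ino

# Deleting the original breaks the symlink, but the hard link keeps the data alive.
os.remove(original)
print(open(os.path.join(tmp, "hardlink")).read())    # hello
print(os.path.exists(os.path.join(tmp, "symlink")))  # False (dangling link)
print(os.path.islink(os.path.join(tmp, "symlink")))  # True
```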

The syntax for creating a hard link of a directory is the same, although note that most filesystems do not permit hard links to directories. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.
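The spool trick above generates one DDL statement per segment. A hedged sketch of the same generation in Python (the segment list is invented; note that for indexes the valid relocation DDL is ALTER INDEX ... REBUILD TABLESPACE, since ALTER INDEX ... MOVE is not valid syntax):

```python
# Sample rows as they might come from dba_segments (made up for illustration).
segments = [
    ("TABLE", "EMP"),
    ("INDEX", "EMP_PK"),
]

def move_statements(segs, target_ts="XYZ"):
    """Generate one relocation DDL statement per segment, mirroring the
    select 'alter '||segment_type||' '||segment_name||'...' spool pattern."""
    stmts = []
    for seg_type, seg_name in segs:
        # Indexes are relocated with REBUILD, tables with MOVE.
        verb = "rebuild" if seg_type == "INDEX" else "move"
        stmts.append(f"alter {seg_type.lower()} {seg_name} {verb} tablespace {target_ts};")
    return stmts

for s in move_statements(segments):
    print(s)
# alter table EMP move tablespace XYZ;
# alter index EMP_PK rebuild tablespace XYZ;
```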

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003, Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount; it allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4 Confirm that the user is listed in the password file

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000K. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
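As a rough illustration of the ~5% rule (only the arithmetic implied by the text; the exact behavior of _smm_max_size is undocumented and version-dependent):

```python
# Approximate per-session work area cap described in the text: ~5% of
# pga_aggregate_target, expressed in KB. This is an assumption for
# illustration, not Oracle's actual internal algorithm.
def session_work_area_cap_kb(pga_aggregate_target_bytes: int) -> int:
    return int(pga_aggregate_target_bytes * 0.05) // 1024

# With a ~2.4 GB pga_aggregate_target (the value shown in the v$pgastat
# output below), a session would be capped at roughly 120 MB:
print(session_work_area_cap_kb(2516582400))  # 122880 (KB)
```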

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET                        ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
   FOR EST      FACTOR  ADV  BYTES PROCESSED         BYTES RW  CACHE HIT %    ALLOC COUNT
----------  ----------  ---  ---------------  ---------------  -------------  --------------
  12582912         0.5  ON          17250304                0         100.00               3
  18874368        0.75  ON          17250304                0         100.00               3
  25165824         1.0  ON          17250304                0         100.00               0
  30198784         1.2  ON          17250304                0         100.00               0
  35231744         1.4  ON          17250304                0         100.00               0
  40264704         1.6  ON          17250304                0         100.00               0
  45297664         1.8  ON          17250304                0         100.00               0
  50331648         2.0  ON          17250304                0         100.00               0
  75497472         3.0  ON          17250304                0         100.00               0
 100663296         4.0  ON          17250304                0         100.00               0
 150994944         6.0  ON          17250304                0         100.00               0
 201326592         8.0  ON          17250304                0         100.00               0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA target would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA target this would not have happened.
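The sizing rule in the paragraph above (pick the smallest pga_aggregate_target whose estimated over-allocation count is zero) can be spelled out in a few lines, using the v$pga_target_advice rows shown earlier; this is an illustrative sketch, not an Oracle API:

```python
# (pga_target_for_estimate bytes, estimated_overalloc_count) pairs taken
# from the v$pga_target_advice output shown earlier in this section.
advice = [
    (12582912, 3), (18874368, 3), (25165824, 0), (30198784, 0),
    (35231744, 0), (40264704, 0), (45297664, 0), (50331648, 0),
]

# Smallest target for which Oracle estimates no over-allocation at all.
good_targets = [target for target, overalloc in advice if overalloc == 0]
recommended = min(good_targets)
print(recommended)  # 25165824 (bytes), i.e. the ~25M target from the text
```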

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
       max(pga_used_mem)  max_pga_used_mem,
       max(pga_alloc_mem) max_pga_alloc_mem,
       max(pga_max_mem)   max_pga_max_mem
  from v$process;

This will show the maximum PGA usage per process.

select
       sum(pga_used_mem)  sum_pga_used_mem,
       sum(pga_alloc_mem) sum_pga_alloc_mem,
       sum(pga_max_mem)   sum_pga_max_mem
  from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

The following canned scripts may also be of use.

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail     -- checks if the audit trail is turned on

If the output is:

NAME         TYPE    VALUE
------------ ------- ------
audit_trail  string  DB

then go to step 3; else:

2(a) shutdown immediate               -- to enable the audit trail
 (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put in the entry audit_trail=db
 (c) create spfile from pfile
 (d) startup

3. truncate table aud$;               -- removes any audit trail data residing in the table

4. SQL> audit table;                  -- starts auditing events pertaining to tables

5. select action_name, username, userhost,
          to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
     from dba_audit_trail
    where action_name like 'DROP TABLE%';
   -- gives you the username, along with the userhost from which that user was connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
  iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
  message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
  temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
  iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
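The restore-chain consequence of the two incremental types can be sketched in a few lines of Python (a toy model for illustration, not an RMAN interface): with differential level 1 backups a restore needs every incremental taken since the level 0, while with cumulative level 1 backups only the most recent one is needed.

```python
# Toy model: day 0 is a level 0 backup, days 1..n are daily level 1 backups.

def restore_chain(days_since_level0, strategy):
    """Return the list of backups needed to restore after the last backup.

    'differential': each level 1 captures changes since the previous
                    level 1 (or level 0), so the whole chain is needed.
    'cumulative':   each level 1 captures changes since the level 0,
                    so only the newest level 1 is needed.
    """
    level0 = ["level0(day0)"]
    incrementals = [f"level1(day{d})" for d in range(1, days_since_level0 + 1)]
    if strategy == "differential":
        return level0 + incrementals        # apply every incremental in order
    elif strategy == "cumulative":
        return level0 + incrementals[-1:]   # one incremental suffices
    raise ValueError(strategy)

print(restore_chain(3, "differential"))
# ['level0(day0)', 'level1(day1)', 'level1(day2)', 'level1(day3)']
print(restore_chain(3, "cumulative"))
# ['level0(day0)', 'level1(day3)']
```

The trade-off described in the quote shows up directly: the cumulative restore chain is shorter, at the cost of each daily backup re-copying blocks already captured by earlier level 1 backups.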

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj
RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if I have, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
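The three kernel settings above are related: seminfo_semmni is the number of semaphore sets, seminfo_semmsl the maximum semaphores per set, and seminfo_semmns the system-wide total. A total larger than sets times semaphores-per-set can never be reached, so a quick sanity check can be sketched (a rule-of-thumb check, not an official Oracle or Solaris formula):

```python
def sem_settings_consistent(semmni, semmns, semmsl):
    """A SEMMNS (system-wide total) larger than SEMMNI * SEMMSL
    (sets times max semaphores per set) is unreachable, so flag it."""
    return semmns <= semmni * semmsl

# The /etc/system values quoted above: 1024 total fits within 100 * 256.
print(sem_settings_consistent(semmni=100, semmns=1024, semmsl=256))  # True
```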


Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL trace generated by standard SQL trace or by EVENT 10046 (level 4, 8 or 12) and generates a comprehensive HTML report with performance-related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

The output HTML report includes all the details found in TKPROF, plus additional information normally requested and used for a transaction performance analysis. The generated report is more readable and extensive than the text format used in the prior version of this tool and in the current TKPROF.

Product Name: RDBMS
Product Version: 9i (9.2), 10g or higher

Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database.

Platform: platform independent

Date Created: version 2.4.3, May 2007

Author: Carlos Sierra

Instructions

Execution Environment

Once this tool is installed (under its own schema), it is executed from SQL*Plus from

the schema owning the transaction that generated the raw SQL trace.

For example, if used on an Oracle Applications instance, execute it using the APPS user.

Access Privileges

To install, it requires connection as a user with the SYSDBA privilege.

Once installed, it does not require special privileges and it can be executed from

any schema user.

Usage (standard method)

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

General Information
Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and a substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL directory to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly, as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
 where hash_value = (select s.sql_hash_value
                       from v$process p, v$session s
                      where s.paddr = p.addr
                        and p.spid = 11270);

select sid, name, value
  from v$statname n, v$sesstat s
 where n.statistic# = s.statistic#
   and name like 'session%memory%'
 order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
  from v$session s, v$process p
 where p.spid = 17883
   and s.paddr = p.addr;

SELECT units
  FROM v$sql, v$session_longops
 WHERE sql_address = address
   AND sql_hash_value = hash_value
 ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control DBWR by setting either the db_writer_processes or the dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An Oracle (user) process
-----------------------------------------
Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds (in 8i).
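The unit conversion is simple; a tiny helper makes it explicit (centiseconds to seconds, matching the 22 -> 0.22 example above):

```python
def session_cpu_seconds(cpu_used_by_this_session):
    """Convert the 'CPU used by this session' statistic (reported in
    1/100ths of a second, i.e. centiseconds) to seconds."""
    return cpu_used_by_this_session / 100.0

print(session_cpu_seconds(22))  # 0.22
```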

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
  from v$sesstat ss, v$session se
 where ss.statistic# in (select statistic#
                           from v$statname
                          where name = 'CPU used by this session')
   and se.sid = ss.sid
   and ss.sid > 6
 order by ss.sid;

For the values of command please look at the definition of V$session in the reference manual

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
  from v$session_wait w, v$session s, v$process p, v$sqlarea q
 where s.paddr = p.addr
   and s.sid = &p
   and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
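The arithmetic behind that query: v$process.addr is a RAW value displayed as hexadecimal, and each hex digit encodes 4 bits, so the string length times 4 gives the word size. A quick sketch (the address strings here are made-up examples, not values from a live query):

```python
def word_length_bits(addr_hex):
    """Each hex digit of the displayed address represents 4 bits,
    so an 8-char address means 32-bit, a 16-char address 64-bit."""
    return len(addr_hex) * 4

print(word_length_bits("0000000085FDE2B8"))  # 64 -> 64-bit Oracle binary
print(word_length_bits("8A2F44C0"))          # 32 -> 32-bit Oracle binary
```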

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
  FROM dba_objects;

SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user:
   su - db2inst1
   bash
2. Go to the sqllib directory:
   cd sqllib
3. Stop the instance:
   $ db2stop
4. Start an instance. As the instance owner on the host running db2, issue the following command:
   $ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne   Posts: 4,016   Registered: 5/27/99

Re: no of open cursors. Posted Aug 26, 2007 10:33 PM in response to 174313.

Reply

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles open for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
  from v$open_cursor c
 group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
 order by 4 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement for which the app creates so many handles, and then trace and fix the problem in the application.
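The grouping rule in that v$open_cursor query can be mirrored in a few lines (the handle tuples below are hypothetical sample data, not output from a live database):

```python
from collections import Counter

# Hypothetical open-cursor handles as (sid, address, hash_value) tuples,
# mirroring the GROUP BY ... HAVING COUNT(*) > 2 logic of the SQL above.
open_cursors = [
    (101, "0x1A", 111), (101, "0x1A", 111), (101, "0x1A", 111),
    (101, "0x1A", 111),                      # 4 handles, same SQL: leak
    (102, "0x2B", 222), (102, "0x2B", 222),  # 2 handles: typical reuse
    (103, "0x3C", 333),
]

copies = Counter(open_cursors)
leaks = {key: n for key, n in copies.items() if n > 2}
print(leaks)  # {(101, '0x1A', 111): 4} -> candidate cursor-leaking session
```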

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
  from 'backupsybasectsintcocso6csoasecso_ot'
  rename IQ_SYSTEM_MAIN  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
  rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
  rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
  rename IQ_SYSTEM_TEMP  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
  rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
  from '/sybdata1/dump/cso_ot.dmp'
  rename IQ_SYSTEM_MAIN  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
  rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
  rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
  rename IQ_SYSTEM_TEMP  to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
  rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
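The behavior described above can be checked with a short Python sketch in a temporary directory; os.link and os.symlink mirror ln and ln -s, and the file names are made up for the demo:

```python
import os
import tempfile

# Demonstrate the claims above: a hard link survives deletion of the
# original name, while a symbolic link is left dangling.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "stuff")
    with open(original, "w") as f:
        f.write("important data")

    hard = os.path.join(d, "thing")    # like: ln stuff thing
    os.link(original, hard)
    soft = os.path.join(d, "pointer")  # like: ln -s stuff pointer
    os.symlink(original, soft)

    os.remove(original)                # delete the original name

    with open(hard) as f:              # hard link is another name for the inode
        hard_content = f.read()
    soft_exists = os.path.exists(soft)    # follows the (now dead) link
    soft_lexists = os.path.lexists(soft)  # the link itself still exists

print(hard_content)   # important data
print(soft_exists)    # False
print(soft_lexists)   # True
```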

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most UNIX filesystems do not actually permit hard links to directories, so expect this form to fail with "Operation not permitted" on typical systems.)

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <your_path>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;'
       from dba_segments
      where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SQL> SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out SID and SERIAL from v$session For example

SELECT FROM v$session WHERE osuser = OSUSER

to start trace

EXECUTE dbms_supportstart_trace_in_session (SID SERIAL)

to stop trace

EXECUTE dbms_supportstop_trace_in_session (SID SERIAL)

- or -

EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL TRUE)EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL FALSE)
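On 10g, an alternative worth noting (not part of the original note) is the DBMS_MONITOR package, which supersedes dbms_support for session tracing; a sketch, with SID 123 and SERIAL# 456 as placeholder values:

```sql
-- Start extended SQL trace (wait events included) for one session
EXECUTE dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- Stop it again
EXECUTE dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
```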

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.
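A sketch of such a logon trigger, with the account name IMPORT_USER and the sort area size as placeholder assumptions:

```sql
-- Switch one specific account to manual workarea management at logon
create or replace trigger import_user_logon
after logon on database
begin
  if user = 'IMPORT_USER' then
    execute immediate 'alter session set workarea_size_policy = manual';
    execute immediate 'alter session set sort_area_size = 104857600'; -- 100 MB, illustrative
  end if;
end;
/
```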

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0.0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0.0 bytes

total PGA used for manual workareas 0.0 bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET     ESTIMATED EXTRA   ESTIMATED PGA  ESTIMATED OVER
FOR EST      FACTOR ADV BYTES PROCESSED        BYTES RW        CACHE HIT %     ALLOC COUNT
------------ ------ --- ---------------- ---------------- --------------- ---------------
    12582912    0.5 ON          17250304                0          100.00               3
    18874368   0.75 ON          17250304                0          100.00               3
    25165824    1.0 ON          17250304                0          100.00               0
    30198784    1.2 ON          17250304                0          100.00               0
    35231744    1.4 ON          17250304                0          100.00               0
    40264704    1.6 ON          17250304                0          100.00               0
    45297664    1.8 ON          17250304                0          100.00               0
    50331648    2.0 ON          17250304                0          100.00               0
    75497472    3.0 ON          17250304                0          100.00               0
   100663296    4.0 ON          17250304                0          100.00               0
   150994944    6.0 ON          17250304                0          100.00               0
   201326592    8.0 ON          17250304                0          100.00               0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a drop table command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:

2(a) shutdown immediate    [to enable the audit trail]
 (b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
 (c) create spfile from pfile
 (d) startup

3. SQL> truncate table aud$;    --> to remove any audit trail data residing in the table
   SQL> audit table;            --> this starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';
   --> this query gives you the username, along with the userhost from where that user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one) you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing; if yes, please let me know.
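In RMAN syntax the two flavors discussed in this thread differ only by the CUMULATIVE keyword; a minimal sketch (10g syntax):

```
BACKUP INCREMENTAL LEVEL 0 DATABASE;             # base backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential: changes since last level 1 or 0
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative: changes since last level 0
```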

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> No space left on device sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
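The /etc/system tunables above are Solaris-specific. A small sketch for checking the equivalent limits on Linux, where the kernel exposes SEMMSL, SEMMNS, SEMOPM and SEMMNI through procfs (falling back with a message elsewhere):

```shell
#!/bin/sh
# Show kernel semaphore limits: SEMMSL SEMMNS SEMOPM SEMMNI on one line (Linux)
if [ -r /proc/sys/kernel/sem ]; then
    cat /proc/sys/kernel/sem
else
    echo "no /proc/sys/kernel/sem here; check /etc/system on Solaris"
fi
```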

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

General Information

Note: Trace Analyzer is a little known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer

MetaLink Note 224270.1

http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one linux box, and oracle:dba on the working linux box. Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where hash_value = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.statistic# = s.statistic# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT * FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more IO bound, but when the OS needs patches or is misbehaving they spin (wait) until the IO operation completes. The spinning is a CPU operation. Slowness or failures in the async IO operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of cpus you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async io enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async IO:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
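Before changing either parameter, it is worth checking how they are currently set; a quick sqlplus sketch:

```sql
-- Current DBWR configuration and CPU count for comparison
show parameter db_writer_processes
show parameter dbwr_io_slaves
select value from v$parameter where name = 'cpu_count';
```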

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async io for lgwr, but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what sql the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1    Type: PROBLEM
Last Revision Date: 08-FEB-2007    Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
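For reference, the three offline options described above correspond to these statements (USERS is a placeholder tablespace name):

```sql
alter tablespace users offline normal;     -- checkpoint all files; no recovery needed later
alter tablespace users offline temporary;  -- checkpoint online files only
alter tablespace users offline immediate;  -- no checkpoint; media recovery required
alter tablespace users online;
```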

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can do this as the SYS user too; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2:

1. Login as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stopping the instance:

$ db2stop

4. Start an instance. As an instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
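To compare actual cursor usage against the configured limit before touching OPEN_CURSORS, something like the following sketch can help (both views and the statistic name are standard):

```sql
-- Configured limit
select value from v$parameter where name = 'open_cursors';

-- Highest number of cursors currently open by any single session
select max(a.value) max_open_cur
from v$sesstat a, v$statname b
where a.statistic# = b.statistic#
and b.name = 'opened cursors current';
```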

Werner

Billy Verreynne    Posts: 4,016    Registered: 5/27/99

Re: no of open cursor    Posted: Aug 26, 2007 10:33 PM    in response to: 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj: for performance tuning,

you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, V$SESSION_WAIT & V$SYSTEM_EVENT.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

RESTORE DATABASE '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
FROM '/backup/sybase/ctsintcocso6/csoase/cso_ot'
RENAME IQ_SYSTEM_MAIN  TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
RENAME IQ_SYSTEM_MAIN1 TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
RENAME IQ_SYSTEM_MAIN2 TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
RENAME IQ_SYSTEM_TEMP  TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
RENAME IQ_SYSTEM_TEMP1 TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile;
sp_iqstatus

stop_asiq

RESTORE DATABASE '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
FROM '/sybdata1/dump/cso_ot.dmp'
RENAME IQ_SYSTEM_MAIN  TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
RENAME IQ_SYSTEM_MAIN1 TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
RENAME IQ_SYSTEM_MAIN2 TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
RENAME IQ_SYSTEM_TEMP  TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
RENAME IQ_SYSTEM_TEMP1 TO '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
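The difference between the two link types can be verified in a scratch directory (a self-contained sketch; the file names are illustrative only):

```shell
#!/bin/sh
# Demonstrate hard links vs. symbolic links in a temporary directory.
set -e
dir=$(mktemp -d)
cd "$dir"

echo "hello" > original.txt
ln original.txt hard.txt        # hard link: another name for the same inode
ln -s original.txt soft.txt     # symbolic link: a pointer to the name

rm original.txt                 # remove the original name

cat hard.txt                    # still prints "hello" - the data survives
cat soft.txt 2>/dev/null || echo "dangling symlink"  # the pointer is now broken

cd /
rm -rf "$dir"
```

Note how deleting the original leaves the hard link intact but breaks the symbolic link, which is exactly the behavior described above.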

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note: in practice most Unix systems refuse hard links to directories, or permit them only to root; use a symbolic link instead.)

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <ur_path>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;'
     from dba_segments
     where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<ur_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003 - Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic 'maximum PGA allocated' will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics 'maximum PGA used for auto workareas' and 'maximum PGA used for manual workareas' will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET      ESTIMATED        ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST      FACTOR     ADV  BYTES PROCESSED  BYTES R/W       CACHE HIT     ALLOC COUNT
------------ ---------- ---  ---------------- --------------- ------------- --------------
12582912     0.50       ON   17250304         0               100.00        3
18874368     0.75       ON   17250304         0               100.00        3
25165824     1.00       ON   17250304         0               100.00        0
30198784     1.20       ON   17250304         0               100.00        0
35231744     1.40       ON   17250304         0               100.00        0
40264704     1.60       ON   17250304         0               100.00        0
45297664     1.80       ON   17250304         0               100.00        0
50331648     2.00       ON   17250304         0               100.00        0
75497472     3.00       ON   17250304         0               100.00        0
100663296    4.00       ON   17250304         0               100.00        0
150994944    6.00       ON   17250304         0               100.00        0
201326592    8.00       ON   17250304         0               100.00        0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
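If the advice view suggests a different target, it can be changed online; the 25M below is just the figure from the example above, not a recommendation:

```sql
-- Illustrative: adopt the target suggested by v$pga_target_advice.
alter system set pga_aggregate_target = 25M;
-- and make sure automatic workarea management is in effect:
alter system set workarea_size_policy = auto;
```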

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

This displays the sum of current PGA usage across all processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    -> checks if the audit trail is turned on

If the output is:

NAME          TYPE        VALUE
------------- ----------- ------
audit_trail   string      DB

then go to step 3. Else:
(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;    -> to remove any audit trail data residing in the table
   SQL> audit table;       -> this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';
   -> this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
IQ PATH '/sybdata1/syb126IQ/csoperf/csoperf01.iq' IQ SIZE 2000
MESSAGE PATH '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
TEMPORARY PATH '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' TEMPORARY SIZE 1000
IQ PAGE SIZE 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
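In RMAN syntax the two variants differ only by the CUMULATIVE keyword; a minimal sketch (assumes a connected target database, and not part of the original thread):

```rman
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             # level 0 base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential incremental (the default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative incremental
```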

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear for me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
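For comparison, on Linux (an aside; the /etc/system mechanism above is Solaris-specific) the current semaphore limits can be read from procfs:

```shell
#!/bin/sh
# Print the kernel semaphore limits, in order: semmsl semmns semopm semmni.
cat /proc/sys/kernel/sem
```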


Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

conn / as sysdba

6. Performed the following grants to SYSTEM:

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct the errors. They will most likely be caused by permissions granted through roles by SYS, rather than being granted explicitly as required.

Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory, with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql
where HASH_VALUE = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.STATISTIC# = s.STATISTIC#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883 and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7:
<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANISM

Advanced Queuing, also known as AQ / QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference:

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic#
                        from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length
FROM v$process
WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1   Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

  (a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (b) MOD_OC4J_0015, MOD_OC4J_0078
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

  (c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
      MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

  (d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
    or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases, the requests are resubmitted by the browser without
the needed HTTP headers.
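To make the missing-header failure mode concrete, here is a small illustrative sketch (not Oracle or Microsoft code; the host name, path, and helper are made up) of what a well-formed HTTP/1.1 POST carries versus the stripped resubmission described above:

```python
# A well-formed HTTP/1.1 POST carries Content-Type and Content-Length headers;
# the buggy resubmission omits them, so the server cannot read the request body.

def build_post(path, body, include_entity_headers=True):
    """Assemble a raw HTTP/1.1 POST request as bytes."""
    lines = [
        f"POST {path} HTTP/1.1",
        "Host: example.com",          # hypothetical host
        "Connection: Keep-Alive",
    ]
    if include_entity_headers:
        lines.append("Content-Type: application/x-www-form-urlencoded")
        lines.append(f"Content-Length: {len(body)}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode() + body

good = build_post("/app", b"user=scott")
bad = build_post("/app", b"user=scott", include_entity_headers=False)

print(b"Content-Length" in good)   # True: server knows how much body to read
print(b"Content-Length" in bad)    # False: the server sees a malformed POST
```

Without Content-Length (or chunked encoding), the server has no way to know where the resubmitted request body ends, which is consistent with the mod_oc4j failures above.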

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: ORACLE_HOME\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

#vvv Oracle Note 269980.1 vvvvvvv
#KeepAlive On
KeepAlive Off
#^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: ORACLE_HOME\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 282007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

ALTER TABLESPACE ... OFFLINE options:

NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size:

prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne, Re: no of open cursors (posted Aug 26, 2007, in response to 174313):

> how to resolve this if the no. of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors,

i.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
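The logic of the query above can be sketched in plain Python (hypothetical rows, not a real v$open_cursor feed) to show what "cursor copies per session" means:

```python
# Mimic the v$open_cursor query: count open cursor handles per
# (sid, sql hash_value) and flag any SQL for which one session
# holds more than 2 handles - the cursor-leak signature.
from collections import Counter

def leaking_cursors(open_cursors, threshold=2):
    """open_cursors: list of (sid, hash_value) rows, one per open handle."""
    copies = Counter(open_cursors)
    return {key: n for key, n in copies.items() if n > threshold}

# Session 17 has opened the same statement (hash 0xBEEF) 4 times
# without closing it; session 23 is at the normal 2-handle level.
rows = [(17, 0xBEEF)] * 4 + [(17, 0xCAFE), (23, 0xBEEF), (23, 0xBEEF)]
print(leaking_cursors(rows))   # {(17, 48879): 4}
```

Only the (session, SQL) pair that exceeds the threshold is reported, just as the HAVING clause filters in the SQL version.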

Nagaraj, for performance tuning

you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot
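The behaviour described above can be demonstrated with a short Python sketch using throwaway files in a temp directory (the /export and /var/www paths from the text are not used; hard-linking here is shown on a file, since many systems refuse hard links to directories):

```python
# Demonstrate symbolic vs hard links with the os module.
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "stuff")
with open(orig, "w") as f:
    f.write("hello")

# Symbolic link: a pointer to the original path (like ln -s).
sym = os.path.join(d, "stuff_sym")
os.symlink(orig, sym)
print(os.path.islink(sym), os.readlink(sym) == orig)   # True True

# Hard link: another directory entry for the same file (like ln).
hard = os.path.join(d, "stuff_hard")
os.link(orig, hard)
print(os.stat(orig).st_nlink)   # 2 - the original plus the hard link

# Delete the original: the hard link still reads the data,
# but the symlink is now dangling.
os.remove(orig)
print(open(hard).read())        # hello
print(os.path.exists(sym))      # False - dangling symlink
```

This matches the text: the hard link is a second reference to the same data, while the symlink only stores the original path and breaks when that path goes away.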


If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SQL> SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes

and gather statistics for those objects.
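The spooled query above is just DDL generation. As a hedged sketch, the same statements can be produced from a hypothetical list of (segment_type, segment_name) rows, the shape dba_segments would return (the sample rows are made up):

```python
# Generate ALTER ... MOVE TABLESPACE statements, as the spooled
# SELECT above does for every segment in the source tablespace.
def move_statements(segments, target_ts="xyz"):
    return [
        f"alter {seg_type.lower()} {seg_name} move tablespace {target_ts};"
        for seg_type, seg_name in segments
    ]

rows = [("TABLE", "EMP"), ("TABLE", "DEPT")]     # sample rows, not real data
for stmt in move_statements(rows):
    print(stmt)
# alter table EMP move tablespace xyz;
# alter table DEPT move tablespace xyz;
```

Note that this form is only valid for table segments; index segments need ALTER INDEX ... REBUILD TABLESPACE instead, which is why the procedure ends with "rebuild the indexes".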

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
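As a back-of-envelope illustration only (the exact per-session rules are undocumented and vary by version), the ~5% ceiling works out like this:

```python
# Rough per-session work-area ceiling under automatic PGA management:
# about 5% of pga_aggregate_target, expressed in KB as _smm_max_size is.
def smm_max_size_kb(pga_aggregate_target_bytes, fraction=0.05):
    return int(pga_aggregate_target_bytes * fraction / 1024)

# e.g. a 2516582400-byte (2400 MB) pga_aggregate_target
print(smm_max_size_kb(2516582400))   # 122880 KB, i.e. about 120 MB per session
```

So with a 2400 MB target, no single session's work areas would normally grow beyond roughly 120 MB.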

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED       ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR     ADV BYTES PROCESSED BYTES RW        CACHE HIT %   ALLOC COUNT
---------- ---------- --- --------------- --------------- ------------- --------------
  12582912        0.5 ON         17250304               0        100.00              3
  18874368       0.75 ON         17250304               0        100.00              3
  25165824        1.0 ON         17250304               0        100.00              0
  30198784        1.2 ON         17250304               0        100.00              0
  35231744        1.4 ON         17250304               0        100.00              0
  40264704        1.6 ON         17250304               0        100.00              0
  45297664        1.8 ON         17250304               0        100.00              0
  50331648        2.0 ON         17250304               0        100.00              0
  75497472        3.0 ON         17250304               0        100.00              0
 100663296        4.0 ON         17250304               0        100.00              0
 150994944        6.0 ON         17250304               0        100.00              0
 201326592        8.0 ON         17250304               0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.


This displays the sum of all current PGA usage per process:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues a drop table command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail   --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

2(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;   --> to remove any audit trail data residing in the table

4. SQL> audit table;   --> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%';   --> this query gives you the username along with the userhost from where the username is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp            1000MB
iq_system_main  2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
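The distinction above boils down to which earlier backup an incremental takes as its base. A tiny Python sketch (illustrative only, not RMAN behaviour verbatim; the history data is made up):

```python
# Given a chronological history of (day_number, level) backups, a differential
# incremental bases itself on the most recent level 1 OR level 0 backup, while
# a cumulative incremental bases itself on the most recent level 0 only.
def incremental_base(history, cumulative=False):
    """Return the day of the backup this incremental takes as its base."""
    if cumulative:
        candidates = [day for day, level in history if level == 0]
    else:
        candidates = [day for day, level in history if level in (0, 1)]
    return max(candidates)

history = [(1, 0), (2, 1), (3, 1)]   # Sunday level 0, then two level 1s
print(incremental_base(history))                  # 3: since Tuesday's level 1
print(incremental_base(history, cumulative=True)) # 1: since Sunday's level 0
```

This is why cumulative backups are bigger (they re-copy blocks already captured by earlier level 1s) but make restores simpler: only one incremental is needed on top of the level 0.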

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Running Trace Analyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at \oracle\admin\orabase\udump:

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*) FROM dba_tables t, dba_indexes i WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:\> cd \oracle\ora92\TraceAnalyzer

Start SQL*Plus:

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run the trace analysis:

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus.

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error
Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info. The problem was with /var/tmp/.oracle: this directory had root:root as owner on one Linux box and oracle:dba on the working Linux box.

Why, I don't know, but I changed it:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details:

select * from v$sql where HASH_VALUE = (select s.sql_hash_value from v$process p, v$session s where s.paddr = p.addr and p.spid = 11270);

select sid, name, value from v$statname n, v$sesstat s where n.STATISTIC# = s.STATISTIC# and name like 'session%memory%' order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process from v$session s, v$process p where p.spid = 17883 and s.paddr = p.addr;

SELECT units FROM v$sql, v$session_longops WHERE sql_address = address AND sql_hash_value = hash_value ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========

These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing

this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
========================

The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:

<Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
=======================================

The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An oracle (user) process (Back)-----------------------------------------

Large Queries Procedure compilation or execution Space management and Sorting are examples of operations with very high CPU usage Besides the UNIX or NT way to find a CPU intensive process Oracle has its own statistics The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference

1 select name from v$statname2 where statistic=12SQLgt

NAME---------------------------------CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second; e.g., a value of 22 means 0.22 seconds (in 8i).
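Since the statistic is a plain centisecond counter, converting it is simple arithmetic; a short sketch (the helper name is mine, not Oracle's):

```python
# "CPU used by this session" is reported in centiseconds (1/100ths of a second),
# so converting the raw statistic to seconds is a division by 100. The value 22
# from the example above therefore means 0.22 seconds of CPU time.

def centiseconds_to_seconds(value: int) -> float:
    """Convert an Oracle CPU statistic in 1/100ths of a second to seconds."""
    return value / 100.0

print(centiseconds_to_seconds(22))  # 0.22
```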

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in
      (select statistic#
       from v$statname
       where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
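The query works because ADDR is a raw address rendered as hex text, and every hex digit encodes 4 bits; a small sketch of the same logic (the function name is illustrative only):

```python
# v$process.ADDR is displayed as a hexadecimal string: each hex character
# encodes 4 bits, so an 8-character address implies a 32-bit binary and a
# 16-character address implies a 64-bit binary.

def word_length_from_addr(addr_hex: str) -> str:
    """Mirror of Length(addr)*4 || '-bits' from the query above."""
    return f"{len(addr_hex) * 4}-bits"

print(word_length_from_addr("76A3B2C1"))          # 32-bits
print(word_length_from_addr("0000000076A3B2C1"))  # 64-bits
```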

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1   Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive, and other sequences may be possible.

The following is one example sequence, as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases, the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

  Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
  Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   #vvv Oracle Note 269980.1 vvvvvvv
   #KeepAlive On
   KeepAlive Off
   #^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start an instance:

As an instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne
Posts: 4,016  Registered: 5/27/99

Re: no of open cursor
Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
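The leak pattern described above is easy to reproduce outside Oracle; this sketch uses Python's stdlib sqlite3 module as a stand-in session (the module, table, and loop count are illustrative, not part of the original post):

```python
import sqlite3

# Cursor-leak pattern: every iteration opens a new handle for the very same
# SQL and never closes it, so handles pile up exactly the way v$open_cursor
# shows for a leaking application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

leaked = []
for _ in range(5):
    cur = conn.cursor()              # new cursor handle each time
    cur.execute("SELECT id FROM t")  # same SQL, handle never closed
    leaked.append(cur)

print(len(leaked))  # 5 open handles for one and the same statement

# The fix: close (or scope) every cursor after use.
cur = conn.cursor()
cur.execute("SELECT id FROM t")
cur.close()
```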

Nagaraj, for performance tuning you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
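The difference between the two link types can be seen in a scratch directory; a small demonstration (the paths are throwaway temp files, not the examples above):

```shell
# Create a file, one symbolic link and one hard link to it, then delete the
# original: the hard link still reaches the data, the symlink dangles.
set -e
dir=$(mktemp -d)
cd "$dir"

echo "hello" > original.txt
ln -s original.txt sym.txt   # symbolic link: stores the path
ln original.txt hard.txt     # hard link: second name for the same inode

rm original.txt
cat hard.txt                                       # prints: hello
cat sym.txt 2>/dev/null || echo "sym.txt dangles"  # symlink target is gone
```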

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<ur_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

>spool <ur_path>/objects_move.log

>select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<ur_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus CONNECT command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
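The 5% rule is easy to sanity-check with arithmetic; here with a hypothetical pga_aggregate_target of 2400 MB (the numbers are made up for illustration):

```python
# ~5% of the PGA target is the normal per-session ceiling. _smm_max_size is
# expressed in kilobytes, so a 2400 MB pga_aggregate_target corresponds to a
# per-session cap of about 120 MB, i.e. 122880 KB.

pga_aggregate_target_mb = 2400                       # hypothetical setting
per_session_cap_mb = pga_aggregate_target_mb * 0.05  # the ~5% rule
per_session_cap_kb = int(per_session_cap_mb * 1024)  # _smm_max_size unit

print(per_session_cap_mb)  # 120.0
print(per_session_cap_kb)  # 122880
```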

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET       ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR      ADV  BYTES PROCESSED  BYTES RW  CACHE HIT  ALLOC COUNT
----------- ----------- ---  ---------------- --------- ---------- --------------
12582912    .50         ON   17250304         0         100.00     3
18874368    .75         ON   17250304         0         100.00     3
25165824    1.00        ON   17250304         0         100.00     0
30198784    1.20        ON   17250304         0         100.00     0
35231744    1.40        ON   17250304         0         100.00     0
40264704    1.60        ON   17250304         0         100.00     0
45297664    1.80        ON   17250304         0         100.00     0
50331648    2.00        ON   17250304         0         100.00     0
75497472    3.00        ON   17250304         0         100.00     0
100663296   4.00        ON   17250304         0         100.00     0
150994944   6.00        ON   17250304         0         100.00     0
201326592   8.00        ON   17250304         0         100.00     0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA, this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select
    max(pga_used_mem)  max_pga_used_mem,
    max(pga_alloc_mem) max_pga_alloc_mem,
    max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process


This displays the sum of all current PGA usage per process

select
    sum(pga_used_mem)  sum_pga_used_mem,
    sum(pga_alloc_mem) sum_pga_alloc_mem,
    sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the DB as sysdba.

2. SQL> show parameter audit_trail   -- checks if the audit trail is turned on

If the output is:

NAME         TYPE     VALUE
------------ -------- ------
audit_trail  string   DB

then go to step 3. Otherwise:

(a) shutdown immediate   -- to enable the audit trail
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to add the entry audit_trail=db
(c) create spfile from pfile;
(d) startup

3. truncate table aud$;   -- removes any audit trail data residing in the table

4. SQL> audit table;   -- starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
   from dba_audit_trail
   where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from where that username is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
temporary size 1000
iq page size 65536

system
temp              1000 MB
iq_system_main    2000 MB
iq_system_main2   1000 MB
iq_system_main3   5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
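The distinction can be modelled in a few lines; this sketch (day names and block numbers are invented) assumes a Sunday level 0 backup and shows what each strategy would copy on each following day:

```python
# Blocks changed on each day after the Sunday level 0 (full) backup.
changed_per_day = {
    "Mon": {1, 2},
    "Tue": {3},
    "Wed": {4, 5},
}

def differential_incrementals(days):
    """Differential incremental: only the changes since the LAST backup."""
    return {day: set(blocks) for day, blocks in days.items()}

def cumulative_incrementals(days):
    """Cumulative incremental: all changes since the last level 0 backup."""
    result, so_far = {}, set()
    for day, blocks in days.items():
        so_far |= blocks
        result[day] = set(so_far)
    return result

print(differential_incrementals(changed_per_day)["Wed"])  # {4, 5}
print(cumulative_incrementals(changed_per_day)["Wed"])    # {1, 2, 3, 4, 5}
```

Restoring from Wednesday's cumulative backup needs only that one piece, while the differential chain needs Monday, Tuesday, and Wednesday: exactly the space-versus-restore-work trade-off described above.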

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Why, I don't know, but I changed:

chown -R oracle:dba /var/tmp/.oracle

bootinfo -K

Commands for Solaris hardware details:

/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Query for find session details

select from v$sql where HASH_VALUE=(select ssql_hash_value from v$process pv$session s where spaddr=paddr and pspid=11270)

select sidnamevalue from v$statname nv$sesstat s where nSTATISTIC = sSTATISTIC and name like sessionmemoryorder by 3 ascselect susername sstatus ssid sserial pspid smachine sprocess from v$session s v$process p where pspid = 17883 and spaddr = paddr

SELECT units
  FROM v$sql, v$session_longops
 WHERE sql_address = address
   AND sql_hash_value = hash_value
 ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the OS needs patches or is misbehaving, they spin (wait) until the I/O operation completes. The spinning is a CPU operation. Slowness or failures in the async I/O operations show themselves like this. You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs you have on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does help the contention on your processors, but you take an overall performance hit after lowering it, you may need to add additional CPU to your server before increasing

this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ (QMN)
======================================
The AQ processes send and receive messages, mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date

An oracle (user) process
------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The 'CPU used by this session' statistic is given in 1/100ths of a second. E.g. a value of 22 means 0.22 seconds (in 8i).
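As a quick sanity check, the conversion is just a division by 100; a shell sketch (the value 22 is taken from the example above):

```shell
# 'CPU used by this session' is reported in centiseconds (1/100ths of a second)
centiseconds=22

# shell integer arithmetic cannot produce the fractional part, so use awk
seconds=$(awk -v c="$centiseconds" 'BEGIN { printf "%.2f", c / 100 }')
echo "$seconds seconds"   # 0.22 seconds
```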

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from 'CPU used by this session' (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value "CPU", se.username, se.program
  from v$sesstat ss, v$session se
 where ss.statistic# in (select statistic#
                           from v$statname
                          where name = 'CPU used by this session')
   and se.sid = ss.sid
   and ss.sid > 6
 order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
  from v$session_wait w, v$session s, v$process p, v$sqlarea q
 where s.paddr = p.addr
   and s.sid = &p
   and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
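The query works because raw addresses are rendered as hex, and each hex character encodes 4 bits. The same arithmetic in shell (the address string below is hypothetical):

```shell
# a v$process.addr value rendered as hex: 8 chars on 32-bit, 16 on 64-bit
addr="0000000012345678"        # hypothetical 16-character address

bits=$(( ${#addr} * 4 ))       # length of the hex string times 4 bits per char
echo "${bits}-bits"            # 64-bits
```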

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
  Oracle HTTP Server error_log file:

  Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log
  Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145: There is no oc4j process (for destination: home)
available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service
the request.

MOD_OC4J_0145: There is no oc4j process (for destination: home) available
to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service
the request.

MOD_OC4J_0207: In internal process table, failed to find an available
oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004 Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
propagate these changes into the central configuration repository:

Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user:  su - db2inst1
2. Go to the sqllib directory:  cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne wrote (Aug 26, 2007), re: no. of open cursors:

> how to resolve this if no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
  from v$open_cursor c
 group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
 order by 3 desc;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning

you may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.
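For illustration, a minimal TNSNAMES.ORA entry has this shape (the alias, host, and service name below are placeholders, not values from these notes):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
```

A client resolves the alias ORCL from this file and hands the descriptor to Net8 to open the session.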

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
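The difference described above can be demonstrated in a short shell session (file names are hypothetical; everything happens in a throwaway directory):

```shell
# work in a scratch directory
tmp=$(mktemp -d) && cd "$tmp"

echo "hello" > original.txt
ln -s original.txt sym.txt    # symbolic link: a pointer to the *name*
ln original.txt hard.txt      # hard link: another name for the same inode

cat sym.txt                   # hello (follows the pointer)
cat hard.txt                  # hello (same inode)

rm original.txt
cat hard.txt                  # hello - the data survives through the hard link
cat sym.txt 2>/dev/null || echo "dangling symlink"
```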

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same (note that most filesystems do not actually permit hard links to directories). To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query is stored in the spool file objects_move.log.

> @<urpath>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
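As a rough illustration of the 5% guideline (the 2400 MB target matches the 'aggregate PGA target parameter' value shown in the v$pgastat listing elsewhere in these notes; the actual per-session bound also depends on _smm_max_size):

```shell
# pga_aggregate_target of 2400 MB, expressed in bytes
pga_aggregate_target=$((2400 * 1024 * 1024))

# a single serial session is normally bounded to roughly 5% of the target
per_session=$((pga_aggregate_target * 5 / 100))
echo "$per_session bytes"     # 125829120 bytes (~120 MB)
```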

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
  from v$pgastat
 order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET       PGA TARGET          ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST          FACTOR     ADV      BYTES PROCESSED  BYTES RW       CACHE HIT      ALLOC COUNT
---------------- ---------- --- ---------------- ---------------- ------------- --------------
        12582912        0.5 ON          17250304                0        100.00              3
        18874368       0.75 ON          17250304                0        100.00              3
        25165824        1.0 ON          17250304                0        100.00              0
        30198784        1.2 ON          17250304                0        100.00              0
        35231744        1.4 ON          17250304                0        100.00              0
        40264704        1.6 ON          17250304                0        100.00              0
        45297664        1.8 ON          17250304                0        100.00              0
        50331648        2.0 ON          17250304                0        100.00              0
        75497472        3.0 ON          17250304                0        100.00              0
       100663296        4.0 ON          17250304                0        100.00              0
       150994944        6.0 ON          17250304                0        100.00              0
       201326592        8.0 ON          17250304                0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select max(pga_used_mem)  max_pga_used_mem,
       max(pga_alloc_mem) max_pga_alloc_mem,
       max(pga_max_mem)   max_pga_max_mem
  from v$process;

This will show the maximum PGA usage per process.

This displays the sum of all current PGA usage per process:

select sum(pga_used_mem)  sum_pga_used_mem,
       sum(pga_alloc_mem) sum_pga_alloc_mem,
       sum(pga_max_mem)   sum_pga_max_mem
  from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail  --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
2(a) shutdown immediate  [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;  --> removes any audit-trail data residing in the table

4. SQL> audit table;  --> starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  --> this query gives you the username along with the userhost from where the username is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups: 1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups. I tried to explain things in a very simple way. I am not able to find anything I am missing.

If there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
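The /etc/system settings above are Solaris-specific. On Linux the equivalent limits and current usage can be inspected as sketched below (paths and tools are Linux assumptions, not from the original post):

```shell
# Kernel semaphore limits on Linux: semmsl semmns semopm semmni
if [ -r /proc/sys/kernel/sem ]; then
    cat /proc/sys/kernel/sem
else
    echo "not Linux: check /etc/system (Solaris) instead"
fi

# Count semaphore sets currently allocated, if ipcs is available
if command -v ipcs >/dev/null 2>&1; then
    ipcs -s | grep -c '^0x' || true
fi
```

Comparing the allocated-set count against semmni shows how close the system is to the ORA-27300 condition.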


this parameter back to the way that you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note:97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent LGWR from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for LGWR but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The SNP processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when the CPU utilization is on the rise. Even on their own they consume a fair amount of CPU, because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled:

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use Resource Manager in 8.1.7: <Bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables, or some bug:

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU (when combined with replication)

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An Oracle (user) process (Back)
-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds, in 8i.

Other statistics can be found via CONSUMED_CPU_TIME of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little bit from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of command, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;
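The same question can be answered from the operating system side. A sketch (getconf reports the OS word length; for the Oracle executable itself, "file $ORACLE_HOME/bin/oracle" gives the analogous answer - that path is illustrative):

```shell
# Word length of the OS userland: prints 64 or 32.
getconf LONG_BIT
```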

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1 Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)
- Oracle Application Server 10g (9.0.4.x)
- Oracle9iAS Release 2 (9.0.3.x)
- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)
  or
  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at \WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

   - This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user:
   su - db2inst1

2. Go to the sqllib directory:
   cd sqllib

3. Stop the instance:
   $ db2stop

4. Start an instance. As an instance owner on the host running db2, issue the following command:
   $ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne Posts 4016 Registered 52799

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to: 174313


> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified, using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile;
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
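The difference is easy to demonstrate in a throwaway temp directory (a sketch, safe to run anywhere):

```shell
set -e
tmp=$(mktemp -d)
echo hello > "$tmp/orig"

ln "$tmp/orig" "$tmp/hard"     # hard link: another name for the same inode
ln -s orig "$tmp/soft"         # symbolic link: a pointer to the name "orig"

rm "$tmp/orig"                 # delete the original name

cat "$tmp/hard"                # prints: hello (data survives via the hard link)
cat "$tmp/soft" 2>/dev/null || echo "dangling symlink"
rm -rf "$tmp"
```

After the original is removed, the hard link still reaches the data, while the symlink is left dangling, exactly as described above.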

The syntax for creating a hard link of a directory is the same (note that many filesystems do not actually permit hard links to directories). To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

> spool <your_path>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<your_path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes, and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session (<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
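As a back-of-the-envelope check of that 5% ceiling (the 2.4 GB target below is a hypothetical figure, chosen only to roughly match the v$pgastat output later in this section):

```shell
# Hypothetical pga_aggregate_target of ~2.4 GB, expressed in KB
pga_target_kb=$((2400 * 1024))

# ~5% per-session ceiling (the value _smm_max_size would hold, in KB)
per_session_kb=$((pga_target_kb * 5 / 100))

echo "${per_session_kb} KB (~$((per_session_kb / 1024)) MB) per session"
```

This prints 122880 KB (~120 MB), a quick sanity check before trusting a single session with a large sort.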

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912     50 ON     17250304     0     100.00     3

18874368     75 ON     17250304     0     100.00     3

25165824    100 ON     17250304     0     100.00     0

30198784    120 ON     17250304     0     100.00     0

35231744    140 ON     17250304     0     100.00     0

40264704    160 ON     17250304     0     100.00     0

45297664    180 ON     17250304     0     100.00     0

50331648    200 ON     17250304     0     100.00     0

75497472    300 ON     17250304     0     100.00     0

100663296   400 ON     17250304     0     100.00     0

150994944   600 ON     17250304     0     100.00     0

201326592   800 ON     17250304     0     100.00     0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.


This displays the sum of all current PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail  -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else (to enable the audit trail):
(a) shutdown immediate
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;  -- to remove any audit trail data residing in the table

4. SQL> audit table;  -- this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from where the user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp            1000MB
iq_system_main  2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on, until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
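In RMAN syntax, the two flavours map onto the LEVEL and CUMULATIVE keywords. A minimal sketch, entered at the RMAN prompt (not sqlplus); level numbers follow the explanation above:

```text
BACKUP INCREMENTAL LEVEL 0 DATABASE;             # base level 0 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential: blocks changed since the last level 1 or 0
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative: blocks changed since the last level 0
```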

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU when combined with replication

<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best approach is to keep the version and patches up to date.

An oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management, and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way of finding a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic with the difference.

SQL> select name from v$statname
  2  where statistic# = 12;

NAME
---------------------------------
CPU used by this session

The "CPU used by this session" statistic is given in 1/100ths of a second, e.g. a value of 22 means 0.22 seconds (in 8i).

Other statistics can be found in the CONSUMED_CPU_TIME column of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in sql_trace (the 10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what the sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in
      (select statistic# from v$statname where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the Reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;

FILE_NAME                                                     AUT
------------------------------------------------------------  ---
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf'   YES
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf'   YES
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf'    YES
Alter database datafile '/oracle/CDOi1/data/users02.dbf'      YES
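The trailing YES values look like the AUTOEXTENSIBLE flag from spooled query output; a query of roughly this shape (a sketch, not taken from the note) could have generated those statements:

```sql
select 'Alter database datafile ''' || file_name || ''' autoextend off;' as stmt,
       autoextensible as aut
  from dba_data_files
 where autoextensible = 'YES';
```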

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1 Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence, as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service the request.

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service the request.

MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll:

-> Right click on the file
-> Select Properties
-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that have exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
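The three OFFLINE options described above look like this in SQL (the tablespace name is illustrative):

```sql
alter tablespace users offline normal;     -- checkpoints all datafiles; no recovery needed later
alter tablespace users offline temporary;  -- checkpoints online datafiles only
alter tablespace users offline immediate;  -- no checkpoint; media recovery required before onlining
alter tablespace users online;
```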

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Login as the db2 user:
$ su - db2inst1

2. Go to the sqllib directory:
$ cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne Posts 4016 Registered 52799

Re: no of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors and using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning

you may first start by checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file containing the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.
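A minimal illustrative sqlnet.ora, using documented Net8 profile parameters (the values shown here are examples, not taken from this note):

```text
# sqlnet.ora - Net8 profile (illustrative values)
NAMES.DIRECTORY_PATH = (TNSNAMES, HOSTNAME)   # name-resolution order
SQLNET.EXPIRE_TIME = 10                       # dead-connection probe interval, minutes
TRACE_LEVEL_CLIENT = OFF                      # client-side Net8 tracing
```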

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from backupsybasectsintcocso6csoasecso_ot
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus
stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same (note, though, that most filesystems refuse hard links to directories). To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot


If you want to move all the objects to another tablespace, just do the following:

> spool <urpath>objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

rebuild the indexes

and gather statistics for those objects

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in sqlplus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3 Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced: Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0.0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0.0                bytes
total PGA used for manual workareas      0.0                bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET  PGA TARGET      ESTIMATED EXTRA   ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR     ADV  BYTES PROCESSED   BYTES RW       CACHE HIT      ALLOC COUNT
----------- ---------- ---  ----------------  -------------  -------------  --------------
12582912    .50        ON   17250304          0              100.00         3
18874368    .75        ON   17250304          0              100.00         3
25165824    1.00       ON   17250304          0              100.00         0
30198784    1.20       ON   17250304          0              100.00         0
35231744    1.40       ON   17250304          0              100.00         0
40264704    1.60       ON   17250304          0              100.00         0
45297664    1.80       ON   17250304          0              100.00         0
50331648    2.00       ON   17250304          0              100.00         0
75497472    3.00       ON   17250304          0              100.00         0
100663296   4.00       ON   17250304          0              100.00         0
150994944   6.00       ON   17250304          0              100.00         0
201326592   8.00       ON   17250304          0              100.00         0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
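Acting on the advice output might look like the following sketch (25165824 bytes = 24M, the smallest estimate in the output with no over-allocations; scope=both assumes the instance uses an spfile):

```sql
alter system set pga_aggregate_target = 24M scope=both;

-- confirm the new value
show parameter pga_aggregate_target
```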

There are other views that are also useful for PGA memory management

v$process

This will show the maximum PGA usage per process:

select
max(pga_used_mem) max_pga_used_mem,
max(pga_alloc_mem) max_pga_alloc_mem,
max(pga_max_mem) max_pga_max_mem
from v$process;

This displays the sum of all current PGA usage per process

select
sum(pga_used_mem) sum_pga_used_mem,
sum(pga_alloc_mem) sum_pga_alloc_mem,
sum(pga_max_mem) sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
  iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
  message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
  temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
  iq page size 65536

Dbspaces:

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
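As a sketch, the two flavors map onto RMAN commands along these lines (assuming a configured RMAN environment; differential is the default when neither keyword is given):

```text
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             # base level 0 backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential level 1
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # cumulative level 1
```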

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing. If there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


from v$statname where name = 'CPU used by this session') and se.sid = ss.sid and ss.sid > 6 order by ss.sid;

For the values of COMMAND, please look at the definition of V$SESSION in the reference manual.

To find out what SQL the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle binary is 32-bit or 64-bit:

SELECT LENGTH(addr) * 4 || '-bits' word_length FROM v$process WHERE ROWNUM = 1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note 269980.1    Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE
-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet
Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope
-----

This note may apply if you have recently applied Microsoft Internet Explorer
browser patches.

Symptoms
--------

- You are seeing the following possible sequences of MOD_OC4J errors in the
Oracle HTTP Server error_log file:

Unix:    $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: ORACLE_HOME\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive and other sequences may be possible

The following is one example sequence as seen in a log file

MOD_OC4J_0145: There is no oc4j process (for destination: home)
available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service
the request.

MOD_OC4J_0145: There is no oc4j process (for destination: home) available
to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service
the request.

MOD_OC4J_0207: In internal process table, failed to find an available
oc4j process for destination: home.

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer
  5.x and 6.x.

- The client machines will have a wininet.dll with a version number of
  6.00.2800.1405. To identify this:

  Use Windows Explorer to locate the file at WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167
  for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
client's open connections that exceeded their allowed HTTP 1.1 KeepAlive
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:

Unix:    $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: ORACLE_HOME\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to
propagate these changes into the central configuration repository:

   Unix:    $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

   Windows: ORACLE_HOME\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists, with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2: stopping and starting an instance:

1. Log in as the db2 user: su - db2inst1

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.
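To see how close sessions actually come to OPEN_CURSORS, a query along these lines against the standard v$ views can help (a sketch; run as a suitably privileged user):

```sql
-- Current open-cursor count per session, highest first
select s.sid, s.username, a.value as open_cursors
from   v$sesstat a, v$statname b, v$session s
where  a.statistic# = b.statistic#
and    s.sid        = a.sid
and    b.name       = 'opened cursors current'
order  by a.value desc;
```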

Werner

Billy Verreynne   Posts: 4016   Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
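For example (a sketch; the address and hash_value shown are hypothetical placeholders that would come from the query above):

```sql
-- Reassemble the full statement text for one leaked cursor
select sql_text
from   v$sqltext
where  address    = '07000001A5BCD123'  -- hypothetical address from v$open_cursor
and    hash_value = 1234567890          -- hypothetical hash_value from v$open_cursor
order  by piece;
```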

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters which specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.
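As an illustration, a minimal sqlnet.ora might contain nothing more than a couple of lines like these (the values are examples only, not a recommendation):

```text
# sqlnet.ora -- client naming preferences (illustrative values)
NAMES.DIRECTORY_PATH = (TNSNAMES, HOSTNAME)
SQLNET.EXPIRE_TIME   = 10
```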

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
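Such a logon trigger might be sketched as follows (the IMP_USER account name and the 100MB sort area are assumptions for illustration, not values from this document):

```sql
-- Switch a hypothetical import account to manual workarea management at logon
create or replace trigger imp_user_logon
after logon on database
begin
  if user = 'IMP_USER' then  -- hypothetical import account
    execute immediate 'alter session set workarea_size_policy = manual';
    execute immediate 'alter session set sort_area_size = 104857600';  -- 100MB
  end if;
end;
/
```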

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED EXTRA  ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR     ADV BYTES PROCESSED  BYTES RW      CACHE HIT %   ALLOC COUNT
---------- ---------- --- ---------------- ------------- ------------- --------------
12582912   0.5        ON  17250304         0             100.00        3
18874368   0.75       ON  17250304         0             100.00        3
25165824   1.0        ON  17250304         0             100.00        0
30198784   1.2        ON  17250304         0             100.00        0
35231744   1.4        ON  17250304         0             100.00        0
40264704   1.6        ON  17250304         0             100.00        0
45297664   1.8        ON  17250304         0             100.00        0
50331648   2.0        ON  17250304         0             100.00        0
75497472   3.0        ON  17250304         0             100.00        0
100663296  4.0        ON  17250304         0             100.00        0
150994944  6.0        ON  17250304         0             100.00        0
201326592  8.0        ON  17250304         0             100.00        0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
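Adjusting the target later is a single dynamic command, for example (the 2G figure is an arbitrary illustration, not a recommendation):

```sql
-- Raise the target; Oracle re-balances workareas automatically
alter system set pga_aggregate_target = 2G scope = both;
```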


Note that the error message is linked to semget You seem to have run out of semaphores You configure the max number of semphores in etcsystem

set semsysseminfo_semmni=100 set semsysseminfo_semmns=1024 set semsysseminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 46: 7101564-Daily-Work

Alter database datafile '/oracle/CDOi1/data/users02.dbf' YES

Subject: MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM

Last Revision Date: 08-FEB-2007

Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by a possible Microsoft Internet

Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer

browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the

Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(b) MOD_OC4J_0015 MOD_OC4J_0078

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013 MOD_OC4J_0207

(c) MOD_OC4J_0145 MOD_OC4J_0119 MOD_OC4J_0013

MOD_OC4J_0080 MOD_OC4J_0058 MOD_OC4J_0035

(d) MOD_OC4J_0121 MOD_OC4J_0013 MOD_OC4J_0080 MOD_OC4J_0058

The above list is not definitive, and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service the request.

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service the request.

MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home.

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004: Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer

5.x and 6.x.

- The client machines will have a wininet.dll with a version number of

6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at WINNT\system32\wininet.dll

-> Right click on the file

-> Select Properties

-> Click on the Version tab

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)

Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are

resubmitted, which can occur when the HTTP server terminates the browser

client's open connections that exceeded their allowed HTTP 1.1 KeepAlive

idle time. In these cases the requests are resubmitted by the browser without

the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to

the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive

timeout by restarting the HTTP Server component after making the following

configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv

# KeepAlive On

KeepAlive Off

# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to

propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.
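On Unix, the edit in steps 1-2 can be scripted. The following is only a sketch against a throwaway copy of the file (the sample conf contents and temp-file path are illustrative, not from the note; a real install would target $ORACLE_HOME/Apache/Apache/conf/httpd.conf):

```shell
# Sketch: flip "KeepAlive On" to "KeepAlive Off" in a scratch httpd.conf copy.
set -e
conf=$(mktemp)
cat > "$conf" <<'EOF'
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
EOF
# Step 2 of the note (marker comments omitted for brevity); .bak keeps a backup.
sed -i.bak 's/^KeepAlive On$/KeepAlive Off/' "$conf"
grep "^KeepAlive" "$conf"    # shows: KeepAlive Off
# On a real install you would then propagate the change:
#   $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
```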

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators

. exporting referential integrity constraints

. exporting triggers

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how can I solve this?

QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name

FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as the SYS user is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

1. Log in as the db2 user: su - db2inst1; bash

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance:

As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne - Posts: 4,016 - Registered: 5/27/99

Re: no. of open cursors - Posted: Aug 26, 2007 10:33 PM, in response to: 174313

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS,

V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network

services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the

server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using

Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db' from backupsybasectsintcocso6csoasecso_ot
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
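The behavior described above can be seen directly. This is a sketch in a scratch temporary directory; the file names are made up:

```shell
# Demonstrate symbolic vs hard link behavior in a scratch directory.
set -e
d=$(mktemp -d)
cd "$d"
echo "original data" > stuff
ln -s stuff stuff.sym    # symbolic link: a pointer to the name "stuff"
ln stuff stuff.hard      # hard link: another name for the same inode
rm stuff                 # delete the original name
cat stuff.hard           # prints: original data  (data survives via the hard link)
cat stuff.sym 2>/dev/null || echo "dangling symlink"   # the symlink now points at nothing
```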

The syntax for creating a hard link of a directory is the same (note that most filesystems do not actually permit hard links to directories). To create a hard link of

/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

To generate move statements for all segments in a tablespace:

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<ur path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes

and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

> spool <ur path>/objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<ur path>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes

and gather statistics for those objects.
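The generate-then-run idea above (write one statement per object into a spool file, then execute the file) is a general pattern. Here is a minimal shell sketch of the same pattern with hypothetical segment names; note that for indexes the real clause would be ALTER INDEX ... REBUILD TABLESPACE, which is why the doc says to rebuild indexes afterward:

```shell
# Generate "move" statements for a made-up list of segments, spool-style.
set -e
for seg in "table EMP" "table DEPT"; do
  set -- $seg                                   # split into segment type and name
  printf "alter %s %s move tablespace XYZ;\n" "$1" "$2"
done > objects_move.log
cat objects_move.log   # one ALTER statement per segment
```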

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do

everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set

to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are

granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR ADV BYTES PROCESSED BYTES RW   CACHE HIT     ALLOC COUNT
---------- ------ --- --------------- ---------- ------------- --------------
12582912   0.5    ON  17250304        0          100.00        3
18874368   0.75   ON  17250304        0          100.00        3
25165824   1      ON  17250304        0          100.00        0
30198784   1.2    ON  17250304        0          100.00        0
35231744   1.4    ON  17250304        0          100.00        0
40264704   1.6    ON  17250304        0          100.00        0
45297664   1.8    ON  17250304        0          100.00        0
50331648   2      ON  17250304        0          100.00        0
75497472   3      ON  17250304        0          100.00        0
100663296  4      ON  17250304        0          100.00        0
150994944  6      ON  17250304        0          100.00        0
201326592  8      ON  17250304        0          100.00        0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem) max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem) max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the total current PGA usage across all processes:

select
  sum(pga_used_mem) sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem) sum_pga_max_mem
from v$process;
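To see why the max() and sum() queries answer different questions, here is a toy recreation with made-up per-process numbers, awk standing in for the SQL aggregates:

```shell
# Toy stand-in for v$process: one row per process, pga_used_mem in column 2.
set -e
cat > procs.txt <<'EOF'
ora_pmon 100
ora_dbw0 300
ora_lgwr 200
EOF
# max() answers "which single process uses the most PGA";
# sum() answers "how much PGA is in use overall".
result=$(awk '{sum += $2; if ($2 > max) max = $2} END {print "sum=" sum, "max=" max}' procs.txt)
echo "$result"   # sum=600 max=300
```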

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail  -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:

2(a) shutdown immediate  -- to enable the audit trail
2(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
2(c) create spfile from pfile
2(d) startup

3. truncate table aud$  -- to remove any audit trail data residing in the table

4. SQL> audit table  -- this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  -- this query gives you the username, along with the userhost from where the user was connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB

iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do

more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes

called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
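The distinction can be mimicked with plain files and timestamps. In this sketch (made-up file names, with find -newer standing in for RMAN's changed-block tracking), the cumulative selection picks up everything since the level 0 backup, while the differential selection only picks up changes since the last backup of any level:

```shell
# Mimic incremental backup selection with mtime markers.
set -e
d=$(mktemp -d)
cd "$d"
mkdir data
echo a > data/a.txt        # exists before the level 0 backup
touch level0.marker        # Sunday: level 0 (full) backup
sleep 1
echo b > data/b.txt        # changed Monday
touch level1.marker        # Monday: level 1 incremental backup
sleep 1
echo c > data/c.txt        # changed Tuesday
# Cumulative: everything changed since the most recent level 0 backup
cumulative=$(find data -type f -newer level0.marker | sort | xargs)
# Differential: only what changed since the most recent backup at level 1 or 0
differential=$(find data -type f -newer level1.marker | sort | xargs)
echo "cumulative:   $cumulative"     # data/b.txt data/c.txt
echo "differential: $differential"   # data/c.txt
```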

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear for me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Oracle HTTP Server error_log file:

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: ORACLE_HOME\Apache\Apache\logs\error_log

(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013
    MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request
MOD_OC4J_0119: Failed to get an oc4j process for destination: home
MOD_OC4J_0013: Failed to call destination: home's service() to service the request
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home

Changes
-------

- The problem may be introduced by applying the following Microsoft patches:

  o Microsoft 832894 security update
    (MS04-004: Cumulative security update for Internet Explorer)

  or

  o Microsoft 821814 hotfix

- It may be seen only with certain browsers, such as Internet Explorer 5.x and 6.x.

- The client machines will have a wininet.dll with a version number of 6.0.2800.1405. To identify this:

  Use Windows Explorer to locate the file at \WINNT\system32\wininet.dll
  -> Right click on the file
  -> Select Properties
  -> Click on the Version tab

  (see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167 for further details)

Cause
-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates the browser client's open connections that exceeded their allowed HTTP 1.1 KeepAlive idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix
---

It is possible to address this issue by applying Microsoft patches to the client systems where the browser is running.

As a more viable workaround, it should be possible to disable the KeepAlive timeout by restarting the HTTP Server component after making the following configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: ORACLE_HOME\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

   KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

   # vvv Oracle Note 269980.1 vvvvvvv
   # KeepAlive On
   KeepAlive Off
   # ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

   Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
   Windows: ORACLE_HOME\dcm\bin\dcmctl updateConfig -co ohs -v -d

   This step is not needed if the changes are made via Enterprise Manager.

References
----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

exporting operators
exporting referential integrity constraints
exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2:

1. Log in as the db2 user: su - db2inst1 (bash)
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start an instance. As an instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error:

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne    Posts: 4,016    Registered: 5/27/99
Re: no of open cursor    Posted: Aug 26, 2007 10:33 PM    in response to: 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors. I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors. I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 4 desc;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
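The grouping logic of that query can be sketched outside the database. This is a hypothetical illustration: the session/SQL pairs are made-up stand-ins for rows of v$open_cursor, not real data.

```python
from collections import Counter

# Hypothetical open-cursor handles as (sid, sql_address) pairs, mimicking
# rows in v$open_cursor. Session 17 has leaked many handles for one statement.
open_cursors = (
    [(17, "0xA1")] * 5      # session 17: five handles for the same SQL -> leak suspect
    + [(17, "0xB2")]        # session 17: one handle for another statement
    + [(23, "0xA1")] * 2    # session 23: two handles -> normal
)

# Count cursor copies per (sid, address), like GROUP BY c.sid, c.address
copies = Counter(open_cursors)

# Keep only groups with more than 2 handles, like HAVING COUNT(*) > 2
leaks = {key: n for key, n in copies.items() if n > 2}

print(leaks)  # {(17, '0xA1'): 5}
```

Only the (session, statement) pairs with more than two handles survive, which is exactly the set the HAVING clause isolates.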

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google:

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

Restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db' from '/backup/sybasects/intco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sys.sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only an additional reference to the original file, not a copy of the file. The data remains accessible through the hard link even if the original name is deleted; it is only lost once every hard link to it has been removed.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
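The difference is easy to demonstrate. This sketch uses Python's os.link/os.symlink in a throwaway temporary directory (instead of the ln command) to show that data survives through a hard link after the original name is removed, while a symbolic link is left dangling:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "stuff")
    hard = os.path.join(d, "thing")       # will be a hard link
    soft = os.path.join(d, "stuff.sym")   # will be a symbolic link

    with open(original, "w") as f:
        f.write("important data")

    os.link(original, hard)      # hard link: a second name for the same inode
    os.symlink(original, soft)   # symbolic link: a pointer to the name 'stuff'

    os.remove(original)          # delete the original name

    # The data is still reachable through the hard link...
    with open(hard) as f:
        survived = f.read()

    # ...but the symbolic link now dangles: it is still a link,
    # yet its target name no longer exists.
    dangling = os.path.islink(soft) and not os.path.exists(soft)

print(survived, dangling)  # important data True
```

The same experiment with ln/ln -s and rm at a shell prompt behaves identically.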

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most modern filesystems do not permit hard links to directories for ordinary users, so this command may fail.)

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log. Run it:

SQL> @<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SQL> SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.
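The spool step above is really just string-building, so the same statements can be generated anywhere. A sketch with a hypothetical segment list standing in for rows from dba_segments:

```python
# Hypothetical (segment_type, segment_name) rows, as dba_segments would
# return them for tablespace RAKESH. Note: after moving tables, their
# indexes become UNUSABLE and must be rebuilt (alter index ... rebuild).
segments = [("TABLE", "EMP"), ("TABLE", "DEPT")]

target = "XYZ"

# Build one ALTER ... MOVE statement per segment, like the || concatenation
# in the spooled SELECT.
statements = [
    f"alter {seg_type.lower()} {seg_name} move tablespace {target};"
    for seg_type, seg_name in segments
]

for stmt in statements:
    print(stmt)
# alter table EMP move tablespace XYZ;
# alter table DEPT move tablespace XYZ;
```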

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.
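As a rough back-of-the-envelope illustration of that 5% figure (the actual per-session limit is governed by the undocumented _smm_max_size and may differ, so treat this purely as a sketch):

```python
# Hypothetical arithmetic: approximate per-session work-area cap if a
# session may use ~5% of pga_aggregate_target. This is an approximation
# of undocumented behavior, not a formula Oracle publishes.
pga_aggregate_target = 2516582400        # bytes (~2.4 GB target)

per_session_cap = int(pga_aggregate_target * 0.05)   # ~5% of the target
print(per_session_cap)        # 125829120 bytes, i.e. ~120 MB

# _smm_max_size is expressed in kilobytes (a value of 1000 means 1000k),
# so the same cap expressed the way the parameter would state it:
smm_max_size_kb = per_session_cap // 1024
print(smm_max_size_kb)        # 122880
```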

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting. If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                         829440000 bytes
aggregate PGA target parameter                   2516582400 bytes
bytes processed                                  2492928000 bytes
cache hit percentage                                  86.31 percent
extra bytes read/written                          395366400 bytes
global memory bound                               125747200 bytes
maximum PGA allocated                            2666188800 bytes
maximum PGA used for auto workareas                17203200 bytes
maximum PGA used for manual workareas              52531200 bytes
over allocation count                                     0
PGA memory freed back to OS                       675020800 bytes
total freeable PGA memory                           6553600 bytes
total PGA allocated                              2395750400 bytes
total PGA inuse                                  1528320000 bytes
total PGA used for auto workareas                         0 bytes
total PGA used for manual workareas                       0 bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED EXTRA  ESTIMATED PGA ESTIMATED OVER
FOR EST        FACTOR ADV BYTES PROCESSED  BYTES RW      CACHE HIT     ALLOC COUNT
---------- ---------- --- ---------------- ------------- ------------- --------------
  12582912        .50 ON          17250304             0        100.00              3
  18874368        .75 ON          17250304             0        100.00              3
  25165824       1.00 ON          17250304             0        100.00              0
  30198784       1.20 ON          17250304             0        100.00              0
  35231744       1.40 ON          17250304             0        100.00              0
  40264704       1.60 ON          17250304             0        100.00              0
  45297664       1.80 ON          17250304             0        100.00              0
  50331648       2.00 ON          17250304             0        100.00              0
  75497472       3.00 ON          17250304             0        100.00              0
 100663296       4.00 ON          17250304             0        100.00              0
 150994944       6.00 ON          17250304             0        100.00              0
 201326592       8.00 ON          17250304             0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
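That reading of the advice view can be automated: pick the smallest candidate target whose estimated over-allocation count is zero. A sketch using the figures from the query output (the helper function is hypothetical, not an Oracle API):

```python
# (pga_target_for_estimate, estimated_overalloc_count) pairs taken from a
# v$pga_target_advice result set; the other columns are omitted here.
advice = [
    (12582912, 3), (18874368, 3), (25165824, 0), (30198784, 0),
    (35231744, 0), (50331648, 0),
]

def smallest_safe_target(rows):
    """Smallest candidate pga_aggregate_target with no estimated over-allocation."""
    safe = [target for target, overalloc in rows if overalloc == 0]
    return min(safe) if safe else None

print(smallest_safe_target(advice))  # 25165824, i.e. the 25M setting
```

In practice you would still round up for headroom, since Oracle will exceed the target when it must.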

There are other views that are also useful for PGA memory management.

v$process

This will show the maximum PGA usage per process:

select
    max(pga_used_mem)  max_pga_used_mem,
    max(pga_alloc_mem) max_pga_alloc_mem,
    max(pga_max_mem)   max_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes:

select
    sum(pga_used_mem)  sum_pga_used_mem,
    sum(pga_alloc_mem) sum_pga_alloc_mem,
    sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issues a drop table command in a database:

1. Log into the db as sysdba.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3, else:

2(a) shutdown immediate    [to enable the audit trail]
2(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
2(c) create spfile from pfile
2(d) startup

3. SQL> truncate table aud$;    --> to remove any audit trail data residing in the table
   SQL> audit table;            --> this starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%';
   --> this query gives you the username along with the userhost from where the user is connected

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQ\dbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQ\dbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
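The distinction can be sketched with a toy model: given the day each block was last modified, a differential level 1 copies blocks changed since the most recent level 1 (or level 0), while a cumulative level 1 copies everything changed since the level 0. This illustrates the definitions above; it is not RMAN's actual block-tracking mechanism.

```python
# Toy timeline: level 0 backup on day 0, level 1 backups on days 1 and 2,
# and we now take another level 1 backup on day 3.
# block_changed_day maps block id -> day the block was last modified.
block_changed_day = {"b1": 0, "b2": 1, "b3": 2, "b4": 3}

def blocks_to_back_up(changed, since_day):
    """Blocks modified after the reference backup taken on 'since_day'."""
    return sorted(b for b, day in changed.items() if day > since_day)

# Differential level 1 on day 3: reference is the most recent level 1 (day 2).
differential = blocks_to_back_up(block_changed_day, since_day=2)

# Cumulative level 1 on day 3: reference is the level 0 (day 0), so it
# re-copies blocks already captured by the day-1 and day-2 level 1 backups.
cumulative = blocks_to_back_up(block_changed_day, since_day=0)

print(differential)  # ['b4']
print(cumulative)    # ['b2', 'b3', 'b4']
```

The cumulative list duplicates earlier level 1 work (b2, b3), which is exactly why it costs more space and time but makes a restore need only one level 1 backup.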

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
> > ORA-27300: OS system dependent operation:semget failed with status: 28
> > ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


to service request

MOD_OC4J_0119 Failed to get an oc4j process for destination home

MOD_OC4J_0013 Failed to call destination homes service() to service

the request

MOD_OC4J_0207 In internal process table failed to find an available

oc4j process for destination home

Changes

shyshyshyshyshyshyshy

shy The problem may be introduced by apply following Microsoft patches

o Microsoft 832894 security update

(MS04shy004 Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

shy It may be seen only with certain browsers such as Internet Explorer

5x and 6x

shy The client machines will have a wininetdll with a version number of

6028001405 To identify this

Use Windows Explorer to locate the file at WINNTsystem32wininetdll

shygt Right click on the file

shygt Select Properties

shygt click on the Version tab

(see httpsupportmicrosoftcomdefaultaspxscid=kbenshyus831167

for further details)

Cause

shyshyshyshyshy

This Windows bug causes a change in behavior when HTTP POST requests are

resubmitted which can occur when the HTTP server terminates the browser

clients open connections that exceeded their allowed HTTP 11 KeepAlive

idle time In these cases the requests are resubmitted by the browser without

the needed HTTP headers

Fix

shyshyshy

It is possible to address this issue by applying Microsoft patches to

the client systems where the browser is running

As a more viable workaround it should be possible to disable the KeepAlive

timeout by restarting the HTTP Server component after making the following

configuration changes to httpdconf

Unix $ORACLE_HOMEApacheApacheconfhttpdconf

Windows ORACLE_HOMEApacheApacheconfhttpdconf

1 Locate the KeepAlive directive in httpdconf

KeepAlive On

2 Replace the KeepAlive directive in httpdconf with

vvv Oracle Note 2699801 vvvvvvv

KeepAlive On

KeepAlive Off

^^^ Oracle Note 2699801 ^^^^^^^

3 If you are making this change manually please run following command to

propagate these changes into the central configuration repository

Unix $ORACLE_HOMEdcmbindcmctl updateConfig shyco ohs shyv shyd

Windows ORACLE_HOMEdcmbindcmctl updateConfig shyco ohs shyv shyd

shy This step is not needed if the changes are mande via Enterprise Manager

References

shyshyshyshyshyshyshyshyshyshy

httpsupportmicrosoftcomdefaultaspxscid=kbenshyus831167

Checked for relevancy 282007

Activate your FREE membership today

Expert Answer Center gt Expert Knowledgebase gt View Answer

Expert Knowledgebase

EXPERT KNOWLEDGEBASE HOME

RSS FEEDS

I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE

prtconf

DB2 - stopping and starting an instance:

1. Log in as the db2 user: su - db2inst1

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error:

SET SERVEROUTPUT ON SIZE 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to 174313

Reply

> How to resolve this if the no. of open cursors exceeds the value given in init.ora?

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
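The same group-and-count logic, applied to a handful of invented v$open_cursor rows in Python, shows how a leaking session stands out:

```python
from collections import Counter

# Invented rows standing in for v$open_cursor: (sid, address, hash_value)
open_cursors = [
    (101, "0x1A", 555), (101, "0x1A", 555), (101, "0x1A", 555),  # leaked handles
    (102, "0x2B", 777), (102, "0x2B", 777),
    (103, "0x3C", 999),
]

copies = Counter(open_cursors)
# Flag any (sid, address, hash_value) with more than 2 handles open,
# mirroring the HAVING COUNT(*) > 2 clause above
leaks = {key: n for key, n in copies.items() if n > 2}
print(leaks)  # {(101, '0x1A', 555): 3}
```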

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS,
V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

restore database '/sybdata1/syb126IQ/cso_ot/cso_ot.db'
from 'backupsybasectsintcocso6csoasecso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus
stop_asiq

restore database '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:
existing file first, destination file second. For example, to link the directory
/export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file that appears just like a file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only another reference to the original file, not a copy of the file. If the original file is deleted, the data is not lost: it remains accessible through the hard link until all links are removed.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of
/var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Most filesystems do not permit hard links to directories, so this command will normally fail with an error.)

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz' from dba_segments where tablespace_name='RAKESH';
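The deletion behavior described above is easy to verify with Python's os module (scratch files in a temporary directory; POSIX link semantics assumed):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    orig = os.path.join(d, "stuff")
    with open(orig, "w") as f:
        f.write("data")

    soft = os.path.join(d, "soft")
    hard = os.path.join(d, "hard")
    os.symlink(orig, soft)  # symbolic link: a pointer to the path
    os.link(orig, hard)     # hard link: another name for the same inode

    os.remove(orig)         # delete the original name
    print(os.path.exists(soft))  # False: the symlink now dangles
    print(os.path.exists(hard))  # True: the data survives via the hard link
    with open(hard) as f:
        print(f.read())          # data
```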

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

>spool <urpath>objects_move.log

>select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<urpath>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes and gather statistics for those objects.
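The statement generation above can be sketched in Python. One caveat worth noting: in Oracle the MOVE clause applies to tables, while indexes are rebuilt into a new tablespace with ALTER INDEX ... REBUILD TABLESPACE, which is why the note says to rebuild the indexes afterwards. The segment names below are invented:

```python
# Invented segment list standing in for rows from dba_segments
segments = [
    ("TABLE", "EMP"),
    ("TABLE", "DEPT"),
    ("INDEX", "EMP_PK"),
]

target = "xyz"
ddl = []
for seg_type, name in segments:
    if seg_type == "TABLE":
        ddl.append(f"alter table {name} move tablespace {target};")
    else:  # indexes cannot be MOVEd; they are rebuilt into the new tablespace
        ddl.append(f"alter index {name} rebuild tablespace {target};")

print("\n".join(ddl))
```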

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

To start trace:

EXECUTE dbms_support.start_trace_in_session(SID, SERIAL);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(SID, SERIAL);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(SID, SERIAL, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

* pga_aggregate_target
* workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                         829440000 bytes
aggregate PGA target parameter                   2516582400 bytes
bytes processed                                  2492928000 bytes
cache hit percentage                                  86.31 percent
extra bytes read/written                          395366400 bytes
global memory bound                               125747200 bytes
maximum PGA allocated                            2666188800 bytes
maximum PGA used for auto workareas                17203200 bytes
maximum PGA used for manual workareas              52531200 bytes
over allocation count                                     0
PGA memory freed back to OS                       675020800 bytes
total freeable PGA memory                           6553600 bytes
total PGA allocated                              2395750400 bytes
total PGA inuse                                  1528320000 bytes
total PGA used for auto workareas                         0 bytes
total PGA used for manual workareas                       0 bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
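The "cache hit percentage" statistic can be cross-checked against the other rows: it is the ratio of bytes processed to bytes processed plus extra pass bytes (this relationship is an assumption based on how the statistic is commonly described). Using the figures above:

```python
bytes_processed = 2_492_928_000  # "bytes processed" from v$pgastat
extra_bytes_rw = 395_366_400     # "extra bytes read/written"

# cache hit % = bytes processed / (bytes processed + extra pass bytes) * 100
hit_pct = 100 * bytes_processed / (bytes_processed + extra_bytes_rw)
print(f"{hit_pct:.2f}")  # 86.31
```

This matches the 86.31 percent reported in the listing.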

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

  PGA TARGET  PGA TARGET            ESTIMATED  ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
     FOR EST      FACTOR ADV  BYTES PROCESSED         BYTES RW      CACHE HIT     ALLOC COUNT
------------ ----------- --- ---------------- ---------------- -------------- ---------------
    12582912         .50 ON          17250304                0         100.00               3
    18874368         .75 ON          17250304                0         100.00               3
    25165824        1.00 ON          17250304                0         100.00               0
    30198784        1.20 ON          17250304                0         100.00               0
    35231744        1.40 ON          17250304                0         100.00               0
    40264704        1.60 ON          17250304                0         100.00               0
    45297664        1.80 ON          17250304                0         100.00               0
    50331648        2.00 ON          17250304                0         100.00               0
    75497472        3.00 ON          17250304                0         100.00               0
   100663296        4.00 ON          17250304                0         100.00               0
   150994944        6.00 ON          17250304                0         100.00               0
   201326592        8.00 ON          17250304                0         100.00               0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
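One way to read the advice rows programmatically is to pick the smallest candidate target whose estimated over-allocation count is zero. A Python sketch using the first few rows from the listing above:

```python
# (pga_target_for_estimate, estd_overalloc_count) from v$pga_target_advice
advice = [
    (12582912, 3),
    (18874368, 3),
    (25165824, 0),
    (30198784, 0),
]

# Smallest target that Oracle estimates would never be over-allocated
candidates = [target for target, overalloc in advice if overalloc == 0]
best = min(candidates)
print(best)  # 25165824 bytes, i.e. 24 MB
```

In practice you would still round up for headroom, since the advice is an estimate based on past workload.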

There are other views that are also useful for PGA memory management

v$process

select
max(pga_used_mem) max_pga_used_mem,
max(pga_alloc_mem) max_pga_alloc_mem,
max(pga_max_mem) max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage per process:

select
sum(pga_used_mem) sum_pga_used_mem,
sum(pga_alloc_mem) sum_pga_alloc_mem,
sum(pga_max_mem) sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some ready-made scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issues a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail   --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

2(a) shutdown immediate   [to enable the audit trail]
2(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
2(c) create spfile from pfile
2(d) startup

3. truncate table aud$;   --> to remove any audit trail data residing in the table
   audit table;   --> this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%';   --> this query gives you the username along with the userhost from where the user is connected
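Step 4's filter can be illustrated off-line. A Python sketch that applies the same LIKE-style match and date formatting to a few invented audit rows:

```python
from datetime import datetime

# Invented rows mimicking dba_audit_trail:
# (action_name, username, userhost, timestamp)
audit_trail = [
    ("DROP TABLE", "SCOTT", "host1", datetime(2007, 4, 18, 12, 28)),
    ("INSERT", "HR", "host2", datetime(2007, 4, 18, 13, 0)),
    ("DROP TABLE", "HR", "host3", datetime(2007, 4, 19, 9, 15)),
]

# Equivalent of: where action_name like '%DROP TABLE%'
drops = [
    (user, host, ts.strftime("%d-%b-%Y %H:%M:%S"))
    for action, user, host, ts in audit_trail
    if "DROP TABLE" in action
]
for row in drops:
    print(row)
```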

CREATE DATABASE '/sybdata1/syb126IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
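The distinction can be sketched as a toy model: each backup type selects blocks by comparing their last-change time against a different baseline. The block names and days below are invented:

```python
# change_time[block] = day the block was last modified
change_time = {"A": 0, "B": 1, "C": 2, "D": 3}

level0_day = 0       # full (level 0) backup taken on day 0
last_level1_day = 2  # most recent level 1 backup taken on day 2
today = 3

# Differential level 1: blocks changed since the most recent level 1 (or 0)
differential = {b for b, t in change_time.items() if t > last_level1_day}

# Cumulative level 1: blocks changed since the level 0 backup
cumulative = {b for b, t in change_time.items() if t > level0_day}

print(sorted(differential))  # ['D']
print(sorted(cumulative))    # ['B', 'C', 'D']
```

The cumulative set contains the differential set, which is why cumulative backups are larger but make restores simpler: only one level 1 backup is needed on top of the level 0.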

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things to you in a very simple way. I am not able to find anything I am missing. If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


shygt Right click on the file

shygt Select Properties

shygt click on the Version tab

(see httpsupportmicrosoftcomdefaultaspxscid=kbenshyus831167

for further details)

Cause

shyshyshyshyshy

This Windows bug causes a change in behavior when HTTP POST requests are

resubmitted which can occur when the HTTP server terminates the browser

clients open connections that exceeded their allowed HTTP 11 KeepAlive

idle time In these cases the requests are resubmitted by the browser without

the needed HTTP headers

Fix

shyshyshy

It is possible to address this issue by applying Microsoft patches to

the client systems where the browser is running

As a more viable workaround it should be possible to disable the KeepAlive

timeout by restarting the HTTP Server component after making the following

configuration changes to httpdconf

Unix $ORACLE_HOMEApacheApacheconfhttpdconf

Windows ORACLE_HOMEApacheApacheconfhttpdconf

1 Locate the KeepAlive directive in httpdconf

KeepAlive On

2 Replace the KeepAlive directive in httpdconf with

vvv Oracle Note 2699801 vvvvvvv

KeepAlive On

KeepAlive Off

^^^ Oracle Note 2699801 ^^^^^^^

3 If you are making this change manually please run following command to

propagate these changes into the central configuration repository

Unix $ORACLE_HOMEdcmbindcmctl updateConfig shyco ohs shyv shyd

Windows ORACLE_HOMEdcmbindcmctl updateConfig shyco ohs shyv shyd

shy This step is not needed if the changes are mande via Enterprise Manager

References

shyshyshyshyshyshyshyshyshyshy

httpsupportmicrosoftcomdefaultaspxscid=kbenshyus831167

Checked for relevancy 282007

Activate your FREE membership today

Expert Answer Center gt Expert Knowledgebase gt View Answer

Expert Knowledgebase

EXPERT KNOWLEDGEBASE HOME

RSS FEEDS

I am having a problem exporting an Oracle database The error I got is exporting operators exporting referential integrity constraints exporting triggers

EXP-00056 ORACLE error 6550 encountered

ORA-06550 line 1 column 26

PLS-00201 identifier XDBDBMS_XDBUTIL_INT must be declared

ORA-06550 line 1 column 14

PLSQL Statement ignored

EXP-00056 ORACLE error 6550 encountered

ORA-06550 line 1 column 26

PLS-00201 identifier XDBDBMS_XDBUTIL_INT must be declared

ORA-06550 line 1 column 14

PLSQL Statement ignored

EXP-00000 Export terminated unsuccessfully

Please tell me how can I solve this QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First verify that this package exists with the following query

SELECT statusobject_idobject_typeownerobject_name

FROM dba_objects

SELECT object_name object_type statusFROM user_objects WHERE object_type LIKE JAVAoffline NORMAL performs a checkpoint for all data files in the tablespace All of these data files must be online You need not perform media recovery on this tablespace before bringing it back online You must use this option if the database is in noarchivelog mode TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written Any offline files may require media recovery before you bring the tablespace back online IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint You must perform media recovery on the tablespace before bringing it back online

OUTLN user is responsible for maintaining the stability between the plans for your queries with stored outlines

DBSNMP user is the one responsible to maintain the performance stats from enterprise manager even you can do this as SYS userhowever connecting to the database as SYS user is not recommended by oracle

AIX ndashFIND MEMORY SIZE

Prtconf 1Login in that db2 user su - db2inst1bash 2Go to sqllib directory cd sqllib

3Stopping the instance

$ db2stop

4Start an instance

As an instance owner on the host running db2 issue the following command

$ db2start

Dataflow Error

set serveroutput on size 1000000

Range for this size is 2000 to 1000000

From documentationOPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once You can use this parameter to prevent a session from opening an excessive number of cursorsIt is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors The number will vary from one application to another Assuming that a session does not open the number of cursors specified by OPEN_CURSORS there is no added overhead to setting this value higher than actually needed

Werner

Billy Verreynne Posts 4016 Registered 52799

Re no of open cursor Posted Aug 26 2007 1033 PM in response to 174313

Reply

gt how to resolve this if no of open cursor exeeds then value given in initora

The error is caused in the vast majority of cases by application code leaking cursors

Ie application code defining ref cursors using ref cursors but never closing ref cursors

Ive in fact never see this not to be the case

The WORSE thing you can do is increase that parameter as that simply moves the wall a few metres further away allowing yourself to run even faster into faster it

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select csid caddress chash_value COUNT() as Cursor Copiesfrom v$open_cursor cgroup by csid caddress chash_valuehaving COUNT() gt 2order by 3 DESC

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj for performance tuning

you may first start checking the following viewstablesDBA_WAITERS

V$SESSION_LONGOPSv$system_waits amp v$system_events

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from otn and through google

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network and, once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)
3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.
4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from 'backupsybasectsintcocso6csoasecso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sys.iqfile
sp_iqstatus

stop_asiq
Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will not be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
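The symlink/hard-link behaviour described above is easy to verify from a script. A minimal Python sketch (using a throwaway temp directory rather than the example paths; POSIX-only calls):

```python
import os
import tempfile

# Work in a scratch directory so the demo is self-contained.
d = tempfile.mkdtemp()
orig = os.path.join(d, "stuff")
with open(orig, "w") as f:
    f.write("hello")

sym = os.path.join(d, "sym")
hard = os.path.join(d, "hard")
os.symlink(orig, sym)   # like: ln -s stuff sym
os.link(orig, hard)     # like: ln stuff hard

print(os.path.islink(sym))    # True: a symlink is a distinct pointer file
print(os.path.islink(hard))   # False: a hard link looks like an ordinary file

# Deleting the original breaks the symlink but not the hard link.
os.remove(orig)
print(os.path.exists(sym))    # False: the symlink now dangles
with open(hard) as f:
    print(f.read())           # hello: the data survives via the hard link
```

This mirrors the point above: a hard link is another reference to the same data, while a symlink is only a pointer to a name.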

The syntax for creating a hard link of a directory is the same; to create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note, however, that most UNIX filesystems do not actually permit hard links to directories; expect ln to refuse unless the platform specifically allows it.)

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query will be stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.
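The spool recipe above is just SQL generating SQL, and the same statement strings can be assembled in any client language. A hedged Python sketch (the segment list and the xyz/RAKESH names are the illustrative values from the example, not real objects):

```python
# Rows as (segment_type, segment_name) pairs, e.g. fetched with:
#   select segment_type, segment_name from dba_segments
#   where tablespace_name = 'RAKESH'
segments = [("TABLE", "EMP"), ("TABLE", "DEPT")]

def move_statements(rows, target="xyz"):
    # Mirrors: 'alter ' || segment_type || ' ' || segment_name ||
    #          ' move tablespace xyz;'
    return ["alter {} {} move tablespace {};".format(t.lower(), n.lower(), target)
            for t, n in rows]

for stmt in move_statements(segments):
    print(stmt)
# alter table emp move tablespace xyz;
# alter table dept move tablespace xyz;
```

As the text notes, indexes are better rebuilt (alter index ... rebuild tablespace ...) than moved, so a real script would branch on segment_type.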

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace, run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = 'OSUSER';

to start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced: Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
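The 5% rule of thumb can be sanity-checked against the pga_aggregate_target shown in the v$pgastat example below; a small Python sketch (the 5% ratio is the approximation described above, not an exact documented formula):

```python
pga_aggregate_target = 2516582400         # bytes, from the v$pgastat example
per_session_cap = int(pga_aggregate_target * 0.05)
print(per_session_cap)                    # 125829120 bytes, roughly 120 MB

# For comparison, v$pgastat in the example reports a
# "global memory bound" of 125747200 bytes - close to this 5% estimate.
```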

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
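The "cache hit percentage" row can be cross-checked from two other rows of the same output. A quick Python check (the ratio bytes processed / (bytes processed + extra bytes read/written) is how this statistic is commonly derived; treat the formula as an assumption):

```python
bytes_processed = 2492928000     # "bytes processed" from v$pgastat above
extra_bytes_rw  = 395366400      # "extra bytes read/written"

cache_hit_pct = 100 * bytes_processed / (bytes_processed + extra_bytes_rw)
print(round(cache_hit_pct, 2))   # 86.31, matching the reported value
```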

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

      PGA TARGET PGA TARGET            ESTIMATED      ESTIMATED EXTRA  ESTIMATED PGA ESTIMATED OVER
         FOR EST     FACTOR ADV  BYTES PROCESSED             BYTES RW      CACHE HIT    ALLOC COUNT
---------------- ---------- --- ---------------- -------------------- -------------- --------------
        12582912        .50 ON          17250304                    0         100.00              3
        18874368        .75 ON          17250304                    0         100.00              3
        25165824       1.00 ON          17250304                    0         100.00              0
        30198784       1.20 ON          17250304                    0         100.00              0
        35231744       1.40 ON          17250304                    0         100.00              0
        40264704       1.60 ON          17250304                    0         100.00              0
        45297664       1.80 ON          17250304                    0         100.00              0
        50331648       2.00 ON          17250304                    0         100.00              0
        75497472       3.00 ON          17250304                    0         100.00              0
       100663296       4.00 ON          17250304                    0         100.00              0
       150994944       6.00 ON          17250304                    0         100.00              0
       201326592       8.00 ON          17250304                    0         100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions; with a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail   -- checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;   -- removes any audit trail data residing in the table

4. SQL> audit table;   -- starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';   -- this query gives you the username along with the userhost from which that username is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP - be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj:
RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If yes, please let me know.
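The two strategies can be made concrete with a toy model: given the day each block was last modified, a differential (level 1) backup copies blocks changed since the most recent backup of any level, while a cumulative backup copies blocks changed since the last level 0. A Python sketch (the day numbers and block names are invented for illustration):

```python
def blocks_to_back_up(changed_on_day, last_level0_day, last_backup_day, cumulative):
    """changed_on_day maps block name -> day it was last modified."""
    since = last_level0_day if cumulative else last_backup_day
    return {b for b, day in changed_on_day.items() if day > since}

# Level 0 ran on day 0, incrementals on days 1 and 2; it is now day 3.
changes = {"b1": 1, "b2": 2, "b3": 3, "b4": 0}

print(sorted(blocks_to_back_up(changes, 0, 2, cumulative=False)))  # ['b3']
print(sorted(blocks_to_back_up(changes, 0, 2, cumulative=True)))   # ['b1', 'b2', 'b3']
```

The cumulative run re-copies b1 and b2 even though earlier incrementals already had them - exactly the extra space and time the OTN text describes.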

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Configuration changes to httpd.conf:

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: ORACLE_HOME\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf:

KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with:

# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository:

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: ORACLE_HOME\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References
----------
http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status FROM user_objects WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX - FIND MEMORY SIZE
prtconf

1. Log in as the db2 user: su - db2inst1; bash
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no of open cursor    Posted: Aug 26, 2007 10:33 PM    in response to: 174313

Reply

> how to resolve this if no of open cursor exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors, i.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.
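The leak Billy describes usually looks like code that opens a new cursor per call and forgets to close it on some path. A generic Python illustration (the Cursor class is a stand-in counter, not a real Oracle driver API):

```python
class Cursor:
    """Stand-in for a database cursor handle; counts open handles."""
    open_handles = 0

    def __init__(self):
        Cursor.open_handles += 1

    def close(self):
        Cursor.open_handles -= 1

def leaky(n):
    for _ in range(n):
        Cursor()              # opened, never closed: handles pile up

def tidy(n):
    for _ in range(n):
        c = Cursor()
        try:
            pass              # ... use the cursor ...
        finally:
            c.close()         # always closed, even on an error path

leaky(100)
print(Cursor.open_handles)    # 100: what an ORA-01000 session looks like
Cursor.open_handles = 0
tidy(100)
print(Cursor.open_handles)    # 0: close in a finally block (or context
                              # manager) and the count stays flat
```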

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select csid caddress chash_value COUNT() as Cursor Copiesfrom v$open_cursor cgroup by csid caddress chash_valuehaving COUNT() gt 2order by 3 DESC

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj for performance tuning

you may first start checking the following viewstablesDBA_WAITERS

V$SESSION_LONGOPSv$system_waits amp v$system_events

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from otn and through google

Apparantly sqlnetora (also known as Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (oracles network

services funcionality) features The file is located in $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows

A little about Net8 Net8 establishes network sessions and transfers data between a client machine and a server or between two servers It is located on each machine in the network and once a network session is established Net8 acts as a data courier for the client and the

server

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMESORA)2) Listener Configuration File (LISTENERORA)

3) Oracle Names Server Configuration File (NAMESORA) The Oracle Names server configuration file (NAMESORA) contains the parameters that specify the location domain

information and optional configuration parameters for each Oracle Names server NAMESORA is located in $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on

Windows NT

4) Oracle Connection Manager Configuration File (CMANORA) The Connection Manager configuration file (CMANORA) contains the parameters that specify preferences for using

Oracle Connection Manager CMANORA is located at $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows NT

first | lt previous | next gt | lastRestore database sybdata1syb126IQcso_otcso_otdbfrom backupsybasectsintcocso6csoasecso_ot rename IQ_SYSTEM_MAIN to sybdata1syb126IQcso_dataprepcso_dataprep_NEWiqrename IQ_SYSTEM_MAIN1 to sybdata1syb126IQcso_dataprepcso_dataprep02iqrename IQ_SYSTEM_MAIN2 to sybdata1syb126IQcso_dataprepcso_dataprep03iqrename IQ_SYSTEM_TEMP to sybdata1syb126IQcso_dataprepcso_dataprepiqtmprename IQ_SYSTEM_TEMP1 to sybdata1syb126IQcso_dataprepcso_dataprep02iqtmpselect from sysiqfile sp_iqstatus

stop_asiqRestore database lsquosybdata1syb126IQcso_dataprepcso_dataprep_newdbrsquo from lsquosybdata1dumpcso_otdmprsquorename IQ_SYSTEM_MAIN to lsquosybdata1syb126IQcso_dataprepcso_dataprep_newiqrsquorename IQ_SYSTEM_MAIN1 to lsquosybdata1syb126IQcso_dataprepcso_dataprep02_newiqrsquorename IQ_SYSTEM_MAIN2 to lsquosybdata1syb126IQcso_dataprepcso_dataprep03_newiqrsquorename IQ_SYSTEM_TEMP to lsquosybdata1syb126IQcso_dataprepcso_dataprep_newiqtmp

rename IQ_SYSTEM_TEMP1 to lsquosybdata1syb126IQcso_dataprepcso_dataprep02_newiqtmp

A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well

To create a symbolic link the syntax of the command is similar to a copy or move command

existing file first destination file second For example to link the directory

exportspacecommonarchive to archive for easy access use

ln -s exportspacecommonarchive archive

A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser

To create a hard link of the file exporthomefredstuff to vartmpthing use

ln exporthomefredstuff vartmpthing

The syntax for creating a hard link of a directory is the same To create a hard link of

varwwwhtml to varwwwwebroot use

ln varwwwhtml varwwwwebrootselect alter || segment_typesegment_name || move tablespace xyz from dba_segments where tablespace_name=RAKESH

gtspool off

result of the query will stores in the spool file objects_movelog

gtlturpathgtobjects_movelog

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name=XYZ

rebuild the indexes

and gather statistics for those objects

if u want to move all the objects to another tablesapce just do the following

gtspool lturpathgtobjects_movelog

gt select alter || segment_typesegment_name || move tablespace xyz from dba_segments where tablespace_name=RAKESH

gtspool off

result of the query will stores in the spool file objects_movelog

gtlturpathgtobjects_movelog

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name=XYZ

rebuild the indexes

and gather statistics for those objects

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in initora It will enable trace for all sessions and the backgroundprocesses

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUSgt ALTER SYSTEM SET trace_enabled = TRUE

to stop trace run

SQLPLUSgt ALTER SYSTEM SET trace_enabled = FALSE

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_sessionset_sql_trace (TRUE)EXECUTE dbms_sessionset_sql_trace (FALSE)

- or -

EXECUTE dbms_supportstart_traceEXECUTE dbms_supportstop_trace

3 Enable trace in another session

Find out SID and SERIAL from v$session For example

SELECT FROM v$session WHERE osuser = OSUSER

to start trace

EXECUTE dbms_supportstart_trace_in_session (SID SERIAL)

to stop trace

EXECUTE dbms_supportstop_trace_in_session (SID SERIAL)

- or -

EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL TRUE)EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL FALSE)

Using orapwd to Connect Remotely as SYSDBAAugust 5 2003Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users By default the user SYS is the only user that has these privileges Creating a password file via orapwd enables remote users to connect with administrative privileges through SQLNet The SYSOPER privilege allows instance startup shutdown mount and dismount It allows the DBA to perform general database maintenance without viewing user data The SYSDBA privilege is the same as connect internal was in prior versions It provides the ability to do

everything unrestricted If orapwd has not yet been executed attempting to grant SYSDBA or SYSOPER privileges will result in the following error SQLgt grant sysdba to scott ORA-01994 GRANT failed cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information The file location will default to the current directory unless the full path is specified The contents are encrypted and are unreadable The password required is the one for the SYS user of the database The max_usersis the number of database users that can be granted SYSDBA or SYSOPER This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;
Grant succeeded.

4 Confirm that the user is listed in the password file

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

- pga_aggregate_target
- workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set workarea_size_policy for the account doing the import.
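As a sketch of that logon-trigger approach (the trigger name, the IMPORT_USER account, and the 100 MB sort_area_size are assumptions for illustration, not from the original note):

```sql
-- Hypothetical sketch: force manual work-area management for one account
CREATE OR REPLACE TRIGGER import_workarea_trg
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMPORT_USER' THEN  -- assumed account doing the large import
    EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';
  END IF;
END;
/
```

All other sessions keep the automatic policy; only the import account gets the large manual sort area.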

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
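As a rough sanity check of that ~5% figure, the arithmetic can be run against the pga_aggregate_target shown in the v$pgastat listing in this note (the byte value is taken from that listing; the 5% rule itself is the approximation described above):

```shell
# pga_aggregate_target from the v$pgastat listing in this note (bytes)
target=2516582400
# approximate per-session work-area cap at ~5%
cap=$((target * 5 / 100))
echo "$cap bytes"   # 125829120 bytes, close to the 125747200 'global memory bound'
```

The computed cap lands close to the "global memory bound" statistic in the same listing, which is what the 5% rule of thumb predicts.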

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912   .50  ON  17250304  0  100.00  3
18874368   .75  ON  17250304  0  100.00  3
25165824  1.00  ON  17250304  0  100.00  0
30198784  1.20  ON  17250304  0  100.00  0
35231744  1.40  ON  17250304  0  100.00  0
40264704  1.60  ON  17250304  0  100.00  0
45297664  1.80  ON  17250304  0  100.00  0
50331648  2.00  ON  17250304  0  100.00  0
75497472  3.00  ON  17250304  0  100.00  0
100663296 4.00  ON  17250304  0  100.00  0
150994944 6.00  ON  17250304  0  100.00  0
201326592 8.00  ON  17250304  0  100.00  0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail  -> checks if the audit trail is turned on.

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3. Else:
(a) shutdown immediate  [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;  -> removes any audit trail data residing in the table.
   SQL> audit table;  -> starts auditing events pertaining to tables.

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';  -> gives you the username along with the userhost from which that user is connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp             1000 MB
iq_system_main   2000 MB
iq_system_main2  1000 MB
iq_system_main3  5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
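In RMAN syntax the two flavours described above are requested like this (a sketch; per the OTN document cited below, level 1 backups are differential unless CUMULATIVE is specified):

```
BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- base for subsequent incrementals
BACKUP INCREMENTAL LEVEL 1 DATABASE;             -- differential incremental (the default)
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- cumulative incremental
```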

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if I have, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
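The /etc/system settings above are Solaris-specific. On a Linux host the analogous limits can be inspected under /proc (a sketch, assuming Linux; the ipcs step simply may be skipped if the tool is absent):

```shell
# SEMMSL  SEMMNS  SEMOPM  SEMMNI: max semaphores per set, system-wide total,
# max operations per semop call, max number of semaphore sets
cat /proc/sys/kernel/sem

# List the semaphore sets currently allocated, if the ipcs tool is available
{ command -v ipcs >/dev/null && ipcs -s; } || true
```

Comparing the allocated sets against SEMMNI shows how close the system is to the semget failure reported above.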


References

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 282007


I am having a problem exporting an Oracle database. The error I got is:

. . exporting operators
. . exporting referential integrity constraints
. . exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects
WHERE owner = 'XDB' AND object_name = 'DBMS_XDBUTIL_INT';

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode. TEMPORARY performs a checkpoint for all online data files in the tablespace, but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online. IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. Even though you can do this as the SYS user, connecting to the database as SYS is not recommended by Oracle.

AIX: find memory size

$ prtconf

DB2: stopping and starting an instance

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne Posts 4016 Registered 52799

Re: no of open cursor. Posted: Aug 26, 2007 10:33 PM, in response to: 174313


> how do I resolve this if the number of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for the same SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement for which the app creates so many handles, and then trace and fix the problem in the application.
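A follow-up query along these lines can pull the full statement text for one suspect session (a sketch; :leaking_sid is a bind variable you supply from the cursor-copies query above):

```sql
-- Fetch the SQL text for the session suspected of leaking cursors
SELECT s.sid, s.username, s.program, t.piece, t.sql_text
FROM   v$session s
       JOIN v$sqltext t ON t.address = s.sql_address
WHERE  s.sid = :leaking_sid
ORDER  BY t.piece;
```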

Nagaraj, for performance tuning you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$session_wait and v$system_event.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.
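For a feel of what the file contains, a minimal sqlnet.ora might look like this (a sketch with assumed values; the parameter names are standard, the settings are illustrative only):

```
# Minimal sqlnet.ora sketch (hypothetical values)
NAMES.DIRECTORY_PATH = (TNSNAMES, HOSTNAME)  # name-resolution order
SQLNET.EXPIRE_TIME = 10                      # dead-connection probe, minutes
TRACE_LEVEL_CLIENT = OFF                     # client-side Net tracing
```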

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.
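For completeness, a single TNSNAMES.ORA entry has this shape (a sketch; the alias, host, and service names are made up):

```
# Hypothetical tnsnames.ora entry
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```

A client then connects with, e.g., sqlplus scott/tiger@ORCL, and Net8 resolves the alias through this file.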

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from backupsybasectsintcocso6csoasecso_ot
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
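A quick way to see the difference between the two link types is a throwaway demo in a temporary directory (nothing here touches the example paths above; all names are made up):

```shell
cd "$(mktemp -d)"
echo "hello" > original.txt
ln -s original.txt soft.txt   # symbolic link: a named pointer to original.txt
ln original.txt hard.txt      # hard link: a second name for the same inode
ls -l soft.txt                # listing shows: soft.txt -> original.txt
rm original.txt
cat hard.txt                  # still prints "hello"; data survives via the hard link
cat soft.txt 2>/dev/null || echo "soft link is dangling"
```

After the original is removed, the hard link still reaches the data while the symbolic link dangles, which is exactly the behavior described in the paragraphs above.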

Note that, despite what is sometimes claimed, most Unix filesystems do not allow ordinary users to create hard links to directories. To link /var/www/html to /var/www/webroot, use a symbolic link instead:

ln -s /var/www/html /var/www/webroot

If you want to move all the objects in a tablespace to another tablespace (xyz here), just do the following:

> spool <urpath>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query is stored in the spool file objects_move.log. Run it:

> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.


Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the responseWhile I do believe you were on the right track I think you might have gotten some terms

mixed up According to some documentation on otn

There are two types of incremental backups 1) Differential Incremental Backups RMAN backsup all the blocks that have changed since thte most recent incremental backup at level 1 or level 0 For example in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (its a short one) you can find it at this site

httpdownloadoraclecomdocscdB19306_01backup102b14191rcmconc1005htm Suraj

RE Incremantal RMAN BackupsI Tried to explain you things in a very simple way I am not able to find anything I am missing

If yes please let me know

ORA-27154 postwait create failed gt gt ORA-27300 OS system dependent operationsemget failed with status 28 gt gt ORA-27301 OS failure message No space left on device

gt No space left on device sounds quite clear for me gt Maybe the disk where you want to create the database is full Another gt point colud be insufficient swap space but I would expect another error gt message for that

Note that the error message is linked to semget You seem to have run out of semaphores You configure the max number of semphores in etcsystem

set semsysseminfo_semmni=100 set semsysseminfo_semmns=1024 set semsysseminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 52: 7101564-Daily-Work

Expert Knowledgebase

I am having a problem exporting an Oracle database. The error I got is:

. exporting operators
. exporting referential integrity constraints
. exporting triggers

EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00056: ORACLE error 6550 encountered
ORA-06550: line 1, column 26:
PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared
ORA-06550: line 1, column 14:
PL/SQL: Statement ignored
EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this. QUESTION POSED ON 23 SEP 2004

QUESTION ANSWERED BY Brian Peasland

First, verify that this package exists with the following query:

SELECT status, object_id, object_type, owner, object_name
FROM dba_objects

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

offline NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in NOARCHIVELOG mode.

offline TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

offline IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.

The OUTLN user is responsible for maintaining the stability of the plans for your queries through stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size: prtconf

DB2 instance stop/start:

1. Log in as the db2 user: su - db2inst1 (then start bash)

2. Go to the sqllib directory: cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As an instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4016, Registered: 5/27/99)

Re: no of open cursor, posted Aug 26, 2007 10:33 PM in response to 174313

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run even faster into it.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles, and then trace and fix the problem in the application.
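The grouping logic of that query is easy to prototype off-line. Below is a small Python sketch (a toy model, not a real v$open_cursor feed; the dict keys simply mirror the sid, address and hash_value columns, and the default threshold matches the HAVING COUNT(*) > 2 clause):

```python
from collections import Counter

def find_cursor_leaks(open_cursors, threshold=2):
    """Group open cursor handles by (sid, address, hash_value) and return
    the groups holding more handles than `threshold` for the very same SQL,
    the signature of an application leaking cursors."""
    counts = Counter(
        (c["sid"], c["address"], c["hash_value"]) for c in open_cursors
    )
    return {key: n for key, n in counts.items() if n > threshold}

# A leaking session: sid 17 holds five handles for one and the same
# statement, while sid 23 stays at a normal two.
handles = (
    [{"sid": 17, "address": "0xA1", "hash_value": 111}] * 5
    + [{"sid": 23, "address": "0xB2", "hash_value": 222}] * 2
)
leaks = find_cursor_leaks(handles)
print(leaks)  # {(17, '0xA1', 111): 5}
```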

Nagaraj: for performance tuning, you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$system_waits & v$system_events.

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from 'backupsybasectsintcocso6csoasecso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'
select * from sysiqfile
sp_iqstatus

stop_asiq

restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot
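The behaviour described above can be demonstrated with a short Python sketch (file links only, POSIX semantics, working in a throw-away temp directory; os.link and os.symlink correspond to ln and ln -s):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, "stuff")
with open(orig, "w") as f:
    f.write("important data")

hard = os.path.join(d, "thing")    # like: ln stuff thing
os.link(orig, hard)
soft = os.path.join(d, "archive")  # like: ln -s stuff archive
os.symlink(orig, soft)

# The original and the hard link share one inode; the symlink is a pointer.
print(os.stat(orig).st_nlink)   # 2
print(os.path.islink(soft))     # True

# Delete the original: the hard link still reaches the data,
# but the symbolic link is left dangling.
os.remove(orig)
hard_contents = open(hard).read()
print(hard_contents)            # important data
print(os.path.exists(soft))     # False (dangling symlink)
```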

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<osuser>';

To start trace:

EXECUTE dbms_support.start_trace_in_session (<sid>, <serial#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (<sid>, <serial#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<sid>, <serial#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<sid>, <serial#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified

Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

  • pga_aggregate_target
  • workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                829440000          bytes
aggregate PGA target parameter           2516582400         bytes
bytes processed                          2492928000         bytes
cache hit percentage                     86.31              percent
extra bytes read/written                 395366400          bytes
global memory bound                      125747200          bytes
maximum PGA allocated                    2666188800         bytes
maximum PGA used for auto workareas      17203200           bytes
maximum PGA used for manual workareas    52531200           bytes
over allocation count                    0
PGA memory freed back to OS              675020800          bytes
total freeable PGA memory                6553600            bytes
total PGA allocated                      2395750400         bytes
total PGA inuse                          1528320000         bytes
total PGA used for auto workareas        0                  bytes
total PGA used for manual workareas      0                  bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

      PGA TARGET PGA TARGET                      ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
         FOR EST FACTOR ADV     BYTES PROCESSED         BYTES RW     CACHE HIT    ALLOC COUNT
---------------- ---------- --- ---------------- ---------------- ------------- --------------
        12582912        .50 ON          17250304                0        100.00              3
        18874368        .75 ON          17250304                0        100.00              3
        25165824       1.00 ON          17250304                0        100.00              0
        30198784       1.20 ON          17250304                0        100.00              0
        35231744       1.40 ON          17250304                0        100.00              0
        40264704       1.60 ON          17250304                0        100.00              0
        45297664       1.80 ON          17250304                0        100.00              0
        50331648       2.00 ON          17250304                0        100.00              0
        75497472       3.00 ON          17250304                0        100.00              0
       100663296       4.00 ON          17250304                0        100.00              0
       150994944       6.00 ON          17250304                0        100.00              0
       201326592       8.00 ON          17250304                0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
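The decision rule used above (take the smallest pga_aggregate_target for which the advice view predicts no over-allocations) can be sketched in Python. This is only an illustration; the dict keys abbreviate the PGA_TARGET_FOR_ESTIMATE and ESTD_OVERALLOC_COUNT columns, and the rows are the first four from the advice output above:

```python
def pick_pga_target(advice_rows):
    """Return the smallest estimated pga_aggregate_target for which
    v$pga_target_advice predicts no over-allocations, or None."""
    ok = [r for r in advice_rows if r["estd_overalloc_count"] == 0]
    if not ok:
        return None
    return min(r["pga_target_for_estimate"] for r in ok)

# (target bytes, estimated over-allocation count) from the query above
advice = [
    {"pga_target_for_estimate": 12582912, "estd_overalloc_count": 3},
    {"pga_target_for_estimate": 18874368, "estd_overalloc_count": 3},
    {"pga_target_for_estimate": 25165824, "estd_overalloc_count": 0},
    {"pga_target_for_estimate": 30198784, "estd_overalloc_count": 0},
]
print(pick_pga_target(advice))  # 25165824, i.e. the 25M setting the text recommends
```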

There are other views that are also useful for PGA memory management

v$process

select
max(pga_used_mem) max_pga_used_mem,
max(pga_alloc_mem) max_pga_alloc_mem,
max(pga_max_mem) max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of current PGA usage across all processes:

select
sum(pga_used_mem) sum_pga_used_mem,
sum(pga_alloc_mem) sum_pga_alloc_mem,
sum(pga_max_mem) sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the db as sysdba.

2. SQL> show parameter audit_trail    --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate    [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$    --> to remove any audit trail data residing in the table

4. SQL> audit table;    --> this starts auditing events pertaining to tables

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy:hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';    --> this query gives you the username, along with the userhost from which the user is connected.
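The filter in that last query can be mimicked in Python for illustration (a toy model; the row fields mirror dba_audit_trail columns, and the sample usernames and hosts are made up):

```python
def who_dropped_tables(audit_rows):
    """Mimic the dba_audit_trail query: keep only DROP TABLE actions and
    report who issued them, from which host, and when."""
    return [
        (r["username"], r["userhost"], r["timestamp"])
        for r in audit_rows
        if r["action_name"] == "DROP TABLE"
    ]

audit = [
    {"action_name": "DROP TABLE", "username": "SCOTT",
     "userhost": "ws042", "timestamp": "18-apr-2007:12:28:00"},
    {"action_name": "SELECT", "username": "HR",
     "userhost": "ws007", "timestamp": "18-apr-2007:12:29:00"},
]
print(who_dropped_tables(audit))  # [('SCOTT', 'ws042', '18-apr-2007:12:28:00')]
```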

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
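The distinction between the two incremental types can be sketched as a toy model in Python (not RMAN itself; just a backup history and per-day changed-block sets, with the block names invented for the example):

```python
def blocks_backed_up(history, changed_since, kind):
    """Toy model of a level-1 incremental backup.

    history       -- list of (day, level) backups already taken, in order
    changed_since -- dict: day -> set of blocks changed since that backup
    kind          -- 'differential' copies blocks changed since the most
                     recent level 1 or 0 backup; 'cumulative' copies blocks
                     changed since the most recent level 0 backup.
    """
    if kind == "cumulative":
        base = [day for day, level in history if level == 0][-1]
    else:  # differential: whatever backup ran last, level 1 or 0
        base = history[-1][0]
    return changed_since[base]

history = [("sun", 0), ("mon", 1), ("tue", 1)]
changed = {
    "sun": {"b1", "b2", "b3"},  # everything changed since Sunday's level 0
    "tue": {"b3"},              # only b3 changed since Tuesday's level 1
}
print(blocks_backed_up(history, changed, "differential"))        # {'b3'}
print(sorted(blocks_backed_up(history, changed, "cumulative")))  # ['b1', 'b2', 'b3']
```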

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If there is, please let me know.

ORA-27154: post/wait create failed
> > ORA-27300: OS system dependent operation: semget failed with status: 28
> > ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear for me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

Page 53: 7101564-Daily-Work

SELECT object_name object_type statusFROM user_objects WHERE object_type LIKE JAVAoffline NORMAL performs a checkpoint for all data files in the tablespace All of these data files must be online You need not perform media recovery on this tablespace before bringing it back online You must use this option if the database is in noarchivelog mode TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written Any offline files may require media recovery before you bring the tablespace back online IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint You must perform media recovery on the tablespace before bringing it back online

The OUTLN user is responsible for maintaining plan stability for your queries via stored outlines.

The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size:

prtconf

DB2 - stopping and starting an instance:

1. Log in as the db2 user: su - db2inst1
2. Go to the sqllib directory: cd sqllib
3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running DB2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation: OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors. It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed.

Werner

Billy Verreynne (Posts: 4016, Registered: 5/27/99)

Re: no. of open cursors. Posted: Aug 26, 2007 10:33 PM, in response to 174313

Reply

> how to resolve this if the no. of open cursors exceeds the value given in init.ora?

The error is caused in the vast majority of cases by application code leaking cursors

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing you to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj for performance tuning

you may first start checking the following views/tables: DBA_WAITERS, V$SESSION_LONGOPS, v$session_wait & v$system_event

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA). The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA). The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and %ORACLE_HOME%\network\admin on Windows NT.

restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus

stop_asiq

restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. The data is removed only when the last hard link to it is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
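The difference between the two link types is easiest to see in a small shell session. This is a sketch with made-up throwaway file names in a temp directory, not paths from the notes above:

```shell
# Sketch: a hard link shares the inode; a symlink is only a pointer to a name.
cd "$(mktemp -d)"

echo "data" > original.txt
ln original.txt hard.txt        # hard link: same inode, same filesystem only
ln -s original.txt soft.txt     # symbolic link: a pointer to the name

ls -li original.txt hard.txt    # both lines show the identical inode number

rm original.txt
cat hard.txt                    # still prints the data -- the inode lives on
cat soft.txt 2>/dev/null || echo "dangling symlink"
```

The last two commands show the practical consequence: the data survives through the remaining hard link, while the symlink is left dangling once the original name is gone.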

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

SQL> spool <urpath>/objects_move.log

SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name = 'RAKESH';

SQL> spool off

The result of the query is stored in the spool file objects_move.log.

SQL> @<urpath>/objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Then rebuild the indexes and gather statistics for those objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session (<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session (<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA (August 5, 2003, Don Burleson)

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
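As a back-of-envelope check of that ~5% figure, here is a rough arithmetic sketch using the pga_aggregate_target value that appears in the v$pgastat output further down in these notes. The 5% cap is undocumented and version-dependent, so treat this as an estimate only:

```shell
# Rough per-session cap estimate, assuming the ~5% rule described above.
pga_aggregate_target=2516582400                 # bytes, from the v$pgastat output below
per_session_cap=$((pga_aggregate_target / 20))  # ~5% of the target
echo "$per_session_cap bytes"                   # 125829120 bytes, roughly 120 MB
```

Interestingly, this lands close to the "global memory bound" figure (125747200 bytes) in the same v$pgastat output, which is the related per-workarea limit Oracle derived for that instance.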

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET   PGA TARGET        ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
   FOR EST   FACTOR ADV        BYTES PROCESSED       BYTES RW  CACHE HIT  ALLOC COUNT
----------  ------- ---  ---------------------  -------------  ---------  -----------
  12582912      .50  ON               17250304              0     100.00            3
  18874368      .75  ON               17250304              0     100.00            3
  25165824     1.00  ON               17250304              0     100.00            0
  30198784     1.20  ON               17250304              0     100.00            0
  35231744     1.40  ON               17250304              0     100.00            0
  40264704     1.60  ON               17250304              0     100.00            0
  45297664     1.80  ON               17250304              0     100.00            0
  50331648     2.00  ON               17250304              0     100.00            0
  75497472     3.00  ON               17250304              0     100.00            0
 100663296     4.00  ON               17250304              0     100.00            0
 150994944     6.00  ON               17250304              0     100.00            0
 201326592     8.00  ON               17250304              0     100.00            0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

The following displays the sum of all current PGA usage across processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   (checks whether the audit trail is turned on)

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

(a) shutdown immediate   (to enable the audit trail)
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table sys.aud$;   (removes any audit trail data residing in the table)

4. SQL> audit table;   (this starts auditing events pertaining to tables)

5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
temporary size 1000
iq page size 65536

Dbspaces:
system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
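The "since the full backup" versus "since the last backup" distinction can be sketched outside RMAN with plain files and marker timestamps. This is a conceptual analogy only (RMAN tracks changed blocks, not files), and all names here are made up:

```shell
# Model: full backup Sunday, incremental backup Monday, then ask on Tuesday:
# cumulative = what changed since the FULL backup,
# differential incremental = what changed since the LAST backup.
cd "$(mktemp -d)"
touch full_backup_marker            # the level 0 / full backup
sleep 1; echo a > mon.txt           # Monday's change
touch incr_mon_marker               # Monday's incremental backup
sleep 1; echo b > tue.txt           # Tuesday's change

# Cumulative view: everything since the full backup (mon.txt and tue.txt)
find . -type f -newer full_backup_marker ! -name '*marker*'

# Differential incremental view: only since the last backup (tue.txt)
find . -type f -newer incr_mon_marker ! -name '*marker*'
```

The second find returns a strictly smaller set, which is exactly why differential incrementals are faster to take but need a longer chain at restore time.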

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
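The set lines above are Solaris /etc/system syntax. On Linux the corresponding semaphore limits live under /proc instead; a quick way to inspect them (a sketch, assuming a reasonably modern kernel):

```shell
# Linux counterparts of the Solaris semaphore tunables:
# /proc/sys/kernel/sem holds four values: SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem

# Semaphore sets currently allocated (compare the count against SEMMNI)
ipcs -s
```

If semget keeps failing after raising the limits, checking ipcs -s for leftover semaphore sets from crashed instances is a common next step.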

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 54: 7101564-Daily-Work

Range for this size is 2000 to 1000000

From documentationOPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once You can use this parameter to prevent a session from opening an excessive number of cursorsIt is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors The number will vary from one application to another Assuming that a session does not open the number of cursors specified by OPEN_CURSORS there is no added overhead to setting this value higher than actually needed

Werner

Billy Verreynne Posts 4016 Registered 52799

Re no of open cursor Posted Aug 26 2007 1033 PM in response to 174313

Reply

gt how to resolve this if no of open cursor exeeds then value given in initora

The error is caused in the vast majority of cases by application code leaking cursors

Ie application code defining ref cursors using ref cursors but never closing ref cursors

Ive in fact never see this not to be the case

The WORSE thing you can do is increase that parameter as that simply moves the wall a few metres further away allowing yourself to run even faster into faster it

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL Typically one will see a cursor leaking application with 100s of open cursor handles for the very same SQL

select csid caddress chash_value COUNT() as Cursor Copiesfrom v$open_cursor cgroup by csid caddress chash_valuehaving COUNT() gt 2order by 3 DESC

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application

Nagaraj for performance tuning

you may first start checking the following viewstablesDBA_WAITERS

V$SESSION_LONGOPSv$system_waits amp v$system_events

if you have statspack report generated then you can have a look at the timed events

This is what I could find out from otn and through google

Apparantly sqlnetora (also known as Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (oracles network

services funcionality) features The file is located in $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows

A little about Net8 Net8 establishes network sessions and transfers data between a client machine and a server or between two servers It is located on each machine in the network and once a network session is established Net8 acts as a data courier for the client and the

server

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMESORA)2) Listener Configuration File (LISTENERORA)

3) Oracle Names Server Configuration File (NAMESORA) The Oracle Names server configuration file (NAMESORA) contains the parameters that specify the location domain

information and optional configuration parameters for each Oracle Names server NAMESORA is located in $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on

Windows NT

4) Oracle Connection Manager Configuration File (CMANORA) The Connection Manager configuration file (CMANORA) contains the parameters that specify preferences for using

Oracle Connection Manager CMANORA is located at $ORACLE_HOMEnetworkadmin on UNIX and ORACLE_HOMEnetworkadmin on Windows NT

first | lt previous | next gt | lastRestore database sybdata1syb126IQcso_otcso_otdbfrom backupsybasectsintcocso6csoasecso_ot rename IQ_SYSTEM_MAIN to sybdata1syb126IQcso_dataprepcso_dataprep_NEWiqrename IQ_SYSTEM_MAIN1 to sybdata1syb126IQcso_dataprepcso_dataprep02iqrename IQ_SYSTEM_MAIN2 to sybdata1syb126IQcso_dataprepcso_dataprep03iqrename IQ_SYSTEM_TEMP to sybdata1syb126IQcso_dataprepcso_dataprepiqtmprename IQ_SYSTEM_TEMP1 to sybdata1syb126IQcso_dataprepcso_dataprep02iqtmpselect from sysiqfile sp_iqstatus

stop_asiqRestore database lsquosybdata1syb126IQcso_dataprepcso_dataprep_newdbrsquo from lsquosybdata1dumpcso_otdmprsquorename IQ_SYSTEM_MAIN to lsquosybdata1syb126IQcso_dataprepcso_dataprep_newiqrsquorename IQ_SYSTEM_MAIN1 to lsquosybdata1syb126IQcso_dataprepcso_dataprep02_newiqrsquorename IQ_SYSTEM_MAIN2 to lsquosybdata1syb126IQcso_dataprepcso_dataprep03_newiqrsquorename IQ_SYSTEM_TEMP to lsquosybdata1syb126IQcso_dataprepcso_dataprep_newiqtmp

rename IQ_SYSTEM_TEMP1 to lsquosybdata1syb126IQcso_dataprepcso_dataprep02_newiqtmp

A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well

To create a symbolic link the syntax of the command is similar to a copy or move command

existing file first destination file second For example to link the directory

exportspacecommonarchive to archive for easy access use

ln -s exportspacecommonarchive archive

A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser

To create a hard link of the file exporthomefredstuff to vartmpthing use

ln exporthomefredstuff vartmpthing

The syntax for creating a hard link of a directory is the same To create a hard link of

varwwwhtml to varwwwwebroot use

ln varwwwhtml varwwwwebrootselect alter || segment_typesegment_name || move tablespace xyz from dba_segments where tablespace_name=RAKESH

gtspool off

result of the query will stores in the spool file objects_movelog

gtlturpathgtobjects_movelog

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name=XYZ

rebuild the indexes

and gather statistics for those objects

if u want to move all the objects to another tablesapce just do the following

gtspool lturpathgtobjects_movelog

gt select alter || segment_typesegment_name || move tablespace xyz from dba_segments where tablespace_name=RAKESH

gtspool off

result of the query will stores in the spool file objects_movelog

gtlturpathgtobjects_movelog

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name=XYZ

rebuild the indexes

and gather statistics for those objects

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in initora It will enable trace for all sessions and the backgroundprocesses

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUSgt ALTER SYSTEM SET trace_enabled = TRUE

to stop trace run

SQLPLUSgt ALTER SYSTEM SET trace_enabled = FALSE

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_sessionset_sql_trace (TRUE)EXECUTE dbms_sessionset_sql_trace (FALSE)

- or -

EXECUTE dbms_supportstart_traceEXECUTE dbms_supportstop_trace

3 Enable trace in another session

Find out SID and SERIAL from v$session For example

SELECT FROM v$session WHERE osuser = OSUSER

to start trace

EXECUTE dbms_supportstart_trace_in_session (SID SERIAL)

to stop trace

EXECUTE dbms_supportstop_trace_in_session (SID SERIAL)

- or -

EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL TRUE)EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL FALSE)

Using orapwd to Connect Remotely as SYSDBAAugust 5 2003Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users By default the user SYS is the only user that has these privileges Creating a password file via orapwd enables remote users to connect with administrative privileges through SQLNet The SYSOPER privilege allows instance startup shutdown mount and dismount It allows the DBA to perform general database maintenance without viewing user data The SYSDBA privilege is the same as connect internal was in prior versions It provides the ability to do

everything unrestricted If orapwd has not yet been executed attempting to grant SYSDBA or SYSOPER privileges will result in the following error SQLgt grant sysdba to scott ORA-01994 GRANT failed cannot add users to public password file

The following steps can be performed to grant other users these privileges

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4. Confirm that the user is listed in the password file.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 86.31 percent

extra bytes read/written 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 0

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 0 bytes

total PGA used for manual workareas 0 bytes

16 rows selected

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
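The "cache hit percentage" statistic ties two of the other figures together: it is bytes processed divided by bytes processed plus extra bytes read/written. A quick sanity check in Python, with the two values copied from the listing above:

```python
# v$pgastat figures from the listing above
bytes_processed = 2492928000   # "bytes processed"
extra_bytes_rw = 395366400     # "extra bytes read/written"

# PGA cache hit % = processed / (processed + extra) * 100
cache_hit_pct = bytes_processed / (bytes_processed + extra_bytes_rw) * 100
print(round(cache_hit_pct, 2))  # 86.31, matching the reported cache hit percentage
```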

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET       PGA TARGET                      ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST          FACTOR     ADV BYTES PROCESSED  BYTES RW         CACHE HIT      ALLOC COUNT
---------------- ---------- --- ---------------- ---------------- -------------- --------------
12582912         0.5        ON  17250304         0                100.00         3
18874368         0.75       ON  17250304         0                100.00         3
25165824         1.0        ON  17250304         0                100.00         0
30198784         1.2        ON  17250304         0                100.00         0
35231744         1.4        ON  17250304         0                100.00         0
40264704         1.6        ON  17250304         0                100.00         0
45297664         1.8        ON  17250304         0                100.00         0
50331648         2.0        ON  17250304         0                100.00         0
75497472         3.0        ON  17250304         0                100.00         0
100663296        4.0        ON  17250304         0                100.00         0
150994944        6.0        ON  17250304         0                100.00         0
201326592        8.0        ON  17250304         0                100.00         0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
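The rule of thumb applied above, take the smallest advised target whose estimated over-allocation count is zero, can be expressed mechanically. A minimal Python sketch, with the rows transcribed from the v$pga_target_advice output shown earlier:

```python
# (pga_target_for_estimate, estimated over-allocation count),
# transcribed from the advice output above
advice = [
    (12582912, 3),   # 12M: 3 estimated over-allocations
    (18874368, 3),   # 18M: 3 estimated over-allocations
    (25165824, 0),   # 25M: none
    (30198784, 0),
]

# smallest advised target that avoids over-allocating
good_target = min(t for t, overalloc in advice if overalloc == 0)
print(good_target)  # 25165824 (the "25M" row)
```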

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem) max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem) max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of all current PGA usage per process

select
  sum(pga_used_mem) sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem) sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to get the user who issued a DROP TABLE command in a database:

1. Log in to the DB as SYSDBA.

2. SQL> show parameter audit_trail ---> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:

(a) shutdown immediate [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$; ---> to remove any audit trail data residing in the table
   SQL> audit table; ---> this starts auditing events pertaining to tables

4. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%'; ---> this query gives you the username along with the userhost from which the username is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MB
iq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
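The two definitions quoted from the OTN document can be made concrete with a toy model: a level 0 backup on Sunday, then a level 1 backup each weekday. The day names and block numbers below are illustrative only:

```python
# Blocks modified per day, after a level 0 backup on sunday (toy data).
changed = {"mon": {1, 2}, "tue": {3}, "wed": {2, 4}}
days = ["mon", "tue", "wed"]

def differential_level1(day):
    # Differential: blocks changed since the most recent level 1
    # (here, the previous day's backup).
    return set(changed[day])

def cumulative_level1(day):
    # Cumulative: all blocks changed since the sunday level 0.
    upto = days[: days.index(day) + 1]
    return set().union(*(changed[d] for d in upto))

print(sorted(differential_level1("wed")))  # [2, 4]
print(sorted(cumulative_level1("wed")))    # [1, 2, 3, 4]
```

Note how the cumulative backup re-copies block 3 even though Tuesday's level 1 already captured it, which is exactly the extra space and time cost the document describes.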

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear for me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256


Billy Verreynne | Posts: 4,016 | Registered: 5/27/99

Re: no of open cursor | Posted: Aug 26, 2007 10:33 PM in response to 174313

Reply

> how to resolve this if no of open cursors exceeds the value given in init.ora

The error is caused in the vast majority of cases by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors, but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away, allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a cursor-leaking application with 100s of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value, COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC;

Once the application has been identified using V$SESSION you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles and then trace and fix the problem in the application
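The grouping logic of that query (count handles per session and per SQL, flag counts above 2) is easy to mirror offline. The handle tuples below are made up for illustration:

```python
from collections import Counter

# (sid, address, hash_value) for each open cursor handle -- made-up sample,
# standing in for rows from v$open_cursor
open_cursors = [
    (101, "0xA1", 111), (101, "0xA1", 111), (101, "0xA1", 111),
    (102, "0xB2", 222),
]

# group by (sid, address, hash_value), keep groups with COUNT(*) > 2
copies = Counter(open_cursors)
leaky = {key: n for key, n in copies.items() if n > 2}
print(leaky)  # session 101 holds 3 handles for the same SQL
```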

Nagaraj: for performance tuning,

you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a statspack report generated, then you can have a look at the timed events.

This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): The Oracle Names server configuration file contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): The Connection Manager configuration file contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db' from 'backup/sybase/ctsintco/cso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.iqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

(Note that most filesystems do not actually permit hard links to directories.)

select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<ur path>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes

and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

>spool <ur path>objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<ur path>objects_move.log

Now check the objects in the xyz tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ';

Rebuild the indexes

and gather statistics for those objects.
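The spool trick above simply uses SQL to generate SQL. The same statement generation, sketched in Python over a hypothetical segment list (a real run would read these rows from dba_segments):

```python
# Hypothetical dba_segments rows: (segment_type, segment_name)
segments = [("TABLE", "EMP"), ("TABLE", "DEPT"), ("INDEX", "EMP_PK")]

def move_statements(segments, target_ts="xyz"):
    # Mirrors: select 'alter ' || segment_type || ' ' || segment_name
    #          || ' move tablespace xyz;' from dba_segments ...
    return [
        f"alter {seg_type.lower()} {seg_name} move tablespace {target_ts};"
        for seg_type, seg_name in segments
    ]

for stmt in move_statements(segments):
    print(stmt)
```

Note that the generated "move" form is not valid DDL for indexes (ALTER INDEX ... REBUILD TABLESPACE is what actually relocates an index), which is presumably why the document tells you to rebuild the indexes afterwards.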

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in initora It will enable trace for all sessions and the backgroundprocesses

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUSgt ALTER SYSTEM SET trace_enabled = TRUE

to stop trace run

SQLPLUSgt ALTER SYSTEM SET trace_enabled = FALSE

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_sessionset_sql_trace (TRUE)EXECUTE dbms_sessionset_sql_trace (FALSE)

- or -

EXECUTE dbms_supportstart_traceEXECUTE dbms_supportstop_trace

3 Enable trace in another session

Find out SID and SERIAL from v$session For example

SELECT FROM v$session WHERE osuser = OSUSER

to start trace

EXECUTE dbms_supportstart_trace_in_session (SID SERIAL)

to stop trace

EXECUTE dbms_supportstop_trace_in_session (SID SERIAL)

- or -

EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL TRUE)EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL FALSE)

Using orapwd to Connect Remotely as SYSDBAAugust 5 2003Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users By default the user SYS is the only user that has these privileges Creating a password file via orapwd enables remote users to connect with administrative privileges through SQLNet The SYSOPER privilege allows instance startup shutdown mount and dismount It allows the DBA to perform general database maintenance without viewing user data The SYSDBA privilege is the same as connect internal was in prior versions It provides the ability to do

everything unrestricted If orapwd has not yet been executed attempting to grant SYSDBA or SYSOPER privileges will result in the following error SQLgt grant sysdba to scott ORA-01994 GRANT failed cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information The file location will default to the current directory unless the full path is specified The contents are encrypted and are unreadable The password required is the one for the SYS user of the database The max_usersis the number of database users that can be granted SYSDBA or SYSOPER This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file

2 Edit the initora parameter remote_login_passwordfile This parameter must be set

to either SHARED or EXCLUSIVEWhen set to SHARED the password file can be used by multiple databases yet only the SYS user is recognized When set to EXCLUSIVE the file can be used by only one database yet multiple users can exist in the file The parameter setting can be confirmed by SQLgt show parameter password NAME TYPE VALUE----------------------------- ----------- ----------remote_login_passwordfile string EXCLUSIVE

3 Grant SYSDBA or SYSOPER to users When SYSDBA or SYSOPER privileges are

granted to a user that users name and privilege information are added to the password file

SQLgt grant sysdba to scott Grant succeeded

4 Confirm that the user is listed in the password file

SQLgt select from v$pwfile_users USERNAME SYSDBA SYSOPER------------------------------ ------ -------

SYS TRUE TRUESCOTT TRUE FALSE

Now the user SCOTT can connect as SYSDBA Administrative users can be connected and authenticated to a local or remote database by using the SQLPlus connect command They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause SQLgt connect scotttiger as sysdbaConnected

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified

Deep inside the operating system executables there are many utilities at the fingertips of Oracle professionals but until now there has been no advice on how to use these utilities From tnspingexe to dbvexe to wrapexe Dave Moore describes each utility and has working examples in the online code depot Your time savings from a single script is worth the price of this great book Get your copy of Oracle Utilities Using Hidden Programs ImportExport SQL Loader oradebug Dbverify Tkprof and More today and receive immediate access to the Online Code Depot

Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced Automatic PGA Memory Management may be used in place of setting the sort_area_size sort_area_retained_size sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with Those parameters may however still be used See the following for an interesting discussion on this topic

The Snark Research Mechanism

The PGA memory management may now controlled by just two parameters if thats how you choose to set it up

bull pga_aggregate_target bull workarea_size_policy

Note that work_area_size_policy can be altered per database session allowing manual memory management on a per session basis if needed eg a session is loading a large import file and a rather large sort_area_size is needed A logon trigger could be used to set the work_area_size policy for the account doing the import

A session is normally allowed to use up to approximately 5 of the PGA memory available This is controlled by the undocumented initialization parameter _smm_max_size This value is specified in kilobytes eg a value of 1000 really means 1000k As with all undocumented parameters dont expect help from Oracle support with it as you are not supposed to use it If you experiment with it do so on a test system

Also note that Automate PGA management can only be used for dedicated server sessions

For more some good reading on Automatic PGA management please see

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select

from v$pgastat

order by lower(name)

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 8631 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select

from v$pga_target_advice

order by pga_target_for_estimate

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This will show the maximum PGA usage per process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incretmental and cumulative incremental backups please

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but are still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
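For reference, the two flavors differ by a single keyword in the RMAN syntax; a minimal sketch (the thread itself does not show the commands):

```sql
-- Level 0 base backup: the starting point for both flavors
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Differential level 1 (the default): blocks changed since the most
-- recent level 1 or level 0 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative level 1: blocks changed since the most recent level 0 backup
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```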

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj
RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if there is, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
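The right values depend on how many Oracle processes the host must support. As a rough sanity check (the per-instance overhead of 10 below is an assumption, not from the thread; the exact formula is platform- and version-specific):

```shell
# Rule-of-thumb: each instance needs roughly its PROCESSES init
# parameter plus a small overhead in semaphores.
PROCESSES=150     # PROCESSES init parameter of each instance
INSTANCES=2       # Oracle instances on this host

NEEDED=$(( (PROCESSES + 10) * INSTANCES ))
echo "seminfo_semmns should be at least $NEEDED"
```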

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process

This is what I could find out from OTN and through Google.

Apparently, sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server, or between two servers. It is located on each machine in the network, and once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are:

1) Local Naming Configuration File (TNSNAMES.ORA)

2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA): contains the parameters that specify the location, domain information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA): contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.

RESTORE DATABASE '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
FROM 'backupsybasectsintcocso6csoasecso_ot'
RENAME IQ_SYSTEM_MAIN  TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
RENAME IQ_SYSTEM_MAIN1 TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
RENAME IQ_SYSTEM_MAIN2 TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
RENAME IQ_SYSTEM_TEMP  TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
RENAME IQ_SYSTEM_TEMP1 TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sys.sysiqfile;
sp_iqstatus

stop_asiq

RESTORE DATABASE '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db'
FROM '/sybdata1/dump/cso_ot.dmp'
RENAME IQ_SYSTEM_MAIN  TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
RENAME IQ_SYSTEM_MAIN1 TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
RENAME IQ_SYSTEM_MAIN2 TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
RENAME IQ_SYSTEM_TEMP  TO '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command: existing file first, destination file second. For example, to link the directory /export/space/common/archive to archive for easy access, use:

ln -s /export/space/common/archive archive

A hard link is a reference to a file that appears just like a file, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the data is NOT lost: it remains accessible through the hard link, and is only removed when the last hard link to it is deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing
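The behavioral difference between the two link types is easy to demonstrate; a small sketch using a scratch directory:

```shell
# Work in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp"

echo "important data" > original.txt

ln original.txt hard.txt      # hard link: a second name for the same inode
ln -s original.txt soft.txt   # symbolic link: a pointer to the name

rm original.txt               # delete the original name

cat hard.txt                  # still prints: important data
cat soft.txt 2>/dev/null || echo "dangling symlink"
```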

The syntax for creating a hard link of a directory looks the same; for example, to hard link /var/www/html to /var/www/webroot you would use the command below. Note, though, that most Unix filesystems refuse hard links to directories, so in practice use a symbolic link (ln -s) for directories.

ln /var/www/html /var/www/webroot

If you want to move all the objects in one tablespace to another tablespace, you can generate the statements with a spool file:

SQL> spool <your_path>/objects_move.log
SQL> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace XYZ;' from dba_segments where tablespace_name = 'RAKESH';
SQL> spool off

The result of the query is stored in the spool file objects_move.log. Run it:

SQL> @<your_path>/objects_move.log

Now check the objects in the XYZ tablespace:

SQL> SELECT segment_name FROM dba_segments WHERE tablespace_name = 'XYZ';

Finally, rebuild the indexes (indexes are moved with ALTER INDEX ... REBUILD TABLESPACE rather than MOVE) and gather statistics for the moved objects.

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

To disable trace:

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

To stop trace, run:

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

To start trace:

ALTER SESSION SET sql_trace = TRUE;

To stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace(TRUE);
EXECUTE dbms_session.set_sql_trace(FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

To start trace:

EXECUTE dbms_support.start_trace_in_session(<SID>, <SERIAL#>);

To stop trace:

EXECUTE dbms_support.stop_trace_in_session(<SID>, <SERIAL#>);

- or -

EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(<SID>, <SERIAL#>, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges results in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file by executing the following command:

$ orapwd file=<filename> password=<password> entries=<max_users>

The filename is the name of the file that will hold the password information. The file location defaults to the current directory unless the full path is specified. The contents are encrypted and unreadable. The password required is the one for the SYS user of the database. The max_users value is the number of database users that can be granted SYSDBA or SYSOPER; it should be set higher than the number of anticipated users to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3. Grant SYSDBA or SYSOPER to users. When SYSDBA or SYSOPER privileges are granted to a user, that user's name and privilege information are added to the password file:

SQL> grant sysdba to scott;
Grant succeeded.

4. Confirm that the user is listed in the password file:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba
Connected.

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users. The SYS password should never be shared and should be highly classified.


Oracle 9i Automatic PGA Memory Management

With Oracle 9i, a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size, and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

  • pga_aggregate_target
  • workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is required. A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000K. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
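The 5% rule is easy to sanity-check against real numbers; a sketch using the pga_aggregate_target value from the v$pgastat output later in this note (the 5% figure is the approximation quoted above, not an exact formula):

```shell
# ~5% per-session cap, expressed in KB as _smm_max_size would be.
PGA_AGGREGATE_TARGET=2516582400              # bytes (2400M)
PER_SESSION_KB=$(( PGA_AGGREGATE_TARGET * 5 / 100 / 1024 ))
echo "approximate per-session cap: ${PER_SESSION_KB}K"
```

The result (122880K, about 120M) lands close to the "global memory bound" figure of 125747200 bytes in the same v$pgastat output.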

Also note that automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                          VALUE UNIT
---------------------------------------- ---------- ------------
aggregate PGA auto target                 829440000 bytes
aggregate PGA target parameter           2516582400 bytes
bytes processed                          2492928000 bytes
cache hit percentage                          86.31 percent
extra bytes read/written                  395366400 bytes
global memory bound                       125747200 bytes
maximum PGA allocated                    2666188800 bytes
maximum PGA used for auto workareas        17203200 bytes
maximum PGA used for manual workareas      52531200 bytes
over allocation count                             0
PGA memory freed back to OS               675020800 bytes
total freeable PGA memory                   6553600 bytes
total PGA allocated                      2395750400 bytes
total PGA inuse                          1528320000 bytes
total PGA used for auto workareas                 0 bytes
total PGA used for manual workareas               0 bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

PGA TARGET PGA TARGET     ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
FOR EST    FACTOR     ADV BYTES PROCESSED BYTES RW      CACHE HIT %   ALLOC COUNT
---------- ------- ------ --------------- ------------- ------------- --------------
  12582912     0.5     ON        17250304             0        100.00              3
  18874368     0.75    ON        17250304             0        100.00              3
  25165824     1       ON        17250304             0        100.00              0
  30198784     1.2     ON        17250304             0        100.00              0
  35231744     1.4     ON        17250304             0        100.00              0
  40264704     1.6     ON        17250304             0        100.00              0
  45297664     1.8     ON        17250304             0        100.00              0
  50331648     2       ON        17250304             0        100.00              0
  75497472     3       ON        17250304             0        100.00              0
 100663296     4       ON        17250304             0        100.00              0
 150994944     6       ON        17250304             0        100.00              0
 201326592     8       ON        17250304             0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M pga_aggregate_target would have caused Oracle to allocate more memory than specified on 3 occasions; with a 25M target this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
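One way to read the advice view mechanically is to take the smallest target whose estimated over-allocation count is zero; a sketch over rows shaped like the output above (column 7 is the over-allocation count; in a real system you would feed the query result in, not a here-document):

```shell
# Print the smallest pga_aggregate_target candidate with zero
# estimated over-allocations. Rows mirror the advice output above.
awk '$7 == 0 { print $1; exit }' <<'EOF'
12582912 0.5 ON 17250304 0 100.00 3
18874368 0.75 ON 17250304 0 100.00 3
25165824 1 ON 17250304 0 100.00 0
30198784 1.2 ON 17250304 0 100.00 0
EOF
```

This prints 25165824, matching the 25M conclusion drawn in the text.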

There are other views that are also useful for PGA memory management

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process

This displays the sum of current PGA usage across all processes:

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incretmental and cumulative incremental backups please

Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the responseWhile I do believe you were on the right track I think you might have gotten some terms

mixed up According to some documentation on otn

There are two types of incremental backups 1) Differential Incremental Backups RMAN backsup all the blocks that have changed since thte most recent incremental backup at level 1 or level 0 For example in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (its a short one) you can find it at this site

httpdownloadoraclecomdocscdB19306_01backup102b14191rcmconc1005htm Suraj

RE Incremantal RMAN BackupsI Tried to explain you things in a very simple way I am not able to find anything I am missing

If yes please let me know

ORA-27154 postwait create failed gt gt ORA-27300 OS system dependent operationsemget failed with status 28 gt gt ORA-27301 OS failure message No space left on device

gt No space left on device sounds quite clear for me gt Maybe the disk where you want to create the database is full Another gt point colud be insufficient swap space but I would expect another error gt message for that

Note that the error message is linked to semget You seem to have run out of semaphores You configure the max number of semphores in etcsystem

set semsysseminfo_semmni=100 set semsysseminfo_semmns=1024 set semsysseminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 57: 7101564-Daily-Work

rename IQ_SYSTEM_TEMP1 to lsquosybdata1syb126IQcso_dataprepcso_dataprep02_newiqtmp

A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well

To create a symbolic link the syntax of the command is similar to a copy or move command

existing file first destination file second For example to link the directory

exportspacecommonarchive to archive for easy access use

ln -s exportspacecommonarchive archive

A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser

To create a hard link of the file exporthomefredstuff to vartmpthing use

ln exporthomefredstuff vartmpthing

The syntax for creating a hard link of a directory is the same To create a hard link of

varwwwhtml to varwwwwebroot use

ln varwwwhtml varwwwwebrootselect alter || segment_typesegment_name || move tablespace xyz from dba_segments where tablespace_name=RAKESH

gtspool off

result of the query will stores in the spool file objects_movelog

gtlturpathgtobjects_movelog

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name=XYZ

rebuild the indexes

and gather statistics for those objects

if u want to move all the objects to another tablesapce just do the following

gtspool lturpathgtobjects_movelog

gt select alter || segment_typesegment_name || move tablespace xyz from dba_segments where tablespace_name=RAKESH

gtspool off

result of the query will stores in the spool file objects_movelog

gtlturpathgtobjects_movelog

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name=XYZ

rebuild the indexes

and gather statistics for those objects

How to enable trace in Oracle

1 Enable trace at instance level

Put the following line in initora It will enable trace for all sessions and the backgroundprocesses

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUSgt ALTER SYSTEM SET trace_enabled = TRUE

to stop trace run

SQLPLUSgt ALTER SYSTEM SET trace_enabled = FALSE

2 Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_sessionset_sql_trace (TRUE)EXECUTE dbms_sessionset_sql_trace (FALSE)

- or -

EXECUTE dbms_supportstart_traceEXECUTE dbms_supportstop_trace

3 Enable trace in another session

Find out SID and SERIAL from v$session For example

SELECT FROM v$session WHERE osuser = OSUSER

to start trace

EXECUTE dbms_supportstart_trace_in_session (SID SERIAL)

to stop trace

EXECUTE dbms_supportstop_trace_in_session (SID SERIAL)

- or -

EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL TRUE)EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL FALSE)

Using orapwd to Connect Remotely as SYSDBAAugust 5 2003Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users By default the user SYS is the only user that has these privileges Creating a password file via orapwd enables remote users to connect with administrative privileges through SQLNet The SYSOPER privilege allows instance startup shutdown mount and dismount It allows the DBA to perform general database maintenance without viewing user data The SYSDBA privilege is the same as connect internal was in prior versions It provides the ability to do

everything unrestricted If orapwd has not yet been executed attempting to grant SYSDBA or SYSOPER privileges will result in the following error SQLgt grant sysdba to scott ORA-01994 GRANT failed cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information The file location will default to the current directory unless the full path is specified The contents are encrypted and are unreadable The password required is the one for the SYS user of the database The max_usersis the number of database users that can be granted SYSDBA or SYSOPER This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file

2 Edit the initora parameter remote_login_passwordfile This parameter must be set

to either SHARED or EXCLUSIVEWhen set to SHARED the password file can be used by multiple databases yet only the SYS user is recognized When set to EXCLUSIVE the file can be used by only one database yet multiple users can exist in the file The parameter setting can be confirmed by SQLgt show parameter password NAME TYPE VALUE----------------------------- ----------- ----------remote_login_passwordfile string EXCLUSIVE

3 Grant SYSDBA or SYSOPER to users When SYSDBA or SYSOPER privileges are

granted to a user that users name and privilege information are added to the password file

SQLgt grant sysdba to scott Grant succeeded

4 Confirm that the user is listed in the password file

SQLgt select from v$pwfile_users USERNAME SYSDBA SYSOPER------------------------------ ------ -------

SYS TRUE TRUESCOTT TRUE FALSE

Now the user SCOTT can connect as SYSDBA Administrative users can be connected and authenticated to a local or remote database by using the SQLPlus connect command They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause SQLgt connect scotttiger as sysdbaConnected

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified

Deep inside the operating system executables there are many utilities at the fingertips of Oracle professionals but until now there has been no advice on how to use these utilities From tnspingexe to dbvexe to wrapexe Dave Moore describes each utility and has working examples in the online code depot Your time savings from a single script is worth the price of this great book Get your copy of Oracle Utilities Using Hidden Programs ImportExport SQL Loader oradebug Dbverify Tkprof and More today and receive immediate access to the Online Code Depot

Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced Automatic PGA Memory Management may be used in place of setting the sort_area_size sort_area_retained_size sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with Those parameters may however still be used See the following for an interesting discussion on this topic

The Snark Research Mechanism

PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

  • pga_aggregate_target
  • workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g., a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes; e.g., a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.
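As a rough arithmetic illustration of that 5% rule (a sketch only — the actual algorithm is more involved and version-dependent), here is the per-session cap implied by the 2,516,582,400-byte aggregate target that appears in the v$pgastat output later in this section:

```python
# Hypothetical sketch of the ~5% per-session PGA cap; not Oracle's exact algorithm.
def session_pga_cap_bytes(pga_aggregate_target_bytes, fraction=0.05):
    """Approximate per-session work-area cap under automatic PGA management."""
    return int(pga_aggregate_target_bytes * fraction)

target = 2516582400  # "aggregate PGA target parameter" from the v$pgastat output
cap = session_pga_cap_bytes(target)
print(cap // (1024 * 1024), "MB")  # 120 MB
```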

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
  from v$pgastat
 order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                         829440000 bytes
aggregate PGA target parameter                   2516582400 bytes
bytes processed                                  2492928000 bytes
cache hit percentage                                  86.31 percent
extra bytes read/written                          395366400 bytes
global memory bound                               125747200 bytes
maximum PGA allocated                            2666188800 bytes
maximum PGA used for auto workareas                17203200 bytes
maximum PGA used for manual workareas              52531200 bytes
over allocation count                                     0
PGA memory freed back to OS                       675020800 bytes
total freeable PGA memory                           6553600 bytes
total PGA allocated                              2395750400 bytes
total PGA inuse                                  1528320000 bytes
total PGA used for auto workareas                         0 bytes
total PGA used for manual workareas                       0 bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
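The "cache hit percentage" in v$pgastat is derived from "bytes processed" and "extra bytes read/written"; recomputing it from the figures above reproduces the 86.31 percent shown (a sketch of the documented relationship):

```python
# Recompute v$pgastat's "cache hit percentage" from the two byte counters above.
bytes_processed = 2492928000
extra_bytes_rw = 395366400   # extra bytes read/written by one-pass/multi-pass work areas

cache_hit_pct = 100.0 * bytes_processed / (bytes_processed + extra_bytes_rw)
print(round(cache_hit_pct, 2))  # 86.31
```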

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET  PGA TARGET                       ESTIMATED EXTRA  ESTIMATED PGA  ESTIMATED OVER
FOR EST     FACTOR      ADV BYTES PROCESSED  BYTES RW         CACHE HIT      ALLOC COUNT
----------- ----------- --- ---------------- ---------------- -------------- --------------
   12582912         .50 ON          17250304                0         100.00              3
   18874368         .75 ON          17250304                0         100.00              3
   25165824        1.00 ON          17250304                0         100.00              0
   30198784        1.20 ON          17250304                0         100.00              0
   35231744        1.40 ON          17250304                0         100.00              0
   40264704        1.60 ON          17250304                0         100.00              0
   45297664        1.80 ON          17250304                0         100.00              0
   50331648        2.00 ON          17250304                0         100.00              0
   75497472        3.00 ON          17250304                0         100.00              0
  100663296        4.00 ON          17250304                0         100.00              0
  150994944        6.00 ON          17250304                0         100.00              0
  201326592        8.00 ON          17250304                0         100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18 MB PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25 MB PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
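The selection rule just described — pick the smallest advised target whose estimated over-allocation count is zero — can be sketched against the advisory rows shown above (hypothetical helper, not an Oracle API):

```python
# Rows as (pga_target_for_estimate, estd_overalloc_count), from v$pga_target_advice above.
advice = [
    (12582912, 3), (18874368, 3), (25165824, 0), (30198784, 0),
    (35231744, 0), (40264704, 0), (45297664, 0), (50331648, 0),
    (75497472, 0), (100663296, 0), (150994944, 0), (201326592, 0),
]

def smallest_safe_target(rows):
    """Smallest advised pga_aggregate_target with no estimated over-allocation."""
    return min(t for t, overalloc in rows if overalloc == 0)

print(smallest_safe_target(advice))  # 25165824 (the ~25 MB row)
```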

There are other views that are also useful for PGA memory management

v$process

select max(pga_used_mem)  max_pga_used_mem,
       max(pga_alloc_mem) max_pga_alloc_mem,
       max(pga_max_mem)   max_pga_max_mem
  from v$process;

This will show the maximum PGA usage of any single process.

select sum(pga_used_mem)  sum_pga_used_mem,
       sum(pga_alloc_mem) sum_pga_alloc_mem,
       sum(pga_max_mem)   sum_pga_max_mem
  from v$process;

This displays the sum of current PGA usage across all processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail      --> checks if the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else:
(a) shutdown immediate               [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put in the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. truncate table aud$;               --> removes any audit trail data residing in the table
   SQL> audit table;                  --> starts auditing events pertaining to tables

4. select action_name, username, userhost,
          to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
     from dba_audit_trail
    where action_name like 'DROP TABLE%';

This query gives you the username, along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
  iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
  message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
  temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
  iq page size 65536

system
temp                1000 MB
iq_system_main      2000 MB
iq_system_main2     1000 MB
iq_system_main3     5000 MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQ\dbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQ\dbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

If you do an incremental backup on Tuesday, you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If there is, please let me know.
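The distinction discussed in this thread can be sketched with a toy block-change simulation (hypothetical change sets, not RMAN itself, assuming one backup per day after a Sunday level 0): differential sizes track only each day's changes since the previous backup, while cumulative sizes grow until the next level 0.

```python
# Toy model: sets of changed block ids per day, after a Sunday level 0 backup.
changes = {
    "Mon": {1, 2},
    "Tue": {2, 3, 4},
    "Wed": {5},
}

def differential_sizes(daily):
    """Differential incremental: blocks changed since the LAST backup (taken daily)."""
    return {day: len(blocks) for day, blocks in daily.items()}

def cumulative_sizes(daily):
    """Cumulative incremental: blocks changed since the level 0 backup."""
    sizes, since_level0 = {}, set()
    for day, blocks in daily.items():
        since_level0 |= blocks
        sizes[day] = len(since_level0)
    return sizes

print(differential_sizes(changes))  # {'Mon': 2, 'Tue': 3, 'Wed': 1}
print(cumulative_sizes(changes))    # {'Mon': 2, 'Tue': 4, 'Wed': 5}
```

The cumulative run is larger because each one re-copies blocks already captured by earlier incrementals at the same level, which is exactly the space/time trade-off described above.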

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
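A quick sanity check one can apply to settings like these (a sketch only; the kernel's real constraints vary by release): semmns should not exceed semmni * semmsl, and should comfortably cover the instance's PROCESSES parameter.

```python
# Hypothetical sanity check of Solaris-style semaphore settings.
semmni, semmns, semmsl = 100, 1024, 256  # sets, total semaphores, max per set

def settings_consistent(semmni, semmns, semmsl, oracle_processes=0):
    # The total cannot exceed what the sets can hold,
    # and should cover the instance's PROCESSES parameter.
    return semmns <= semmni * semmsl and semmns >= oracle_processes

print(settings_consistent(semmni, semmns, semmsl, oracle_processes=300))  # True
```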

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.

Rebuild the indexes and gather statistics for those objects.

If you want to move all the objects to another tablespace, just do the following:

> spool <ur_path>/objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace XYZ;'
    from dba_segments where tablespace_name = 'RAKESH';
> spool off

The result of the query will be stored in the spool file objects_move.log.

> @<ur_path>/objects_move.log

Now check the objects in the XYZ tablespace:

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name = 'XYZ';

Rebuild the indexes and gather statistics for those objects.
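The spool trick above just generates one ALTER statement per segment; the generation step can be sketched outside the database as well (hypothetical segment list — in SQL*Plus the dba_segments query does this directly):

```python
# Build "alter ... move tablespace ..." statements for (type, name) segment pairs.
segments = [("TABLE", "EMP"), ("TABLE", "DEPT")]  # hypothetical dba_segments rows

def move_statements(segments, target_tablespace):
    return [
        f"alter {seg_type} {seg_name} move tablespace {target_tablespace};"
        for seg_type, seg_name in segments
    ]

for stmt in move_statements(segments, "XYZ"):
    print(stmt)
```

Note that in practice only tables are moved this way; indexes are relocated with ALTER INDEX ... REBUILD, which is why the passage ends by rebuilding the indexes.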

How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace

sql_trace = FALSE

- or -

To enable tracing without restarting the database, run the following command in SQL*Plus:

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace

ALTER SESSION SET sql_trace = TRUE;

to stop trace

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out the SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<OSUSER>';

to start trace

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL);

to stop trace

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as "connect internal" was in prior versions: it provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

The following steps can be performed to grant other users these privileges:

1. Create the password file. This is done by executing the following command:

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users value is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users, to prevent having to delete and recreate the password file.

2. Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

3 Grant SYSDBA or SYSOPER to users When SYSDBA or SYSOPER privileges are

granted to a user that users name and privilege information are added to the password file

SQLgt grant sysdba to scott Grant succeeded

4 Confirm that the user is listed in the password file

SQLgt select from v$pwfile_users USERNAME SYSDBA SYSOPER------------------------------ ------ -------

SYS TRUE TRUESCOTT TRUE FALSE

Now the user SCOTT can connect as SYSDBA Administrative users can be connected and authenticated to a local or remote database by using the SQLPlus connect command They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause SQLgt connect scotttiger as sysdbaConnected

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified

Deep inside the operating system executables there are many utilities at the fingertips of Oracle professionals but until now there has been no advice on how to use these utilities From tnspingexe to dbvexe to wrapexe Dave Moore describes each utility and has working examples in the online code depot Your time savings from a single script is worth the price of this great book Get your copy of Oracle Utilities Using Hidden Programs ImportExport SQL Loader oradebug Dbverify Tkprof and More today and receive immediate access to the Online Code Depot

Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced Automatic PGA Memory Management may be used in place of setting the sort_area_size sort_area_retained_size sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with Those parameters may however still be used See the following for an interesting discussion on this topic

The Snark Research Mechanism

The PGA memory management may now controlled by just two parameters if thats how you choose to set it up

bull pga_aggregate_target bull workarea_size_policy

Note that work_area_size_policy can be altered per database session allowing manual memory management on a per session basis if needed eg a session is loading a large import file and a rather large sort_area_size is needed A logon trigger could be used to set the work_area_size policy for the account doing the import

A session is normally allowed to use up to approximately 5 of the PGA memory available This is controlled by the undocumented initialization parameter _smm_max_size This value is specified in kilobytes eg a value of 1000 really means 1000k As with all undocumented parameters dont expect help from Oracle support with it as you are not supposed to use it If you experiment with it do so on a test system

Also note that Automate PGA management can only be used for dedicated server sessions

For more some good reading on Automatic PGA management please see

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select

from v$pgastat

order by lower(name)

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 8631 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select

from v$pga_target_advice

order by pga_target_for_estimate

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This will show the maximum PGA usage per process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incretmental and cumulative incremental backups please

Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the responseWhile I do believe you were on the right track I think you might have gotten some terms

mixed up According to some documentation on otn

There are two types of incremental backups 1) Differential Incremental Backups RMAN backsup all the blocks that have changed since thte most recent incremental backup at level 1 or level 0 For example in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (its a short one) you can find it at this site

httpdownloadoraclecomdocscdB19306_01backup102b14191rcmconc1005htm Suraj

RE Incremantal RMAN BackupsI Tried to explain you things in a very simple way I am not able to find anything I am missing

If yes please let me know

ORA-27154 postwait create failed gt gt ORA-27300 OS system dependent operationsemget failed with status 28 gt gt ORA-27301 OS failure message No space left on device

gt No space left on device sounds quite clear for me gt Maybe the disk where you want to create the database is full Another gt point colud be insufficient swap space but I would expect another error gt message for that

Note that the error message is linked to semget You seem to have run out of semaphores You configure the max number of semphores in etcsystem

set semsysseminfo_semmni=100 set semsysseminfo_semmns=1024 set semsysseminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 59: 7101564-Daily-Work

to start trace

ALTER SESSION SET sql_trace = TRUE

to stop trace

ALTER SESSION SET sql_trace = FALSE

- or -

EXECUTE dbms_sessionset_sql_trace (TRUE)EXECUTE dbms_sessionset_sql_trace (FALSE)

- or -

EXECUTE dbms_supportstart_traceEXECUTE dbms_supportstop_trace

3 Enable trace in another session

Find out SID and SERIAL from v$session For example

SELECT FROM v$session WHERE osuser = OSUSER

to start trace

EXECUTE dbms_supportstart_trace_in_session (SID SERIAL)

to stop trace

EXECUTE dbms_supportstop_trace_in_session (SID SERIAL)

- or -

EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL TRUE)EXECUTE dbms_systemset_sql_trace_in_session (SID SERIAL FALSE)

Using orapwd to Connect Remotely as SYSDBAAugust 5 2003Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users By default the user SYS is the only user that has these privileges Creating a password file via orapwd enables remote users to connect with administrative privileges through SQLNet The SYSOPER privilege allows instance startup shutdown mount and dismount It allows the DBA to perform general database maintenance without viewing user data The SYSDBA privilege is the same as connect internal was in prior versions It provides the ability to do

everything unrestricted If orapwd has not yet been executed attempting to grant SYSDBA or SYSOPER privileges will result in the following error SQLgt grant sysdba to scott ORA-01994 GRANT failed cannot add users to public password file

The following steps can be performed to grant other users these privileges

1 Create the password file This is done by executing the following command

$ orapwd file=filename password=password entries=max_users

The filename is the name of the file that will hold the password information The file location will default to the current directory unless the full path is specified The contents are encrypted and are unreadable The password required is the one for the SYS user of the database The max_usersis the number of database users that can be granted SYSDBA or SYSOPER This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file

2 Edit the initora parameter remote_login_passwordfile This parameter must be set

to either SHARED or EXCLUSIVEWhen set to SHARED the password file can be used by multiple databases yet only the SYS user is recognized When set to EXCLUSIVE the file can be used by only one database yet multiple users can exist in the file The parameter setting can be confirmed by SQLgt show parameter password NAME TYPE VALUE----------------------------- ----------- ----------remote_login_passwordfile string EXCLUSIVE

3 Grant SYSDBA or SYSOPER to users When SYSDBA or SYSOPER privileges are

granted to a user that users name and privilege information are added to the password file

SQLgt grant sysdba to scott Grant succeeded

4 Confirm that the user is listed in the password file

SQLgt select from v$pwfile_users USERNAME SYSDBA SYSOPER------------------------------ ------ -------

SYS TRUE TRUESCOTT TRUE FALSE

Now the user SCOTT can connect as SYSDBA Administrative users can be connected and authenticated to a local or remote database by using the SQLPlus connect command They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause SQLgt connect scotttiger as sysdbaConnected

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified

Deep inside the operating system executables there are many utilities at the fingertips of Oracle professionals but until now there has been no advice on how to use these utilities From tnspingexe to dbvexe to wrapexe Dave Moore describes each utility and has working examples in the online code depot Your time savings from a single script is worth the price of this great book Get your copy of Oracle Utilities Using Hidden Programs ImportExport SQL Loader oradebug Dbverify Tkprof and More today and receive immediate access to the Online Code Depot

Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced Automatic PGA Memory Management may be used in place of setting the sort_area_size sort_area_retained_size sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with Those parameters may however still be used See the following for an interesting discussion on this topic

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

• pga_aggregate_target
• workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed (e.g. a session is loading a large import file and a rather large sort_area_size is needed). A logon trigger could be used to set the workarea_size_policy for the account doing the import.
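A minimal sketch of that logon-trigger approach follows; the trigger name, the account name IMP_USER, and the 100 MB sort area are illustrative only, not from the original:

```sql
-- Hypothetical example: switch one account back to manual PGA management
-- at logon so its large sorts get a dedicated sort area.
CREATE OR REPLACE TRIGGER set_manual_pga_on_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'IMP_USER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET workarea_size_policy = MANUAL';
    EXECUTE IMMEDIATE
      'ALTER SESSION SET sort_area_size = 104857600';  -- 100 MB
  END IF;
END;
/
```

All other sessions continue to use automatic management under pga_aggregate_target; only the named account is affected.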

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has Statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
  from v$pgastat
 order by lower(name);

NAME                                                VALUE UNIT
---------------------------------------- ---------------- ------------
aggregate PGA auto target                     829,440,000 bytes
aggregate PGA target parameter              2,516,582,400 bytes
bytes processed                             2,492,928,000 bytes
cache hit percentage                                86.31 percent
extra bytes read/written                      395,366,400 bytes
global memory bound                           125,747,200 bytes
maximum PGA allocated                       2,666,188,800 bytes
maximum PGA used for auto workareas            17,203,200 bytes
maximum PGA used for manual workareas          52,531,200 bytes
over allocation count                                   0
PGA memory freed back to OS                   675,020,800 bytes
total freeable PGA memory                       6,553,600 bytes
total PGA allocated                         2,395,750,400 bytes
total PGA inuse                             1,528,320,000 bytes
total PGA used for auto workareas                       0 bytes
total PGA used for manual workareas                     0 bytes

16 rows selected.

The statistic "maximum PGA allocated" displays the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" display the maximum amount of PGA memory used for each type of workarea during the life of the instance.

v$pga_target_advice

select *
  from v$pga_target_advice
 order by pga_target_for_estimate;

PGA TARGET PGA TARGET                     ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER
   FOR EST     FACTOR ADV BYTES PROCESSED        BYTES RW   CACHE HIT %    ALLOC COUNT
---------- ---------- --- --------------- --------------- ------------- --------------
  12582912        .50 ON         17250304               0        100.00              3
  18874368        .75 ON         17250304               0        100.00              3
  25165824       1.00 ON         17250304               0        100.00              0
  30198784       1.20 ON         17250304               0        100.00              0
  35231744       1.40 ON         17250304               0        100.00              0
  40264704       1.60 ON         17250304               0        100.00              0
  45297664       1.80 ON         17250304               0        100.00              0
  50331648       2.00 ON         17250304               0        100.00              0
  75497472       3.00 ON         17250304               0        100.00              0
 100663296       4.00 ON         17250304               0        100.00              0
 150994944       6.00 ON         17250304               0        100.00              0
 201326592       8.00 ON         17250304               0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
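As a sketch, a target chosen from the advice output could be applied like this (the 25M value simply matches the factor-1.00 row above; use your own advisory results):

```sql
-- Instance-wide PGA target; SCOPE=BOTH assumes the instance uses an spfile.
ALTER SYSTEM SET pga_aggregate_target = 25M SCOPE = BOTH;

-- Automatic management also requires the auto workarea policy.
ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE = BOTH;
```

Both parameters are dynamic, so no restart is needed; re-check v$pgastat and v$pga_target_advice after the workload has run for a while.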

There are other views that are also useful for PGA memory management

v$process

select
    max(pga_used_mem)  max_pga_used_mem,
    max(pga_alloc_mem) max_pga_alloc_mem,
    max(pga_max_mem)   max_pga_max_mem
from v$process;

This shows the maximum PGA usage per process.

The following displays the sum of current PGA usage across all processes:

select
    sum(pga_used_mem)  sum_pga_used_mem,
    sum(pga_alloc_mem) sum_pga_alloc_mem,
    sum(pga_max_mem)   sum_pga_max_mem
from v$process;

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some canned scripts that may be of use:

PGA Monitoring Scripts

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail    --> checks whether the audit trail is turned on

If the output is:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; otherwise:
(a) shutdown immediate    [to enable the audit trail]
(b) edit the init.ora in $ORACLE_HOME/admin/pfile to add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;    --> removes any audit trail data residing in the table
   SQL> audit table;            --> starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost,
          to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
     from dba_audit_trail
    where action_name like 'DROP TABLE%';
   --> this query gives you the username along with the userhost from which the user is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
    iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq'
    iq size 2000
    message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
    temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp'
    temporary size 1000
    iq page size 65536

system
temp             1000MB
iq_system_main   2000MB
iq_system_main2  1000MB
iq_system_main3  5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
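In RMAN command syntax the two strategies look like this (a minimal sketch; differential is the default when neither keyword is given):

```sql
-- Base backup that any incremental strategy starts from:
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Differential incremental (the default): blocks changed since the most
-- recent level 1 or level 0 backup, whichever is newer.
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative incremental: blocks changed since the most recent level 0,
-- ignoring any intervening level 1 backups.
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```

Run these from the RMAN prompt while connected to the target database; the cumulative form trades larger backups for a simpler restore (level 0 plus one cumulative level 1).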

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If so, please let me know.

> ORA-27154: post/wait create failed
> ORA-27300: OS system dependent operation:semget failed with status: 28
> ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the max number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

Page 61: 7101564-Daily-Work

SYS TRUE TRUESCOTT TRUE FALSE

Now the user SCOTT can connect as SYSDBA Administrative users can be connected and authenticated to a local or remote database by using the SQLPlus connect command They must connect using their username and password and with the AS SYSDBA or AS SYSOPER clause SQLgt connect scotttiger as sysdbaConnected

The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users The SYS password should never be shared and should be highly classified

Deep inside the operating system executables there are many utilities at the fingertips of Oracle professionals but until now there has been no advice on how to use these utilities From tnspingexe to dbvexe to wrapexe Dave Moore describes each utility and has working examples in the online code depot Your time savings from a single script is worth the price of this great book Get your copy of Oracle Utilities Using Hidden Programs ImportExport SQL Loader oradebug Dbverify Tkprof and More today and receive immediate access to the Online Code Depot

Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced Automatic PGA Memory Management may be used in place of setting the sort_area_size sort_area_retained_size sort_area_hash_size and other related memory management parameters that all Oracle DBAs are familiar with Those parameters may however still be used See the following for an interesting discussion on this topic

The Snark Research Mechanism

The PGA memory management may now controlled by just two parameters if thats how you choose to set it up

bull pga_aggregate_target bull workarea_size_policy

Note that work_area_size_policy can be altered per database session allowing manual memory management on a per session basis if needed eg a session is loading a large import file and a rather large sort_area_size is needed A logon trigger could be used to set the work_area_size policy for the account doing the import

A session is normally allowed to use up to approximately 5 of the PGA memory available This is controlled by the undocumented initialization parameter _smm_max_size This value is specified in kilobytes eg a value of 1000 really means 1000k As with all undocumented parameters dont expect help from Oracle support with it as you are not supposed to use it If you experiment with it do so on a test system

Also note that Automate PGA management can only be used for dedicated server sessions

For more some good reading on Automatic PGA management please see

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed

If your 9i database is currently using manual PGA management there are views available to help you make a reasonable estimate for the setting

If your database also has statspack statistics then there is also historical information available to help you determine the setting

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat and by querying the v$pga_target_for_estimate view

v$pgastat

select

from v$pgastat

order by lower(name)

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 8631 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select

from v$pga_target_advice

order by pga_target_for_estimate

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This will show the maximum PGA usage per process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to find the user who issued a "drop table" command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail   -- checks whether the audit trail is turned on

If the output is:

NAME         TYPE    VALUE
------------ ------- -----
audit_trail  string  DB

then go to step 3. Otherwise:
(a) shutdown immediate   [to enable the audit trail]
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile and add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;   -- removes any audit trail data residing in the table
   SQL> audit table;   -- starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy hh24:mi:ss') from dba_audit_trail where action_name like 'DROP TABLE%';
   This query gives you the username along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms mixed up. According to some documentation on OTN, there are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1 backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.

If you would like to read the entire document (it's a short one), you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing. If there is, please let me know.
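For reference, both flavors discussed in this thread map directly onto RMAN syntax; a sketch (level choice illustrative, differential being the default when neither keyword is given):

```text
RMAN> backup incremental level 0 database;             # base level 0 backup
RMAN> backup incremental level 1 database;             # differential: blocks changed since most recent level 1 or 0
RMAN> backup incremental level 1 cumulative database;  # cumulative: blocks changed since most recent level 0
```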

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another point could be insufficient swap space, but I would expect another error message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
              • A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file is deleted, the information will be lost.
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process

Note that work_area_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the work_area_size_policy for the account doing the import.
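Such a logon trigger might look like the following sketch; the trigger name, the IMPORT_USER account, and the 100 MB sort area are all hypothetical:

```sql
-- Hypothetical logon trigger: switch one named account to manual
-- PGA management so a large sort_area_size can be given to imports.
create or replace trigger import_pga_logon
after logon on database
begin
  if user = 'IMPORT_USER' then  -- hypothetical import account
    execute immediate 'alter session set workarea_size_policy = manual';
    execute immediate 'alter session set sort_area_size = 104857600';  -- 100 MB
  end if;
end;
/
```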

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes, e.g. a value of 1000 really means 1000 KB. As with all undocumented parameters, don't expect help from Oracle Support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings and how to monitor and tune them as needed.

If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has Statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system, as seen in v$pgastat, and by querying the v$pga_target_advice view.

v$pgastat

select *
from v$pgastat
order by lower(name);

NAME                                     VALUE              UNIT
---------------------------------------- ------------------ ------------
aggregate PGA auto target                         829440000 bytes
aggregate PGA target parameter                   2516582400 bytes
bytes processed                                  2492928000 bytes
cache hit percentage                                  86.31 percent
extra bytes read/written                          395366400 bytes
global memory bound                               125747200 bytes
maximum PGA allocated                            2666188800 bytes
maximum PGA used for auto workareas                17203200 bytes
maximum PGA used for manual workareas              52531200 bytes
over allocation count                                     0
PGA memory freed back to OS                       675020800 bytes
total freeable PGA memory                           6553600 bytes
total PGA allocated                              2395750400 bytes
total PGA inuse                                  1528320000 bytes
total PGA used for auto workareas                         0 bytes
total PGA used for manual workareas                       0 bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistics "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
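The cache hit percentage in the listing above is itself derived from two of the other statistics; per the Oracle documentation it is bytes processed / (bytes processed + extra bytes read/written). A quick cross-check query:

```sql
-- Recompute the PGA cache hit percentage from its inputs in v$pgastat
select round(100 * bp.value / (bp.value + eb.value), 2) as pga_cache_hit_pct
from   v$pgastat bp,
       v$pgastat eb
where  bp.name = 'bytes processed'
and    eb.name = 'extra bytes read/written';
```

With the figures above this gives 100 * 2492928000 / 2888294400, i.e. 86.31, matching the reported value.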

v$pga_target_advice

select *
from v$pga_target_advice
order by pga_target_for_estimate;

      PGA TARGET PGA TARGET              ESTIMATED EXTRA  ESTIMATED PGA ESTIMATED OVER
         FOR EST     FACTOR ADV  BYTES PROCESSED BYTES RW     CACHE HIT    ALLOC COUNT
---------------- ---------- --- ---------------- -------- ------------- --------------
        12582912        .50 ON          17250304        0        100.00              3
        18874368        .75 ON          17250304        0        100.00              3
        25165824       1.00 ON          17250304        0        100.00              0
        30198784       1.20 ON          17250304        0        100.00              0
        35231744       1.40 ON          17250304        0        100.00              0
        40264704       1.60 ON          17250304        0        100.00              0
        45297664       1.80 ON          17250304        0        100.00              0
        50331648       2.00 ON          17250304        0        100.00              0
        75497472       3.00 ON          17250304        0        100.00              0
       100663296       4.00 ON          17250304        0        100.00              0
       150994944       6.00 ON          17250304        0        100.00              0
       201326592       8.00 ON          17250304        0        100.00              0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This will show the maximum PGA usage per process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incretmental and cumulative incremental backups please

Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the responseWhile I do believe you were on the right track I think you might have gotten some terms

mixed up According to some documentation on otn

There are two types of incremental backups 1) Differential Incremental Backups RMAN backsup all the blocks that have changed since thte most recent incremental backup at level 1 or level 0 For example in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (its a short one) you can find it at this site

httpdownloadoraclecomdocscdB19306_01backup102b14191rcmconc1005htm Suraj

RE Incremantal RMAN BackupsI Tried to explain you things in a very simple way I am not able to find anything I am missing

If yes please let me know

ORA-27154 postwait create failed gt gt ORA-27300 OS system dependent operationsemget failed with status 28 gt gt ORA-27301 OS failure message No space left on device

gt No space left on device sounds quite clear for me gt Maybe the disk where you want to create the database is full Another gt point colud be insufficient swap space but I would expect another error gt message for that

Note that the error message is linked to semget You seem to have run out of semaphores You configure the max number of semphores in etcsystem

set semsysseminfo_semmni=100 set semsysseminfo_semmns=1024 set semsysseminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 63: 7101564-Daily-Work

aggregate PGA auto target 829440000 bytes

aggregate PGA target parameter 2516582400 bytes

bytes processed 2492928000 bytes

cache hit percentage 8631 percent

extra bytes readwritten 395366400 bytes

global memory bound 125747200 bytes

maximum PGA allocated 2666188800 bytes

maximum PGA used for auto workareas 17203200 bytes

maximum PGA used for manual workareas 52531200 bytes

over allocation count 00

PGA memory freed back to OS 675020800 bytes

total freeable PGA memory 6553600 bytes

total PGA allocated 2395750400 bytes

total PGA inuse 1528320000 bytes

total PGA used for auto workareas 00 bytes

total PGA used for manual workareas 00 bytes

16 rows selected

The statistic maximum PGA allocated will display the maximum amount of PGA memory allocated during the life of the instance

The statistic maximum PGA used for auto workareas and maximum PGA used for manual workareas will display the maximum amount of PGA memory used for each type of workarea during the life of the instance

v$pga_target_advice

select

from v$pga_target_advice

order by pga_target_for_estimate

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This will show the maximum PGA usage per process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incretmental and cumulative incremental backups please

Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the responseWhile I do believe you were on the right track I think you might have gotten some terms

mixed up According to some documentation on otn

There are two types of incremental backups 1) Differential Incremental Backups RMAN backsup all the blocks that have changed since thte most recent incremental backup at level 1 or level 0 For example in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (its a short one) you can find it at this site

httpdownloadoraclecomdocscdB19306_01backup102b14191rcmconc1005htm Suraj

RE Incremantal RMAN BackupsI Tried to explain you things in a very simple way I am not able to find anything I am missing

If yes please let me know

ORA-27154 postwait create failed gt gt ORA-27300 OS system dependent operationsemget failed with status 28 gt gt ORA-27301 OS failure message No space left on device

gt No space left on device sounds quite clear for me gt Maybe the disk where you want to create the database is full Another gt point colud be insufficient swap space but I would expect another error gt message for that

Note that the error message is linked to semget You seem to have run out of semaphores You configure the max number of semphores in etcsystem

set semsysseminfo_semmni=100 set semsysseminfo_semmns=1024 set semsysseminfo_semmsl=256

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory It can be used just like the original file or directory A symbolic link appears in a long listing (ls -l) with a reference to the original filedirectory A symbolic link as opposed to a hard link is required when linking from one filesystem to another and can be used within a filesystem as well
              • A hard link is a reference to a file or directory that appears just like a file or directory not a link Hard links only work within a filesystem In other words dont use hard links between mounted filesystems A hard link is only a reference to the original file not a copy of the file If the original file is deleted the information will be lostuser
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
Page 64: 7101564-Daily-Work

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12582912 50 ON 17250304 0 10000 3

18874368 75 ON 17250304 0 10000 3

25165824 100 ON 17250304 0 10000 0

30198784 120 ON 17250304 0 10000 0

35231744 140 ON 17250304 0 10000 0

40264704 160 ON 17250304 0 10000 0

45297664 180 ON 17250304 0 10000 0

50331648 200 ON 17250304 0 10000 0

75497472 300 ON 17250304 0 10000 0

100663296 400 ON 17250304 0 10000 0

150994944 600 ON 17250304 0 10000 0

201326592 800 ON 17250304 0 10000 0

12 rows selected

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target As seen in the previous query an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions With a 25M PGA this would not have happened

Keep in mind that pga_aggregate_target is not set in stone It is used to help Oracle better manage PGA memory but Oracle will exceed this setting if necessary

There are other views that are also useful for PGA memory management

v$process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This will show the maximum PGA usage per process

select

max(pga_used_mem) max_pga_used_mem

max(pga_alloc_mem) max_pga_alloc_mem

max(pga_max_mem) max_pga_max_mem

from v$process

This displays the sum of all current PGA usage per process

select

sum(pga_used_mem) sum_pga_used_mem

sum(pga_alloc_mem) sum_pga_alloc_mem

sum(pga_max_mem) sum_pga_max_mem

from v$process

Be sure to read the documentation referenced earlier it contains an excellent explanation of Automatic PGA Memory Management

Following are some already canned scripts that may be of use

PGA Monitoring Scripts

These are the steps to get the user who issues drop table command in a database

1login into the db as sysdba

2 sqlgtshow parameter audit_trail - - - gtchecks if the audit trail is turned on

if the output is

NAME TYPE VALUE------------------------------------ ----------- ------------------------------audit_trail string DBthen go to step 3else2(a) shutdown immediate - - - - [to enable the audit trail](b) edit initora in the location $ORACLE_HOMEadminpfile to put the entry audit_trail=db(c) create spfile from pifle(c) startup

3 truncate table aud$ - - - gt to remove any audit trail data residing in the table3 sqlgtaudit table - - - gtthis starts auditing events pertaining to tables

4 select action_nameusernameuserhostto_char(timestampdd-mon-yyyyhh24miss) from dba_audit_trail where action_name like DROP TABLE - - - - gtthis query gives you the username along with the the userhos from where the username is connected

CREATE DATABASE sybdata1syb126IQcsoperfcsoperfdb iq path sybdata1syb126IQcsoperfcsoperf01iq iq size 2000message path sybdata1syb126IQcsoperfcsoperfiqmsgtemporary path sybdata1syb126IQcsoperfcsoperfiqtmp temporary size 1000iq page size 65536

system

temp 1000MB

iq_system_main 2000MB

iq_system_main2 1000MBiq_system_main3 5000MB

iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as EsybIQdbscsoperfcsoperf02iq IQ STORE size 1000gocreate dbspace IQ_SYSTEM_MAIN3 as EsybIQdbscsoperfcsoperf03iq IQ STORE size 1000go

http1023799289090applicationsdo

Can someone explain to me the difference between differential incretmental and cumulative incremental backups please

Oct 16 (1 day ago) Suraj

RE Incremantal RMAN BackupsA differential backup backs-up ONLY the files that changed since the last FULL BACKUP For

example suppose you do a full backup on Sunday On Monday you back up only the files that changed since Sunday on Tuesday you back up only the files that changed since Sunday and

so on until the next full backup

Differential backups are quicker than full backups because so much less data is being backed up But the amount of data being backed up grows with each differential backup until the next full back up Differential backups are more flexible than full backups but still unwieldy to do

more than about once a day especially as the next full backup approaches

Incremental backups also back up only the changed data but they only back up the data that has changed since the LAST BACKUP mdash be it a full or incremental backup They are sometimes

called differential incremental backups while differential backups are sometimes called cumulative incremental backups

Suppose if you do an incremental backup on Tuesday you only back up the data that changed since the incremental backup on Monday The result is a much smaller faster backup

Oct 16 (1 day ago) Arpan

Thanks for the responseWhile I do believe you were on the right track I think you might have gotten some terms

mixed up According to some documentation on otn

There are two types of incremental backups 1) Differential Incremental Backups RMAN backsup all the blocks that have changed since thte most recent incremental backup at level 1 or level 0 For example in a differential level 1

backup RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup If no level 1 is available RMAN copies all blocks changed

since the base level 0 backup

2) Cumulative Incremental Backups RMAN backs up all the blocks used since the most recent level 0 incremental backup Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level Cumulative backups require more space and time than differential backups however because

they duplicate the work done by previous backups at the same level

If you would like to read the entire document (its a short one) you can find it at this site

httpdownloadoraclecomdocscdB19306_01backup102b14191rcmconc1005htm Suraj

RE Incremantal RMAN BackupsI Tried to explain you things in a very simple way I am not able to find anything I am missing

If yes please let me know

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear to me. Maybe the disk where you want to create the database is full. Another possibility could be insufficient swap space, but I would expect a different error message for that.

Note that the error message is linked to semget: you seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
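The /etc/system lines above are Solaris syntax. On Linux the equivalent knob is the kernel.sem parameter; a sketch with illustrative values only (the four fields are SEMMSL, SEMMNS, SEMOPM, SEMMNI):

```
# /etc/sysctl.conf (Linux) -- example values, size them to your instance count
kernel.sem = 250 32000 100 128

# inspect current values:          cat /proc/sys/kernel/sem
# list allocated semaphore sets:   ipcs -s
```

Apply with `sysctl -p` (or reboot) before retrying database creation.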

  • Applies to
  • Symptoms
  • Changes
  • Cause
  • Solution
    • Guidelines for Using Partition-Level Import
      • Oracle Managed Files (OMF)
        • Managing Controlfiles Using OMF
        • Managing Redo Log Files Using OMF
        • Managing Tablespaces Using OMF
        • Default Temporary Tablespace
          • Auditing
            • Server Setup
            • Audit Options
            • View Audit Trail
            • Maintenance
            • Security
            • Oracle 10g linux TNS-12546 error
              • A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another, and can be used within a filesystem as well.
              • A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem; in other words, don't use hard links between mounted filesystems. A hard link is only a reference to the original file, not a copy of the file. If the original file name is deleted, the data remains reachable through the hard link; it is lost only when the last link to it is removed.
                • Oracle 9i Automatic PGA Memory Management
                  • v$pgastat
                  • v$pga_target_advice
                  • v$process
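The symbolic vs. hard link behaviour described above is easy to demonstrate from a shell (the file names and scratch directory are arbitrary):

```shell
set -e
demo=$(mktemp -d)            # scratch directory
cd "$demo"
echo "hello" > original.txt
ln original.txt hard.txt     # hard link: a second directory entry for the same inode
ln -s original.txt soft.txt  # symbolic link: a pointer to the path name
rm original.txt              # delete the original name
cat hard.txt                 # data is still accessible through the hard link
ls -l soft.txt               # the symlink now dangles: its target no longer exists
```

Because a hard link counts as a reference to the inode, the file's data is reclaimed only once every link to it has been removed.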

Keep in mind that pga_aggregate_target is not set in stone: it is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.

There are other views that are also useful for PGA memory management:

v$process

select
  max(pga_used_mem)  max_pga_used_mem,
  max(pga_alloc_mem) max_pga_alloc_mem,
  max(pga_max_mem)   max_pga_max_mem
from v$process;

This will show the maximum PGA usage per process.

select
  sum(pga_used_mem)  sum_pga_used_mem,
  sum(pga_alloc_mem) sum_pga_alloc_mem,
  sum(pga_max_mem)   sum_pga_max_mem
from v$process;

This displays the sum of all current PGA usage across processes.

Be sure to read the documentation referenced earlier; it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already-canned scripts that may be of use:

PGA Monitoring Scripts
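Alongside v$process, the v$pgastat and v$pga_target_advice views listed earlier are worth querying; a sketch against the 9i/10g data dictionary (column names as documented, output omitted):

```
select round(pga_target_for_estimate / 1024 / 1024) as target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
from   v$pga_target_advice
order  by pga_target_for_estimate;
```

A target size where estd_overalloc_count drops to zero and the estimated cache hit percentage levels off is a reasonable candidate for pga_aggregate_target.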

These are the steps to find the user who issued a DROP TABLE command in a database:

1. Log in to the database as SYSDBA.

2. SQL> show parameter audit_trail    -- checks whether the audit trail is turned on

If the output is:

NAME         TYPE     VALUE
------------ -------- ------
audit_trail  string   DB

then go to step 3; otherwise:
(a) shutdown immediate               -- to enable the audit trail
(b) edit init.ora in $ORACLE_HOME/admin/pfile and add the entry audit_trail=db
(c) create spfile from pfile
(d) startup

3. SQL> truncate table aud$;         -- removes any audit trail data residing in the table
   SQL> audit table;                 -- starts auditing events pertaining to tables

4. SQL> select action_name, username, userhost,
          to_char(timestamp, 'dd-mon-yyyy hh24:mi:ss')
        from dba_audit_trail
        where action_name like 'DROP TABLE%';

This query gives you the username along with the userhost from which that user was connected.

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\syb\IQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\syb\IQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups
A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday; on Tuesday you back up only the files that changed since Sunday; and so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up, but the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but are still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they back up only the data that has changed since the LAST BACKUP, be it a full or incremental backup. They are sometimes called differential incremental backups, while differential backups are sometimes called cumulative incremental backups.