Database Tuning

Toad Configuration Settings

posted Dec 20, 2011, 9:56 AM by Sachchida Ojha

Oracle Client Version :
Oracle Client DLL : C:\oracli\11.2.0\BIN\oci.dll
TNSNAMES.ora Location : C:\oracli\11.2.0\Network\Admin\tnsnames.ora
SQLPATH : C:\oracli\11.2.0\dbs

Connection 1 : snojha@testdb.WORLD   ** Active Session **
Oracle Server Version : Oracle Database 11g Enterprise Edition Release - 64bit Production
Oracle Session User : R2USCO (Normal)

Operating System : Windows XP (Build 2600) Service Pack 3
Is Terminal Server: No
Using XP Theme: No

Physical Memory Load : 71.70 %
Physical Memory (Total) : 2035 MB
Physical Memory (Free) : 576 MB
Virtual Memory (Total) : 2047 MB
Virtual Memory (Free) : 1817 MB
Swap Space (Total) : 3915 MB
Swap Space (Free) : 2623 MB

Environment Variable (PATH) :
     C:\Program Files\RSA SecurID Token Common
     C:\Program Files\Business Objects\Common\3.5\bin\NOTES\
     C:\Program Files\Business Objects\Common\3.5\bin\NOTES\DATA\
     c:\Program Files\Common Files\Roxio Shared\DLLShared\
     C:\Program Files\IBM\RationalSDLC\ClearCase\bin
     C:\Program Files\IBM\RationalSDLC\common
     C:\Program Files\Windows Imaging\

Environment Variable (TNS_ADMIN) : TNS_ADMIN is not a declared environment variable.

(Oracle Root)
     inst_loc = C:\Program Files\Oracle\Inventory
KEY_Oracle11gr2 ***** This is the home currently used by Toad *****
     ORACLE_HOME = C:\oracli\11.2.0
     ORACLE_HOME_NAME = Oracle11gr2
     ORACLE_GROUP_NAME = Oracle - Oracle11gr2
     ORACLE_BUNDLE_NAME = Enterprise
     OLEDB = C:\oracli\11.2.0\oledb\mesg
     OO4O = C:\oracli\11.2.0\oo4o\mesg
     MSHELP_TOOLS = C:\oracli\11.2.0\MSHELP
     SQLPATH = C:\oracli\11.2.0\dbs

AWR Reports - Useful Operations

posted Aug 31, 2011, 7:27 AM by Sachchida Ojha


Modifying Snapshot Settings:

  BEGIN
    DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings (
      retention => 43200,        -- Minutes (43200 = 30 Days).
                                 -- Current value retained if NULL.
      interval  => 30);          -- Minutes. Current value retained if NULL.
  END;
  /

Creating a Baseline:

  BEGIN
    DBMS_WORKLOAD_REPOSITORY.create_baseline (
      start_snap_id => 10,
      end_snap_id   => 100,
      baseline_name => 'AWR First baseline');
  END;
  /

In 11g, the newly introduced procedure DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE specifies a template for how baselines should be created for future time periods:

  BEGIN
    DBMS_WORKLOAD_REPOSITORY.create_baseline_template (
      start_time    => to_date('&start_date_time','&start_date_time_format'),
      end_time      => to_date('&end_date_time','&end_date_time_format'),
      baseline_name => 'MORNING',
      template_name => 'MORNING',
      expiration    => NULL);
  END;
  /

"expiration => NULL" means that this baseline will be kept forever.

Dropping an AWR baseline:

  BEGIN
    DBMS_WORKLOAD_REPOSITORY.drop_baseline (
      baseline_name => 'AWR First baseline');
  END;
  /

You can also drop a baseline belonging to another database by specifying its DBID:

  BEGIN
    DBMS_WORKLOAD_REPOSITORY.drop_baseline (
      baseline_name => 'peak baseline',
      cascade       => FALSE,
      dbid          => 3310949047);
  END;
  /

Dropping the AWR snaps in range:
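A minimal sketch using the standard DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE procedure (the snapshot IDs here are placeholders):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
    low_snap_id  => 10,    -- placeholder: lowest snapshot id to drop
    high_snap_id => 100);  -- placeholder: highest snapshot id to drop
END;
/
```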


We can also specify a template for creating and dropping baselines based on repeating time periods:

  BEGIN
    DBMS_WORKLOAD_REPOSITORY.create_baseline_template (
      day_of_week          => 'MONDAY',
      hour_in_day          => 9,
      duration             => 3,
      start_time           => to_date('&start_date_time','&start_date_time_format'),
      end_time             => to_date('&end_date_time','&end_date_time_format'),
      baseline_name_prefix => 'MONDAY_MORNING',
      template_name        => 'MONDAY_MORNING',
      expiration           => 30);
  END;
  /

The baseline will be generated for the period from '&start_date_time' to '&end_date_time' every Monday.

Creating a SNAPSHOT Manually:
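A snapshot can be taken at any time with DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_snapshot;
END;
/
```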


Workload Repository Views

The following workload repository views are available:

  • V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.
  • V$METRIC - Displays metric information.
  • V$METRICNAME - Displays the metrics associated with each metric group.
  • V$METRIC_HISTORY - Displays historical metrics.
  • V$METRICGROUP - Displays all metrics groups.
  • DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.
  • DBA_HIST_BASELINE - Displays baseline information.
  • DBA_HIST_DATABASE_INSTANCE - Displays database environment information.
  • DBA_HIST_SNAPSHOT - Displays snapshot information.
  • DBA_HIST_SQL_PLAN - Displays SQL execution plans.
  • DBA_HIST_WR_CONTROL - Displays AWR settings.

Automation of AWR Reports

There is no built-in product functionality to produce AWR reports in an automated way; as shipped, producing a report is a manual step. However, the DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT function can be called from a script (for instance, a bash script on Unix-like systems) scheduled via cron, with the required snapshot arguments either supplied or determined by the script. Such scripts are available on the web, but they are not officially provided by Oracle as part of the product.
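As a hedged sketch (the output location and the use of "/ as sysdba" are assumptions), such a script could call AWR_REPORT_TEXT from SQL*Plus for the two most recent snapshots:

```shell
#!/bin/bash
# Sketch only: generate a text AWR report for the last snapshot interval.
# Assumes sqlplus is on PATH and OS authentication ("/ as sysdba") works.
OUTFILE=/tmp/awr_$(date +%Y%m%d_%H%M).txt

sqlplus -s / as sysdba <<EOF
set heading off pagesize 0 linesize 1500 trimspool on feedback off
spool $OUTFILE
select output
from table(dbms_workload_repository.awr_report_text(
       (select dbid from v\$database),
       (select instance_number from v\$instance),
       (select max(snap_id) - 1 from dba_hist_snapshot),
       (select max(snap_id) from dba_hist_snapshot)));
spool off
EOF
```

Scheduled from cron, this produces one report per run without manual interaction.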

Generating Various Types of AWR Reports

posted Aug 31, 2011, 7:17 AM by Sachchida Ojha   [ updated Feb 2, 2012, 6:36 PM ]

Running a Basic Report

With appropriate licenses for AWR, you may generate an AWR report by executing the awrrpt.sql script ($ORACLE_HOME/rdbms/admin/awrrpt.sql) and picking the two snapshots you want to use for the sample.

Depending on the reasons for collecting the report, the default can be used, or for a more focused view, a short 10-15 minute snapshot could be used.

You will also be asked for the format of the report (text or html) along with the report name.

AWR reports can be generated by running various SQL scripts (in $ORACLE_HOME/rdbms/admin) to satisfy various requirements. Each report is available in HTML or text format:

awrrpt.sql    Displays various statistics for a range of snapshot IDs.

awrrpti.sql   Displays statistics for a range of snapshot IDs on a specified database and instance.

awrsqrpt.sql  Displays statistics of a particular SQL statement for a range of snapshot IDs. Run this report to inspect or debug the performance of a particular SQL statement.

awrsqrpi.sql  Displays statistics of a particular SQL statement for a range of snapshot IDs on a specified database and instance.

awrddrpt.sql  Compares detailed performance attributes and configuration settings between two selected time periods.

awrddrpi.sql  Compares detailed performance attributes and configuration settings between two selected time periods on a specified database and instance.
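For instance, the basic report script is typically launched from SQL*Plus as a privileged user (the "?" shorthand expands to $ORACLE_HOME):

```shell
sqlplus / as sysdba @?/rdbms/admin/awrrpt.sql
# The script prompts for the report type (html or text), the number of
# days of snapshots to list, the begin and end snapshot ids, and the
# report file name.
```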


Transporting a SQL Tuning Set Between Databases

posted Aug 30, 2011, 8:30 AM by Sachchida Ojha

The SQL Tuning Set is named small_sh_sts_4 and resides on the source database server.
It contains 4 SQL statements.

select sql_id, substr(sql_text,1, 15) text
from dba_sqlset_statements
where sqlset_name = 'small_sh_sts_4'
order by sql_id;

SQL_ID        TEXT
------------- -------------------------
4qdz7j26mdwzb SELECT /*+ my_q
6y289t15dqj9r SELECT /*+ my_q
ckm14c67njf0q SELECT /*+ my_q
g37muqb81wjau SELECT /*+ my_q

1) Create the staging table ON THE SOURCE SYSTEM (in this case):

execute DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'TEST');

PL/SQL procedure successfully completed.

SQL> select count(*) from test;

  COUNT(*)
----------
         0
2) Populate the table TEST using DBMS_SQLTUNE.PACK_STGTAB_SQLSET ON THE SOURCE SYSTEM (in this case):

execute DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'small_sh_sts_4',staging_table_name => 'TEST');

PL/SQL procedure successfully completed.

SQL> select count(*) from test;

  COUNT(*)
----------
         4

3) Export the table TEST on THE SOURCE SYSTEM (in this case),
move it to the Destination Server, and import it.
The staging table TEST can also be moved using
the mechanism of choice, such as Data Pump or a database link.
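As a hedged sketch, the move could be done with Data Pump (the directory object, dump file name, and credentials are assumptions):

```shell
# On the source system: export the staging table
expdp sh/sh tables=TEST directory=DATA_PUMP_DIR dumpfile=sts_test.dmp logfile=sts_exp.log

# Copy sts_test.dmp to the destination server's DATA_PUMP_DIR, then:
impdp sh/sh tables=TEST directory=DATA_PUMP_DIR dumpfile=sts_test.dmp logfile=sts_imp.log
```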

While exporting the table you will see something like

About to export specified tables via Conventional Path ...
Table(T) or Partition(T:P) to be exported: (RETURN to quit) > TEST

. . exporting table TEST 4 rows exported
. . exporting table TEST_CBINDS 0 rows exported
. . exporting table TEST_CPLANS 34 rows exported

Import on the Destination system (in this case):

While Importing the table you will see something like

. importing SH's objects into SH
. . importing table "TEST" 4 rows imported
. . importing table "TEST_CBINDS" 0 rows imported
. . importing table "TEST_CPLANS" 34 rows imported
Import terminated successfully without warnings.

SQL> select count(*) from test;

  COUNT(*)
----------
         4

Verify the contents of DBA_SQLSET or USER_SQLSET on the Destination system.


4) Unpack the table using DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET on the Destination system (in this case):

execute DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET( sqlset_name => '%',-
replace => TRUE,-
staging_table_name => 'TEST');


SQL> select name, created, statement_count from dba_sqlset;

NAME                           CREATED   STATEMENT_COUNT
------------------------------ --------- ---------------
small_sh_sts_4                 24-AUG-07               4

Using SQL tracing in the Oracle Tools

posted Jul 27, 2011, 11:52 AM by Sachchida Ojha

This article documents the use of SQL tracing in the various Oracle tools.
This is not to be confused with SQL*Net tracing which shows all database
operations sent through the SQL*Net connections.

In all cases, the trace files will be deposited in the location specified by
the init.ora parameter USER_DUMP_DEST.  It should be noted that this is given a
default value under UNIX, which can be seen from inside SQL*DBA.  This is
not set by default in a PC/Netware environment, so must be explicitly set.
Otherwise, no trace files will be produced.  Under VMS, this parameter may be
overwritten by setting the ORA_DUMP logical.  In all cases, ensure that the 
write permissions are set correctly for the destination directory.

Also, for more readable output, the tkprof utility can be invoked on the trace
file.  The use of tkprof is documented in Appendix B of the Application
Developers Guide for Oracle 7 and in Section 7 of the Performance Tuning Guide
for Oracle 6.

The trace facility may be turned on at the database level, by inserting the
following line in the init.ora file:

                           sql_trace = true

This will trace all database activity whilst the database is running.  Only use
this method if all transactions need to be monitored, since a large number of
trace files can be generated in this way.  In order to take effect, the 
parameter needs to be added to the init.ora file and the database restarted. 
Remember to remove the parameter and restart the instance, once the required 
trace information has been collected.


SQL*Plus

   Tracing can be set by issuing the command:
                    SQL> alter session set sql_trace = true;
   Once the transactions requiring tracing have executed, tracing can be turned
   off by issuing the command:
                    SQL> alter session set sql_trace = false;


PL/SQL

   In Oracle 6, trace may be invoked within SQL*Plus as above, prior to running
   the PL/SQL script.  In Oracle 7 the same method may be used, or the stored
   procedure dbms_session.set_sql_trace() may be invoked with the syntax:
                    dbms_session.set_sql_trace(TRUE);
   This may be inserted at the first line of the PL/SQL script.  If this package
   or procedure cannot be found, then it may need to be created by the dba
   running the dbmsutil.sql script as the user sys.  This dbms_session package
   is documented on page A-2 of the Version 7 Application Developers Guide.
   It is advisable to turn off tracing once the required statements have run.

   From within SQL*Plus:
                SQL> alter session set sql_trace = false;
   From within PL/SQL after the transactions requiring tracing have executed:
                dbms_session.set_sql_trace(FALSE);
   Either of these statements will ensure that no more trace output than is
   necessary is generated.

Forms 2.3

   Tracing can be set by creating a user defined trigger called trace_on.  For
   example, containing only the line:

                 alter session set sql_trace = true
   This trigger can be called from a key-startup trigger as follows:
                 #exemacro exetrg trace_on;
   Once this trigger has fired, then all further database actions will be
   traced and recorded in the trace file.  This can be useful if errors are 
   occurring saving the form to the database, as tracing can be invoked
   inside the form followed by the save/load/generate operation that results 
   in error(s) and all database actions will still be stored in the trace file.

Forms 3

   There is a statistics option for a runform session, which can either be
   invoked from the options screen of the designer or by using the -s flag
   when calling runform from the operating system prompt.  This statistics
   option is documented on page 23-5 of the SQL*Forms 3 Reference Manual.
   If it becomes necessary to trace the actions when a form is saved/loaded
   to/from the database, then a form must be run from the designer with
   statistics on and then the database action performed.  If the session is 
   running against an Oracle 7 database then an alternative to using the 
   statistics option is to use:
                    dbms_session.set_sql_trace(TRUE);
   This is to be used as a line inside a trigger or PL/SQL procedure, usually 
   the pre-form trigger.

Forms 4

   There is a statistics option which is documented on page 1-18 of the Forms 4
   Reference Manual Volume 1.  The packaged procedure dbms_session.set_sql_trace()
   may also be invoked inside a when-new-form-instance trigger, similarly to the
   method described in the previous section.

Graphics 2

   Create the following PL/SQL program unit:
                  procedure set_trace is
                  begin
                    dbms_session.set_sql_trace(TRUE);
                  end;
   Next, define the open display trigger to invoke this procedure.  If an open
   display trigger has already been defined, then just adding the following line
   will invoke the trace facility:
                  set_trace;
   However, note that much more sophisticated debugging methods are available
   within the PL/SQL debugger.

Reportwriter 1.1

   Tracing can be invoked from inside a report by creating a field with the
   following settings:
                      Name   -> sql_trace
                      Source -> &SQL alter session set sql_trace = true
                      Group  -> REPORT
                      Width  -> 40
   Ensure that the field label is deleted, the skip field is set on, and that
   there is no reference to this field on the text screens.
   Running the report will enable the tracing, which will continue until the
   reports session is ended.
   However, there is a known BUG 190946, which states that setting alter session
   set sql_trace = true inside a report produces a trace file that only has a
   trace of the alter session statement.  This bug was first noted in one
   version and fixed in a later version.  This basically means that a report
   cannot set a trace onto itself, and hence the following workaround is
   suggested: first, create and execute a dummy trace report and then execute
   the real report.  This means that tracing can only be done from within the
   designer, as both reports cannot be run with a single runrep command (and
   therefore in the same session).
   To create the trace report follow these steps:

      1. Create a report with 1 field and 1 query as in:
         Field: Name   -> sql_trace
                Source -> &SQL alter session set sql_trace = true
                Group  -> Report
                Width  -> 40
         Query: Anything, for example, select 'a' from dual.
         It may be preferable to ensure that this report writes to a null file 
         to ensure that there is no output generated.

      2. Each time a report is to be traced, follow these instructions:
         a. Start the designer
         b. Execute this trace report
         c. Execute the report to be traced.

   This method has the advantage that the report being traced does not have to 
   be altered in any way.
   Bear in mind that once the sql_trace has been enabled, it will remain enabled
   until the end of the reports session.  This means that in the event of 
   multiple reports beginning debug it is advisable to exit the reports designer
   after each report has been traced, thus resulting in a separate trace file 
   for each report.
Reports 2

   There are 2 methods for setting sql_trace here.  Firstly, use in the
   Before-Report trigger:
            srw.do_sql('alter session set sql_trace = true');
   Secondly, create a function column at report level, with a formula of:
            srw.do_sql('alter session set sql_trace = true')
   and a return type of char.  Ensure that this column is not displayed on the
   report layout.

Precompilers

   Trace can be invoked by including the following line in the source program:
                 exec sql alter session set sql_trace = true;
   Tracing can also be turned off by using:
                 exec sql alter session set sql_trace = false;

TKPROF Usage - Quick Reference

posted Jul 27, 2011, 11:51 AM by Sachchida Ojha

Quick Reference to Using TKPROF

  TKPROF allows you to analyse a trace file to determine where time is 
  being spent and what query plans are being used on SQL statements.

   1 - Set TIMED_STATISTICS if required at database level.
   2 - Get SQL_TRACE output for the session you wish to monitor
       (See Note:15160.1 for how to set this in various tools)

   3 - Find the appropriate trace file (In USER_DUMP_DEST, default
       $ORACLE_HOME/rdbms/log on Unix). 
       You can find the most recent trace files on Unix with the command:
         ls -ltr
       This will list the most recent files LAST

   4 - Run tkprof on the trace file thus:

         tkprof tracefile outfile [explain=user/password] [options...]

TKPROF Options
   print=integer    List only the first 'integer' SQL statements.
   insert=filename  List SQL statements and data inside INSERT statements.
   sys=no           TKPROF does not list SQL statements run as user SYS.
   record=filename  Record statements found in the trace file.
   sort=option      Set of zero or more of the following sort options:

     prscnt  number of times parse was called
     prscpu  cpu time parsing
     prsela  elapsed time parsing
     prsdsk  number of disk reads during parse
     prsqry  number of buffers for consistent read during parse
     prscu   number of buffers for current read during parse
     prsmis  number of misses in library cache during parse

      execnt  number of times execute was called
     execpu  cpu time spent executing
     exeela  elapsed time executing
     exedsk  number of disk reads during execute
     exeqry  number of buffers for consistent read during execute
     execu   number of buffers for current read during execute
     exerow  number of rows processed during execute
     exemis  number of library cache misses during execute

     fchcnt  number of times fetch was called
     fchcpu  cpu time spent fetching
     fchela  elapsed time fetching
     fchdsk  number of disk reads during fetch
     fchqry  number of buffers for consistent read during fetch
     fchcu   number of buffers for current read during fetch
     fchrow  number of rows fetched

     userid  userid of user that parsed the cursor
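For example, to see only the ten statements with the highest elapsed fetch and execute time, excluding recursive SYS statements (the trace file name is a placeholder):

```shell
# trace file and output file names are placeholders
tkprof orcl_ora_12345.trc orcl_ora_12345.prf sort=fchela,exeela print=10 sys=no
```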

RMAN Performance Tuning Using Buffer Memory Parameters

posted Jul 26, 2011, 10:54 AM by Sachchida Ojha

The purpose of RMAN performance tuning is to identify the bottlenecks for a given backup or restore job and use RMAN commands, initialization parameters, or adjustments to physical media to improve overall performance.
As database sizes continue to grow unabated, with tens to hundreds of terabytes prevalent in customer’s environments, server and storage systems have also dramatically increased in resource, network, and I/O capacity to support required database backup and restore windows. Against this backdrop, RMAN performance tuning becomes more important than ever.
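As a hedged starting point for the buffer-memory angle: V$BACKUP_ASYNC_IO exposes waits for asynchronous backup I/O, and when I/O slaves are in use RMAN buffers are allocated from the shared pool, or from the large pool if one is configured (the 64M figure below is an assumption, not a recommendation):

```sql
-- Look for files whose backup I/O shows long waits; high LONG_WAITS
-- relative to IO_COUNT points at the storage rather than at RMAN.
select filename, io_count, long_waits, long_wait_time_total
from   v$backup_async_io
where  long_waits > 0
order  by long_waits desc;

-- Configure a large pool so slave I/O buffers stop competing
-- with the shared pool (size is an assumption):
alter system set large_pool_size = 64M;
```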

Oracle Performance Diagnostic Guide

posted Jul 11, 2011, 7:47 AM by Sachchida Ojha

Welcome to the Oracle Performance Diagnostic Guide.
This guide is intended to help you resolve query tuning, hang/locking, and slow database issues.
The guide is not an automated tool but rather seeks to show methodologies, techniques, common causes, and solutions to performance problems.

10g New Functionality of V$SESSION and WAIT MODEL

posted Mar 9, 2011, 11:23 AM by Sachchida Ojha

In prior releases, we joined v$session_wait with v$session to determine a session's waits.
From Oracle 10g, all wait event columns from v$session_wait have been added
to v$session, improving performance by eliminating the overhead of the join.

1. New columns of V$SESSION related to the enhanced wait model

(1) All V$SESSION_WAIT columns, such as EVENT, P1TEXT, P1, P1RAW, P2TEXT, P2, P2RAW, P3TEXT, P3, P3RAW, WAIT_TIME, SECONDS_IN_WAIT, and STATE.


(2) Brand new columns
BLOCKING_SESSION : Session identifier of blocking session
WAIT_CLASS# : Wait class number
WAIT_CLASS : Name of the wait class.

2. How to determine a blocking session using only V$SESSION?

session 1 : update dept set loc='SEOUL' where deptno=10;
session 2 : update dept set loc='SEOUL' where deptno=10;

(1) Finding Blocking Session

--- blockings.sql
col program format a40
col username format a10

select s.sid blocker, substr(s.program,1,40) program, w.username, w.sid blocked
from v$session s, v$session w
where w.blocking_session = s.sid
and w.blocking_session_status='VALID';

   BLOCKER PROGRAM                                  USERNAME      BLOCKED
---------- ---------------------------------------- ---------- ----------
        14 sqlplus@dhcp-samhwa-152-69-41-87 (TNS V1 SCOTT               9


(2) Finding Specific Type of WAIT EVENT.

Some enqueues and latches are now broken down into specific types of events.
In particular, you can check the EVENT column to see this breakdown
in v$session: instead of a generic 'enqueue' wait,
you will see several 'enq:%' types of waits.

--- waitevent.sql
col hevent format a40
col wevent format a40

select s.sid blocker,
s.event hevent,
w.event wevent,
w.sid blocked
from v$session s, v$session w
where w.blocking_session = s.sid
and w.blocking_session_status='VALID';

   BLOCKER HEVENT                       WEVENT                        BLOCKED
---------- ---------------------------- ---------------------------- ---------
14 SQL*Net message from client enq: TX - row lock contention 9

(3) Finding Blocked OBJECT#, FILE#, BLOCK# and ROW#

--- blockedobj.sql
select w.sid, w.event, w.row_wait_obj# obj#, w.row_wait_file# file#,
       w.row_wait_block# block#, w.row_wait_row# row#
from v$session s, v$session w
where w.blocking_session = s.sid;

 SID EVENT                          OBJ#       FILE#  BLOCK#    ROW#
---- ------------------------------ ---------- ------ --------- ----
   9 enq: TX - row lock contention  41456      1      46818     0

(4) Finding outstanding wait classes of sessions.

In Oracle 10g, wait events are classified into 14 categories of wait classes.
V$SESSION.WAIT_CLASS is used to gain quick insight into contentious areas of the
database.
--- waitclass.sql
col CLASS format a40
select wait_class# CLASS#, wait_class CLASS, count(event#)
from v$event_name
group by wait_class#, wait_class;

 CLASS# CLASS                                  COUNT(EVENT#)
------- ------------------------------------- --------------
0 Other 400
1 Application 8
2 Configuration 18
3 Administrative 40
4 Concurrency 12
5 Commit 1
6 Idle 49
7 Network 18
8 Archival Process Scheduling 19
9 Managed Recovery Processing Scheduling 10
10 User I/O 13
11 System I/O 17
12 Scheduler 6
13 Cluster 82

--- mostwaitclass.sql
select wait_class, count(username)
from v$session
group BY wait_class;

WAIT_CLASS                       COUNT(USERNAME)
-------------------------------- ---------------
Application 3
Idle 1
Other 1

Understand each field of AWR

posted Mar 9, 2011, 11:20 AM by Sachchida Ojha

An AWR report is broken into multiple parts.

1)Instance information:-
This section provides the instance name and number, the snapshot ids, the total time the report covers, and the database time during this elapsed time.

Elapsed time = end snapshot time - start snapshot time
Database time = work done by the database during this elapsed time (both CPU and I/O add to database time). If this is much smaller than the elapsed time, the database is largely idle. Database time does not include time spent by the background processes.
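The same figure can be read from the time model, as a quick sketch:

```sql
-- Cumulative DB time and DB CPU (microseconds) since instance startup;
-- an AWR report shows the delta of these between its two snapshots.
select stat_name, round(value / 1e6) seconds
from   v$sys_time_model
where  stat_name in ('DB time', 'DB CPU');
```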

2)Cache Sizes : This shows the size of each SGA region after AMM has changed them. This information
can be compared to the original init.ora parameters at the end of the AWR report.

3)Load Profile: This important section shows key rates expressed in units of per second and
transactions per second. This is very important for understanding how the instance is behaving. It has to be compared to a baseline report to understand the expected load on the machine and the delta during bad times.

4)Instance Efficiency Percentages (Target 100%): This section shows how close to 100% the vital ratios are, such as buffer cache hit, library cache hit, and parse ratios. These can be taken as indicators, but should not be a cause of worry if they are low: the ratios can be low or high based on database activity rather than a real performance problem. Hence these are not stand-alone statistics and should be read only for a high-level view.

5)Shared Pool Statistics: This summarizes changes to the shared pool during the snapshot period.

6)Top 5 Timed Events: This is the section most relevant for analysis. It shows what percentage of database time each wait event accounted for. Up to 9i, this was the way to back-calculate the total database time for the report, as there was no database time column in 9i.

7)RAC Statistics: This part is seen only in the case of a cluster instance. It provides important indications on the average time taken for block transfers, block receives, and messages, which can point to performance problems in the cluster instead of the database.

8)Wait Class: This depicts which wait class was the area of contention and where we need to focus: network, concurrency, cluster, I/O, application, configuration, etc.

9)Wait Events Statistics Section: This section shows a breakdown of the main wait events in the
database including foreground and background database wait events as well as time model, operating
system, service, and wait classes statistics.

10)Wait Events: This AWR report section provides more detailed wait event information for foreground
user processes which includes Top 5 wait events and many other wait events that occurred during
the snapshot interval.

11)Background Wait Events: This section is relevant to the background process wait events.

12)Time Model Statistics: Time mode statistics report how database-processing time is spent. This
section contains detailed timing information on particular components participating in database
processing. It also gives timing information for the background processes, which is not included in database time.

13)Operating System Statistics: This section is important from the OS server contention point of view. It shows the main external resources including I/O, CPU, memory, and network usage.

14)Service Statistics: The service statistics section gives information about services and their load in terms of CPU seconds, I/O seconds, number of buffer reads, etc.

15)SQL Section: This section displays top SQL, ordered by important SQL execution metrics.

a)SQL Ordered by Elapsed Time: Includes SQL statements that took significant execution
time during processing.

b)SQL Ordered by CPU Time: Includes SQL statements that consumed significant CPU time
during its processing.

c)SQL Ordered by Gets: These SQLs performed a high number of logical reads while
retrieving data.

d)SQL Ordered by Reads: These SQLs performed a high number of physical disk reads while
retrieving data.

e)SQL Ordered by Parse Calls: These SQLs experienced a high number of reparsing operations.

f)SQL Ordered by Sharable Memory: Includes SQL statements cursors which consumed a large
amount of SGA shared pool memory.

g)SQL Ordered by Version Count: These SQLs have a large number of versions in shared pool
for some reason.

16)Instance Activity Stats: This section contains statistical information describing how the database
operated during the snapshot period.

17)I/O Section: This section shows the all-important I/O activity. It provides the time taken per I/O (Av Rd(ms)) and the I/O rate (Av Rd/s). This should be compared to the baseline to see whether the rate of I/O has always been like this or whether there is a divergence now.

18)Advisory Section: This section show details of the advisories for the buffer, shared pool, PGA and
Java pool.

19)Buffer Wait Statistics: This important section shows buffer cache waits statistics.

20)Enqueue Activity: This important section shows how enqueue operates in the database. Enqueues are
special internal structures which provide concurrent access to various database resources.

21)Undo Segment Summary: This section gives a summary about how undo segments are used by the database.
Undo Segment Stats: This section shows detailed history information about undo segment activity.

22)Latch Activity: This section shows details about latch statistics. Latches are a lightweight
serialization mechanism that is used to single-thread access to internal Oracle structures. A latch should be judged by its sleeps: the sleepiest latch is the one under contention, not the latch with the most requests. Hence, run through the sleep breakdown part of this section to arrive at the latch under highest contention.
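The same ranking can be pulled directly from V$LATCH, as a quick sketch:

```sql
-- Order latches by sleeps, not by gets: the sleepiest latch
-- is the one under real contention.
select name, gets, misses, sleeps
from   v$latch
where  sleeps > 0
order  by sleeps desc;
```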

23)Segment Section: This portion is important for guessing in which segment, and which segment type, the contention could be. Tally this with the top 5 wait events.

Segments by Logical Reads: Includes top segments which experienced high number of
logical reads.

Segments by Physical Reads: Includes top segments which experienced high number of disk
physical reads.

Segments by Buffer Busy Waits: These segments have the largest number of buffer waits
caused by their data blocks.

Segments by Row Lock Waits: Includes segments that had a large number of row locks on
their data.

Segments by ITL Waits: Includes segments that had a large contention for Interested
Transaction List (ITL). The contention for ITL can be reduced by increasing INITRANS storage
parameter of the table.
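For example, INITRANS can be raised on an affected segment (the object names and the value 8 below are placeholders; existing blocks only pick up the change if the segment is rebuilt or moved):

```sql
alter table sales_detail initrans 8;            -- placeholder table name
alter index sales_detail_pk rebuild initrans 8; -- placeholder index name
```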

24)Dictionary Cache Stats: This section exposes details about how the data dictionary cache is performing.

25)Library Cache Activity: Includes library cache statistics, which are needed in case you see library cache waits in the top 5 wait events. You might want to see whether reloads and invalidations are causing the contention or there is some other issue with the library cache.

26)SGA Memory Summary: This tells us the difference in the respective pools at the start and end of the report. It can indicate the need to set a minimum value for each pool when sga_target is being used.

27)init.ora Parameters: This section shows the original init.ora parameters for the instance during
the snapshot period.

There would be more Sections in case of RAC setups to provide details.

