DBA's Forum, Routine Tasks and Activities

A simple RMAN backup script for a Windows environment

posted Jun 5, 2012, 3:28 PM by Sachchida Ojha   [ updated Jun 5, 2012, 3:35 PM ]

C:\Users\snojha>rman target=sys/<password>@orcl

Recovery Manager: Release - Production on Tue Jun 5 18:13:01 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1312789174)

RMAN> connect catalog rman/<password>@rcat

connected to recovery catalog database

RMAN> configure controlfile autobackup format for device type disk to 'c:/rmanbackup/%F';
RMAN> configure controlfile autobackup on;

RMAN> run{
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup tag FULL_DB format 'c:/rmanbackup/db_%t_%s.bk' (database);
release channel d1;
release channel d2;
release channel d3;
release channel d4;
}

Screen output
allocated channel: d1
channel d1: SID=37 device type=DISK

allocated channel: d2
channel d2: SID=193 device type=DISK

allocated channel: d3
channel d3: SID=226 device type=DISK

allocated channel: d4
channel d4: SID=34 device type=DISK

Starting backup at 05-JUN-12
channel d1: starting full datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00006 name=C:\APP\ORACLE\ORADATA\ORCL\APP_DATA01
input datafile file number=00005 name=C:\APP\ORACLE\ORADATA\ORCL\EXAMPLE01.DBF
input datafile file number=00004 name=C:\APP\ORACLE\ORADATA\ORCL\USERS01.DBF
channel d1: starting piece 1 at 05-JUN-12
channel d2: starting full datafile backup set
channel d2: specifying datafile(s) in backup set
input datafile file number=00003 name=C:\APP\ORACLE\ORADATA\ORCL\UNDOTBS01.DBF
input datafile file number=00014 name=C:\APP\ORACLE\ORADATA\ORCL\PART_IDX03
input datafile file number=00001 name=C:\APP\ORACLE\ORADATA\ORCL\SYSTEM01.DBF
input datafile file number=00002 name=C:\APP\ORACLE\ORADATA\ORCL\SYSAUX01.DBF
channel d2: starting piece 1 at 05-JUN-12
channel d3: starting full datafile backup set
channel d3: specifying datafile(s) in backup set
input datafile file number=00007 name=C:\APP\ORACLE\ORADATA\ORCL\APP_IDX01
input datafile file number=00010 name=C:\APP\ORACLE\ORADATA\ORCL\PART_DATA03
input datafile file number=00012 name=C:\APP\ORACLE\ORADATA\ORCL\PART_IDX01
input datafile file number=00015 name=C:\APP\ORACLE\ORADATA\ORCL\PART_IDX04
channel d3: starting piece 1 at 05-JUN-12
channel d4: starting full datafile backup set
channel d4: specifying datafile(s) in backup set
input datafile file number=00008 name=C:\APP\ORACLE\ORADATA\ORCL\PART_DATA01
input datafile file number=00009 name=C:\APP\ORACLE\ORADATA\ORCL\PART_DATA02
input datafile file number=00011 name=C:\APP\ORACLE\ORADATA\ORCL\PART_DATA04
input datafile file number=00013 name=C:\APP\ORACLE\ORADATA\ORCL\PART_IDX02
channel d4: starting piece 1 at 05-JUN-12
channel d3: finished piece 1 at 05-JUN-12
piece handle=C:\RMANBACKUP\DB_785183133_23.BK tag=FULL_DB comment=NONE
channel d3: backup set complete, elapsed time: 00:00:20
channel d4: finished piece 1 at 05-JUN-12
piece handle=C:\RMANBACKUP\DB_785183143_24.BK tag=FULL_DB comment=NONE
channel d4: backup set complete, elapsed time: 00:00:25
channel d2: finished piece 1 at 05-JUN-12
piece handle=C:\RMANBACKUP\DB_785183130_22.BK tag=FULL_DB comment=NONE
channel d2: backup set complete, elapsed time: 00:01:32
channel d1: finished piece 1 at 05-JUN-12
piece handle=C:\RMANBACKUP\DB_785183129_21.BK tag=FULL_DB comment=NONE
channel d1: backup set complete, elapsed time: 00:06:54
Finished backup at 05-JUN-12

Starting Control File and SPFILE Autobackup at 05-JUN-12
piece handle=C:\RMANBACKUP\C-1312789174-20120605-01 comment=NONE
Finished Control File and SPFILE Autobackup at 05-JUN-12

released channel: d1
released channel: d2
released channel: d3
released channel: d4
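Since the post targets Windows, the interactive session above can also be run unattended (for example from Task Scheduler) by saving the RMAN commands to a command file. A minimal sketch, where the file names and passwords are placeholder assumptions:

```bat
@echo off
rem full_backup.cmd -- run the full database backup non-interactively.
rem The commands from the run{} block above are assumed to be saved in
rem C:\rmanbackup\full_backup.rman (the file names here are assumptions).
rman target=sys/<password>@orcl catalog=rman/<password>@rcat ^
  cmdfile=C:\rmanbackup\full_backup.rman ^
  log=C:\rmanbackup\full_backup.log
```

The log= option keeps a record of each run, which is worth checking after every scheduled backup.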


Export and Import schema statistics

posted Jun 4, 2012, 7:14 PM by Sachchida Ojha

An easy way to export and import schema statistics, useful when the optimizer starts behaving strangely and you want to re-import the old statistics from a time when it behaved properly.


  1. Create the table to store the statistics.
  2. Export the schema stats; they will be stored in the temp_stat table.
  3. If required, import these statistics back into the TEST schema.
  4. Finally, drop the table created to back up the schema stats.
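The four steps above can be sketched with the DBMS_STATS package; the temp_stat table name and the TEST schema come from the text, while the USERS tablespace is an assumption:

```sql
-- 1. Create the table that will hold the statistics
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'TEST', stattab => 'TEMP_STAT', tblspace => 'USERS');

-- 2. Export the current schema statistics into TEMP_STAT
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'TEST', stattab => 'TEMP_STAT');

-- 3. If required, import the saved statistics back into the TEST schema
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'TEST', stattab => 'TEMP_STAT');

-- 4. Finally, drop the backup table
EXEC DBMS_STATS.DROP_STAT_TABLE(ownname => 'TEST', stattab => 'TEMP_STAT');
```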

How to find partition keys for a partitioned table

posted Jun 3, 2012, 8:11 AM by Sachchida Ojha


NAME                   OBJEC
------------------------------ -----
F_SALES                TABLE
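The output above appears to come from the DBA_PART_KEY_COLUMNS view (the OBJEC heading is the truncated OBJECT_TYPE column); a query along these lines, with the F_SALES name taken from the output, would produce it:

```sql
-- List the partition key column(s) for a given partitioned table
SELECT name, object_type, column_name, column_position
FROM   dba_part_key_columns
WHERE  name = 'F_SALES';
```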

Unindexed Foreign Keys

posted Jun 2, 2012, 7:58 PM by Sachchida Ojha   [ updated Jun 2, 2012, 7:59 PM ]

SELECT
  a.owner                                 owner
 ,a.constraint_name                       cons_name
 ,a.table_name                            tab_name
 ,b.column_name                           cons_column
 ,NVL(c.column_name,'***Check index****') ind_column
FROM dba_constraints  a
    ,dba_cons_columns b
    ,dba_ind_columns  c
WHERE constraint_type = 'R'
AND a.owner           = SYS_CONTEXT ('USERENV', 'SESSION_USER')
AND a.owner           = b.owner
AND a.constraint_name = b.constraint_name
AND b.column_name     = c.column_name(+)
AND b.table_name      = c.table_name(+)
AND b.position        = c.column_position(+)
ORDER BY tab_name, ind_column;

SELECT
  CASE WHEN ind.index_name IS NOT NULL THEN
    CASE WHEN ind.index_type IN ('BITMAP') THEN
      '** Bitmp idx **'
    END
  ELSE
    '** Check idx **'
  END checker
 ,cons.owner, cons.table_name, ind.index_name, cons.constraint_name, cons.cols
FROM (SELECT
        c.owner, c.table_name, c.constraint_name
       ,LISTAGG(cc.column_name, ',' ) WITHIN GROUP (ORDER BY cc.column_name) cols
      FROM dba_constraints  c
          ,dba_cons_columns cc
      WHERE c.owner           = cc.owner
      AND   c.constraint_name = cc.constraint_name
      AND   c.constraint_type = 'R'
      GROUP BY c.owner, c.table_name, c.constraint_name) cons
LEFT OUTER JOIN
 (SELECT
   table_owner, table_name, index_name, index_type, cbr
  ,LISTAGG(column_name, ',' ) WITHIN GROUP (ORDER BY column_name) cols
  FROM (SELECT
         ic.table_owner, ic.table_name, ic.index_name
        ,ic.column_name, ic.column_position, i.index_type
        ,CONNECT_BY_ROOT(ic.column_name) cbr
        FROM dba_ind_columns ic
            ,dba_indexes     i
        WHERE ic.table_owner = SYS_CONTEXT ('USERENV', 'SESSION_USER')
        AND   ic.table_owner = i.table_owner
        AND   ic.table_name  = i.table_name
        AND   ic.index_name  = i.index_name
        CONNECT BY PRIOR ic.column_position-1 = ic.column_position
        AND PRIOR ic.index_name = ic.index_name)
  GROUP BY table_owner, table_name, index_name, index_type, cbr) ind
ON  cons.cols       = ind.cols
AND cons.table_name = ind.table_name
AND cons.owner      = ind.table_owner
ORDER BY checker, cons.owner, cons.table_name;

Calculating the size of an index before creating it

posted Jun 2, 2012, 6:17 PM by Sachchida Ojha

SQL> variable used_bytes number;
SQL> variable alloc_bytes number;
SQL> exec dbms_space.create_index_cost('create index cust_idx2 on cust(first_name)',:used_bytes,:alloc_bytes);

PL/SQL procedure successfully completed.

SQL> print :used_bytes


SQL> print :alloc_bytes;


SQL> create index cust_idx2 on cust(first_name);

Index created.

SQL> select bytes from user_segments where segment_name='CUST_IDX2';



Data Guard Broker

posted May 20, 2012, 6:27 AM by Sachchida Ojha

The Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. You can use either the Oracle Enterprise Manager graphical user interface (GUI) or command-line interface (CLI) to automate and simplify:

  1. Creating and enabling Data Guard configurations, including setting up log transport services and log apply services.
  2. Managing an entire Data Guard configuration from any system in the configuration.
  3. Managing and monitoring Data Guard configurations that contain Real Application Clusters primary or standby databases.


On both the primary and standby sites, change the initialization parameter in the spfile to enable the Data Guard broker.

SQL> Alter system set dg_broker_start=True scope=both;
System Altered.

On the PRIMARY site, open a command prompt (cmd) and start the command-line interface (CLI) of the Data Guard broker (DGMGRL).

C:\> dgmgrl
DGMGRL for 32-bit Windows: Version - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.


Now connect to the database through the service you made previously.

DGMGRL> connect sys/oracle@to_primary

Create broker configuration.

DGMGRL> create configuration 'broker1' as
> primary database is 'primary'
> connect identifier is to_primary;

('to_primary' in the connect identifier is the service name through which the broker connects to the PRIMARY database)

Add Standby Database to the above configuration.

DGMGRL> add database 'standby' as
> connect identifier is to_standby
> maintained as physical;

('to_standby' in the connect identifier is the service name through which the broker connects to the STANDBY database)

Now the configuration has been set up but it is still disabled. You can view the configuration by executing:

DGMGRL> show configuration
 Name:                    broker1
 Enabled:                 NO
 Protection Mode:         MaxPerformance
 Fast-Start Failover:     DISABLED
   primary - Primary database
   standby - Physical standby database

Current status for "broker1":

The next step is to ENABLE the configuration 'broker1'.

DGMGRL> enable configuration;

Again view the configuration.

DGMGRL> show configuration

 Name:                    broker1
 Enabled:                 YES
 Protection Mode:         MaxPerformance
 Fast-Start Failover:     DISABLED
   primary - Primary database
   standby - Physical standby database

Current status for "broker1":

Switchover:

Now we are ready to switch the PRIMARY database role over to the STANDBY database.

DGMGRL> switchover to 'standby';

The primary database is successfully converted to a physical standby. You can switch the standby database back to the primary role by executing the following command.

DGMGRL> switchover to 'primary';

The standby database is successfully converted to the primary database.


Failover can be done through the same configuration without any alteration. You simply need to execute the following command:

DGMGRL> failover to 'standby';
Failover to standby succeeded.

And also…

DGMGRL> failover to 'primary';
Failover to primary succeeded.

Ways to manage a Data Guard environment

posted May 20, 2012, 6:25 AM by Sachchida Ojha

Oracle provides three ways to manage a Data Guard environment:

1. SQL*Plus and SQL Statements

    You can use SQL*Plus and SQL statements to manage the Data Guard environment. The following SQL statement initiates a switchover operation:

    SQL> alter database commit to switchover to physical standby;

2. Data Guard Broker GUI Interface (Data Guard Manager)

    Data Guard Manager is the GUI version of the Data Guard broker interface that allows you to automate many of the tasks involved in configuring and monitoring a Data Guard environment.

3. Data Guard Broker Command-Line Interface (CLI)

    It is an alternative interface to the Data Guard Manager. It is useful if you want to use the broker from batch programs or scripts. You can perform most of the activities required to manage and monitor the Data Guard environment using the CLI.

Data Guard Services

posted May 20, 2012, 6:23 AM by Sachchida Ojha

Log apply services automatically apply redo to standby databases to maintain synchronization with the primary database and allow transactionally consistent access to the data. By default, log apply services wait for the full archived redo log file to arrive on the standby database before applying it to the standby database.

Log Apply Services Configuration Options

  1. Using Real-Time Apply to Apply Redo Data Immediately : If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
  2. Specifying a Time Delay for the Application of Archived Redo Log Files: In some cases, you may want to create a time lag between the time when redo data is received from the primary site and when it is applied to the standby database. You can specify a time interval (in minutes) to protect against the application of corrupted or erroneous data to the standby database. When you set a DELAY interval, it does not delay the transport of the redo data to the standby database. Instead, the time lag you specify begins when the redo data is completely archived at the standby destination.
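As a sketch of the two options, the statements below enable real-time apply on the standby and a delayed apply via the primary's archive destination; the to_standby service name is taken from the broker post above, and the 60-minute delay is an arbitrary example value:

```sql
-- Real-time apply: start managed recovery applying redo from the
-- standby redo logs as it arrives (run on the standby)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

-- Delayed apply: wait 60 minutes after redo is archived at the standby
-- before it is applied (set on the primary's archive destination)
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=to_standby DELAY=60' SCOPE=BOTH;
```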

SQL Apply

Oracle Base and Diagnostic Destination Configuration

posted May 20, 2012, 6:20 AM by Sachchida Ojha

The directory that you specify when Oracle Universal Installer prompts for ORACLE_BASE is stored in the Oracle home inventory. Database Upgrade Assistant uses this value to derive the default database locations and the DIAGNOSTIC_DEST parameter. The diagnostic destination contains all ADR directories (diagnostic files such as the alert logs, trace files, and so on). This directory is required when upgrading an earlier Oracle Database release to Oracle Database 11g. If the Oracle base directory already exists, Database Upgrade Assistant automatically retrieves this information and populates its path. Starting with Oracle Database 11g Release 1, the initialization parameters for the background dump (BACKGROUND_DUMP_DEST), user dump (USER_DUMP_DEST), and core dump (CORE_DUMP_DEST) destinations are replaced by the diagnostic destination (DIAGNOSTIC_DEST).
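The locations ADR actually resolved to can be checked from the database itself, for example:

```sql
-- Where ADR placed the diagnostic files (11g and later)
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('ADR Base', 'ADR Home', 'Diag Trace', 'Diag Alert');
```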

Use the following script to generate SQL to kill off multiple sessions all waiting for the same library cache pin

posted May 20, 2012, 6:17 AM by Sachchida Ojha

  SELECT 'alter system kill session ''' || s.sid || ',' || s.serial# || ''';'
  FROM x$kglpn p, v$session s
  WHERE p.kglpnuse = s.saddr
  AND p.kglpnhdl = 'value of p1raw';
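The 'value of p1raw' placeholder is the handle of the library cache pin being waited on; assuming the sessions are currently waiting on it, it can be read from V$SESSION_WAIT:

```sql
-- Find the pin handle (P1RAW) that the blocked sessions are waiting on
SELECT sid, p1raw
FROM   v$session_wait
WHERE  event = 'library cache pin';
```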
