ASM

How to resolve ORA-29701: Unable to Connect to Cluster Manager -- Host name changed for the server and server is rebooted.

posted May 23, 2011, 6:55 AM by Sachchida Ojha

The SA changed the host name in addition to the IP address, so recovery was a little different. All the steps below are run from the ASM home.

1. First, I had to run the following as root: $GRID_HOME/crs/install/roothas.pl -deconfig -force -verbose

2. Then, again as root, run $GRID_HOME/root.sh. You may run into a message saying that RAC was turned off; if so, you'll need to run make -f ins_rdbms.mk rac_off as root in $GRID_HOME/rdbms/lib. (If you get "make: 1254-004 The error code from the last command is 1.", follow Note 1290438.1 from Metalink.)

3. After this, make sure you update the listener configuration in $GRID_HOME/network/admin/listener.ora with the correct host name (see the sketch after this list).

4. Start the listener.

5. Do steps 2-9 from the post below. The disk groups will auto-mount, but it will take a while.

6. There is no need to do steps 10-15.
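For step 3, the change is just pointing the listener at the new host name. A minimal sketch of the relevant $GRID_HOME/network/admin/listener.ora entry (the host name below is a placeholder; use your server's new name):

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = new-hostname)(PORT = 1521))
    )
  )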

How to resolve ORA-29701: Unable to Connect to Cluster Manager -- IP address change for the server and server is rebooted.

posted May 18, 2011, 12:20 PM by Sachchida Ojha   [ updated May 23, 2011, 6:55 AM ]


I ran into ORA-29701 when I tried to start the ASM instance on va-ieng03 after Sys Admin rebooted the server. The only change that occurred was that the IP address had changed. Below you will find the steps that I followed to resolve this issue:

1. Set ORACLE_HOME to ASM
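For example (the Grid home path below is an assumption; substitute your own):

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid
export ORACLE_SID=+ASM
export PATH=$ORACLE_HOME/bin:$PATH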

2. crsctl status resource -t
-> check if services are offline

crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd 1 OFFLINE OFFLINE va-ieng03
ora.diskmon 1 OFFLINE OFFLINE va-ieng03

3. crsctl status resource -p

-> check the AUTO_START settings

4. If the resources are offline, review the AUTO_START settings reported by the previous command; they should be set to 1.

5. Modify settings if they are not set to 1
crsctl modify resource "ora.cssd" -attr "AUTO_START=1"
crsctl modify resource "ora.diskmon" -attr "AUTO_START=1"

6. Start crs
crs_start -all

7. Verify that the statuses have changed:

crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd 1 ONLINE ONLINE va-ieng03
ora.diskmon 1 ONLINE ONLINE va-ieng03




8. Add ASM service

srvctl add asm


9. Start ASM
srvctl start asm

10. Connect to ASM (with sys as sysasm) and mount at least one of the disk groups

11. Create temporary init.ora for ASM

"init+ASM.ora" [New file]

asm_diskgroups='RAT1_DATA','TDB1_DATA','PROD_FRA'
instance_type='asm'
large_pool_size=12M
remote_login_passwordfile='EXCLUSIVE'

12. Bounce ASM instance

13. Create spfile for ASM

14. Bounce ASM instance

15. Start DB instances
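As a rough sketch, steps 10-14 look like this (the diskgroup name is taken from the sample init.ora above; adjust for your environment):

export ORACLE_SID=+ASM
sqlplus / as sysasm

SQL> alter diskgroup RAT1_DATA mount;    -- step 10
SQL> shutdown immediate;                 -- step 12, after creating init+ASM.ora
SQL> startup;
SQL> create spfile from pfile;           -- step 13
SQL> shutdown immediate;                 -- step 14
SQL> startup;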

 

How to connect to ASM instance from a remote client

posted Mar 8, 2011, 2:14 PM by Sachchida Ojha

The solution is to
   * Edit the listener.ora on ASM server
   * Edit the tnsnames.ora on the client
   * Setup your remote login password for your ASM instance on the ASM server
   * Set your SYS password using ORAPWD for the ASM instance
   * Use the properly formatted connect string to connect with your tool

1) Edit the listener.ora on ASM server

   a) Logon to the ASM/Database server

   b) Locate your listener.ora (typically located in the $ORACLE_HOME/network/admin)

   c) Add a SID_LIST_LISTENER entry for your ASM instance (see example below)

EXAMPLE

SID_LIST_LISTENER =
    (SID_LIST =
      (SID_DESC =
         (SID_NAME = +ASM)
         (ORACLE_HOME = c:\oracle\app\product\11.1.0\db_1)
      )
   )

   d) Stop the listener

lsnrctl stop

   e) restart the listener

lsnrctl start
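At this point you can verify that the listener now knows about the ASM instance:

lsnrctl status

The services summary should include the +ASM SID added above.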


2) Edit the tnsnames.ora on the client

   a) Logon to the client machine that will be used to connect to the ASM instance
          NOTE: the client machine can also be the ASM/Database server (ie for DBCONTROL)

   b) Locate your tnsnames.ora (typically in the $ORACLE_HOME/network/admin)

   c) Add an entry (tnsalias) for your ASM instance (see example)

EXAMPLE

 

ASM =
   (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = kbcook-1)(PORT = 1521))
      (CONNECT_DATA =
         (SERVER = DEDICATED)
         (SID_NAME = +ASM)
         (UR=A)
      )
   )

3) Setup your remote login password for your ASM instance on the ASM server

   a) Logon to the ASM/Database server

   b) Locate the parameter file for your ASM instance (typically $ORACLE_HOME/dbs/init+ASM.ora on UNIX or %ORACLE_HOME%\database\init+ASM.ora on Windows)

   c) Edit the parameter file and add

         remote_login_passwordfile = exclusive ... for standalone ASM setups
         remote_login_passwordfile = shared ... for ASM setups that also use Real Application Clusters (RAC)
 
   d) Save the file

   NOTE: It may be necessary to create a PFILE from the SPFILE in order to edit the file; once the line has been added, the process can be reversed.

                 For more details, see Note 249664.1 - Pfile vs SPfile.

4) Set your SYS password using ORAPWD for the ASM instance


  
   a) Logon to the ASM/Database server

   b) Locate your orapw<sid> file for your ASM instance (typically $ORACLE_HOME/dbs/orapw+ASM)

   c) Rename the file to orapw<sid>.old
 
   d) Run orapwd to reset the password (see example below)

EXAMPLE
 
mv "orapw+ASM" "orapw+ASM.old"
orapwd file=orapw+ASM password=kbcook

5) Use the properly formatted connect string for your tool

Connect to the ASM instance using the password (#4 above) and the tnsalias (#2 above)

EXAMPLES

C:\oracle\app\product\11.1.0\db_1\BIN\SQLPLUS.EXE "sys/kbcook@asm as sysdba"

<OR>

sqlplus "sys/kbcook@asm as sysdba"

<OR>

sqlplus "sys@asm as sysdba" ... then supply the password when prompted

How to resize a physical disk or LUN and an ASM DISKGROUP

posted Mar 8, 2011, 11:47 AM by Sachchida Ojha

Master Note for Automatic Storage Management (ASM) [ID 1187723.1]
How to resize a physical disk or LUN and an ASM DISKGROUP [ID 311619.1]

Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.7 - Release: 10.1 to 11.1
Generic UNIX
Generic Linux
Generic Windows
***Checked for relevance on 21-Jan-2011***

Purpose

The purpose of this document is to provide the steps required to resize a physical disk
or LUN at the operating system level and subsequently resize the diskgroup using the new space.

Scope and Application

This document is created for any person involved with ASM administration. 


For this example the storage used is SCSI storage on Red Hat Linux 3.0, using RAW DEVICES to
simulate the LUN.

Resizing a physical disk or LUN and the ASM DISKGROUP

Today most of the storage technology used is SAN or NAS. The disks in the storage are configured
as a unit (LUN) and presented to the host. When the LUN runs out of space, new
disks can be added to it.

On all platforms a LUN is identified on the host as a physical device (disk). The whole LUN can
be used, or it can be partitioned, with each partition seen as an individual disk.
Some platforms require an LVM in order to partition a LUN/disk. If an LVM is not used,
then the whole disk must be used.


When the space in the LUN is exhausted, adding new disks to the current LUN is a simple task
on the storage side. The operating system is then required to recognize the new space.

On Linux, working with a physical disk requires at least one partition. The fdisk command is used
to display and manipulate the partitions.

fdisk -l /dev/sdg returns:

[root@arlnx2 root]# fdisk -l /dev/sdg

Disk /dev/sdg: 9105 MB, 9105018880 bytes
64 heads, 32 sectors/track, 8683 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System
/dev/sdg1 1 1908 1953776 83 Linux


For this particular example we are using a 9 GB disk, of which only 2 GB has been used.

Also a diskgroup has been created using this disk:



SQL> select name,path,total_mb,free_mb from v$asm_disk;

NAME       PATH              TOTAL_MB    FREE_MB
---------- --------------- ---------- ----------
DG1_0000   /dev/raw/raw4         1500         90

The previous output shows that most of the space has been allocated. Assuming that there are no
spare ASM disks available, the LUN will have to be expanded at the OS level, and finally the
diskgroup resized at the ASM level.

The first step is adding space to the LUN. In this exercise we are using DAS (Direct Attached
Storage) with a SCSI interface, but on SAN or NAS the steps are similar.

As we saw in the output of the fdisk command, the disk has a capacity of 9 GB but only 2 GB has
been used for the partition. In order to use more space, the partition has to be recreated. This
operation is at the partition table level, which is stored in the first sectors of the disk;
changing the partition table does not affect the data. In other words, the partition is extended.



The general steps are:

1. Resize the LUN from the storage side.
2. Pick up the new size from the OS.
3. In rolling fashion, shut down and restart the ASM instances.
4. Only when all ASM instances have been recycled, resize the disk from one ASM instance.


Now, some details/examples to resize the LUN in Linux:

A. Delete the current partition.
B. Recreate the partition with the new size.

The device /dev/raw/raw4 is attached to the physical disk /dev/sdg. The fdisk command is used to modify the partition table: use d to delete the partition, n to create the partition with the new size, and w to write the partition table (see the sketch below).
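A sketch of such an fdisk session, following the sizes from the earlier example (keep the new partition starting at the same first cylinder as the old one):

[root@arlnx2 root]# fdisk /dev/sdg
Command (m for help): d          <-- delete partition 1
Command (m for help): n          <-- new primary partition 1
First cylinder: 1
Last cylinder: 8000              <-- larger than the original 1908
Command (m for help): w          <-- write the partition table and exit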


At this point the partition has been configured, but the operating system must still recognize the new size.


When the LUN is ready at the OS level and the ASM instance has been recycled, the next step is resizing the diskgroup at the ASM level:

SQL> alter diskgroup DG1 resize disk dg1_0000 size 8000m;
 Note: this will not trigger a rebalance.
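Afterwards, the new size can be confirmed with the same query used earlier; TOTAL_MB for dg1_0000 should now report the new value:

SQL> select name,path,total_mb,free_mb from v$asm_disk;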

Disks configured multipath on IBM ESS San are not discovered by ASM

posted Mar 8, 2011, 11:47 AM by Sachchida Ojha

The information in this document applies to:
Oracle 10gR1 using ASM with IBM ESS SAN disks configured for multipath, running on Linux 3.0


Symptoms

* Query from v$asm_disk does not return rows; in other words, the disks are not discovered by ASM:
select path, header_status, state from v$asm_disk;
* DBCA fails as disks are not recognized.
* SDD 1.5.1.1-10 installed.
SDD (Subsystem Device Driver) is the pseudo device driver designed to support the multipath configuration environment
in IBM products.

a. cat /proc/modules to verify that the SDD sdd-mod driver is loaded. If it is successfully loaded, output similar
to the following is displayed:

sdd-mod 233360 0 (unused) *************** SDD MODULE **************
qla2300 192000 0 (autoclean)
nls_iso8859-1 2880 1 (autoclean)
cs4232 3760 1 (autoclean)
ad1848 16752 0 (autoclean) [cs4232]
uart401 6352 0 (autoclean) [cs4232]
sound 56192 1 (autoclean) [cs4232 ad1848 uart401]
soundcore 4048 4 (autoclean) [sound]
nfsd 67664 4 (autoclean)
usb-uhci 20928 0 (unused)
usbcore 48320 1 [usb-uhci]
ipv6 131872 -1 (autoclean)
olympic 15856 1 (autoclean)
ipchains 34112 0 (unused)
lvm-mod 40880 0 (autoclean)

b. Check cat /proc/scsi/scsi to verify that all expected devices (LUNs)
were detected and configured.

c. Verify the SDD server was started:

ps wax | grep sddsrv

If no processes are listed, the SDD server was not started.

* The vpath devices are configured with the command cfgvpath. Check /etc/vpath.conf.
* To review the existence of vpath devices, use the commands cfgvpath or lsvpcfg.

* ASMLIB allows you to mark the disks:
/etc/init.d/oracleasm createdisk VOL2 /dev/vpatha1

dd if=/dev/vpatha1 bs=4096 count=1 |od -c

0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 V O L 2
0000020 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000040 O R C L C L R D \0 \0 \0 \0 \0 \0 \0 \0
0000060 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0000760 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 U 252
0001000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0010000

A non-ASM disk won't display the previous information.

* The same command executed as the owner of the Oracle software failed with an INPUT/OUTPUT error.

That was the key to diagnosing the problem: the dd command failed when run as the oracle user.

Changes
No changes. New installation of this environment.


Cause
When a non-root user on Linux SDD 1.5.1.1-10 or below attempts to open a vpath, the vpath open
command may fail. The situation described above may be a concern when non-root users are using
vpath devices without filesystems.

For example, Oracle, DB2 and other database applications may use rvpath devices (raw devices).

Fix
Install a version of SDD higher than 1.5.1.1-10.

Reference
For the explanation of this problem:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S1002369

For documentation of SDD:
Publication No. SC26-7637-00
http://www-1.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1

How to identify exactly which disks on a SAN have been allocated to an ASM Diskgroup

posted Mar 8, 2011, 11:42 AM by Sachchida Ojha

How to identify exactly which disks on a SAN have been allocated to an ASM Diskgroup

To find the major, minor numbers of a device associated with a Diskgroup, e.g. PTTQA_BF
-------------------------------------------------------------------------------------------------

# /etc/init.d/oracleasm querydisk PTTQA_BF

Take the major,minor number set and correlate it to a device in the partitions table
------------------------------------------------------------------------------------------------------------
# cat /proc/partitions
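A hypothetical run, to show how the two outputs line up (the device numbers here are made up):

# /etc/init.d/oracleasm querydisk PTTQA_BF
Disk "PTTQA_BF" is a valid ASM disk on device [8, 97]

# cat /proc/partitions
major minor  #blocks  name
...
   8    97   52428800 sdg1

Major 8, minor 97 corresponds to sdg1, so the diskgroup lives on /dev/sdg1.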

Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime

posted Mar 8, 2011, 11:25 AM by Sachchida Ojha

Here are the steps required to migrate ASM diskgroups from one SAN to another SAN without downtime.

If your plan is to replace the current disks associated with your diskgroups with storage from the new SAN, this operation can be accomplished without any downtime by following these steps:

1) Back up all your databases and validate the backups (always required to protect your data).

2) Add the new path (new disks from the new storage) to your asm_disktring to be recognized by ASM:

SQL> alter system set asm_diskstring = '/dev/rhdisk*', '/dev/rhdisknew*';

Where: '/dev/rhdisk*' are the current disks.
Where: '/dev/rhdisknew*' are the new disks.


3) Confirm that the new disks are being detected by ASM:

SQL> select path from v$asm_disk;

4) Add the new disks to your desired diskgroup:

SQL> alter diskgroup <diskgroup name> add disk
'<new disk 1>',
'<new disk 2>',
'<new disk 3>',
'<new disk 4>',
.
.
.
'<new disk N>' rebalance power <#>;

5) Then wait until the rebalance operation completes:

SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;

6) Finally, remove the old disks:

SQL> alter diskgroup <diskgroup name> drop disk
<disk name A>,
<disk name B>,
<disk name D>,
<disk name E>,
.
.
.
<disk name X>  rebalance power <#>;

7) Then wait until the rebalance operation completes:

SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;


8) Done, your ASM diskgroups and database have been migrated to the new storage.

Note: Alternatively, we can execute the add disk and drop disk statements in one operation; that way only one rebalance operation is started, as follows:

SQL> alter diskgroup <diskgroup name>
add disk '<new device physical name 1>', .., '<new device physical name N>'
drop disk <old disk logical name 1>, <old disk logical name 2>, ..,<old disk logical name N>
rebalance power <#>;

This is more efficient than separate add disk and drop disk commands.

Useful queries to monitor the migration:

select path from v$asm_disk;

select * from v$asm_operation;

select * from V$ASM_CLIENT;

Sample output from our environment:

SQL> select path from v$asm_disk;

PATH
--------------------------------------------------------------------------------

/dev/rhdisk10
/dev/rhdisk11
/dev/rhdisk12
/dev/rhdisk13
/dev/rhdisk14
/dev/rhdisk15
/dev/rhdisk16
/dev/rhdisk17
/dev/rhdisk18
/dev/rhdisk19

PATH
--------------------------------------------------------------------------------
/dev/rhdisk20
/dev/rhdisk21
/dev/rhdisk22
/dev/rhdisk23
/dev/rhdisk24
/dev/rhdisk5
/dev/rhdisk6
/dev/rhdisk7
/dev/rhdisk8
/dev/rhdisk9

21 rows selected.

SQL> show parameter asm_diskstring;
SQL> select * from v$asm_operation;

no rows selected

SQL> select * from V$ASM_CLIENT;

GROUP_NUMBER INSTANCE_NAME DB_NAME  STATUS    SOFTWARE_VERSION COMPATIBLE_VERSION
------------ ------------- -------- --------- ---------------- ------------------
           1 TXQADB2       TXQADB2  CONNECTED 11.2.0.1.0       10.2.0.4.0
           1 TXQADB3       TXQADB3  CONNECTED 11.2.0.1.0       10.2.0.4.0
           2 TXQADB3       TXQADB3  CONNECTED 11.2.0.1.0       10.2.0.4.0
           1 TXQADB4       TXQADB4  CONNECTED 11.2.0.1.0       10.2.0.4.0
           2 TXQADB4       TXQADB4  CONNECTED 11.2.0.1.0       10.2.0.4.0
           1 TXQADB5       TXQADB5  CONNECTED 11.2.0.1.0       10.2.0.3.0
           2 TXQADB5       TXQADB5  CONNECTED 11.2.0.1.0       10.2.0.3.0
           1 STGRPTDB      STGRPTDB CONNECTED 11.2.0.1.0       11.0.0.0.0

8 rows selected.

SQL> show parameter ASM_DISKGROUPS;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string      DEV_DATA, QA_FLASH
SQL> show parameter ASM_DISKSTRING;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/rhdisk*


ASM_DISKSTRING

Parameter type: String
Syntax: ASM_DISKSTRING = discovery_string [, discovery_string ] ...
Default value: The null string; Automatic Storage Management discovery finds all disks in an operating system-specific location to which the Automatic Storage Management instance has read/write access.
Modifiable: ALTER SYSTEM
Real Application Clusters: Multiple instances can have different values. Different nodes might see the same disks under different names; however, each instance must be able to use its ASM_DISKSTRING to discover the same physical media as the other nodes in the cluster.

Note:

This parameter may only be specified in an Automatic Storage Management instance.

ASM_DISKSTRING specifies an operating system-dependent value used by Automatic Storage Management to limit the set of disks considered for discovery. When a new disk is added to a disk group, each Automatic Storage Management instance that has the disk group mounted must be able to discover the new disk using the value of ASM_DISKSTRING.

In most cases, the default value will be sufficient. Using a more restrictive value may reduce the time required for Automatic Storage Management to perform discovery, and thus improve disk group mount time or the time for adding a disk to a disk group. A "?" at the beginning of the string gets expanded to the Oracle home directory. Depending on the operating system, wildcard characters can be used. It may be necessary to dynamically change ASM_DISKSTRING before adding a disk so that the new disk will be discovered.

An attempt to dynamically modify ASM_DISKSTRING will be rejected and the old value retained if the new value cannot be used to discover a disk that is in a disk group that is already mounted.


Duplicate database from non-ASM to ASM (and vice versa) to a different host

posted Mar 8, 2011, 6:49 AM by Sachchida Ojha

Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.5 - Release: 10.1 to 10.2
Information in this document applies to any platform.
***Checked for relevance on 18-Dec-2010***
Goal
This note demonstrates how to duplicate a database from host A to host B while at the same time converting it from a file system to ASM, or from ASM to a file system, depending on the requirements. The example in this document uses a catalog database; however, this task can also be performed without a catalog.
Solution
Duplicate database from non-ASM to ASM (and vice versa) to a different host
Assumed database names:
Primary Database SID:       PROD
Duplicate Database SID:     AUX
RMAN Catalog SID:          RMAN

====================================================================================
Steps
1. Back up the primary database.

2. Determine how much disk space will be required.

3. Ensure you have enough space within your ASM instance.

4. Make the backup available for the duplicate process.

5. Create the init.ora and administration directories for the duplicate database.

6. Ensure SQL*NET connections to the primary database and RMAN catalog are working.

7. Prepare the RMAN duplicate script.

8. Execute the RMAN script.

=====================================================================================

1. Backup of the primary database.

Host A (Target)

Prior to creating a duplicate database you must have a backup of the target database. In this example we assume the backup media is disk. Find appropriate disk space on Host A and take a full backup including archive logs and the current controlfile (Figure 1a). If you are planning to duplicate a subset of the database, refer to Figure 1b, which illustrates the RMAN command for backing up certain tablespaces.

 [oracle@linux] export ORACLE_SID=PROD

%rman target=/ catalog=rman/rman@RMAN

RMAN> run {
allocate channel d1 type disk;
backup format '/backups/ORA102/df_t%t_s%s_p%p' database;
sql 'alter system archive log current';
backup format '/backups/ORA102/al_t%t_s%s_p%p' archivelog all;
release channel d1;
}
      Figure 1a - This command will perform a full database backup including archivelogs and the current controlfile.  If backup pieces are on ASM, then a new backup to filesystem is required.


  [oracle@linux] export ORACLE_SID=PROD

 [oracle@linux] rman target=/ catalog=rman/rman@RMAN

RMAN> run {
allocate channel d1 type disk;
backup format '/backups/ORA102/df_t%t_s%s_p%p' tablespace SYSTEM, SYSAUX, UNDO, USERS;
sql 'alter system archive log current';
backup format '/backups/ORA102/al_t%t_s%s_p%p' archivelog all;
release channel d1;
}
      Figure 1b- This command will perform a tablespace backup (SYSTEM, SYSAUX, UNDO & USERS) including archive logs
      and the current controlfile.

2. Determine how much disk space will be required.
Host A (Target)

After deciding what you will be duplicating, calculate the entire space this will require on the new host. The full database calculation (Figure 2a) computes the entire space required, whereas Figure 2b allows you to enter tablespace names in order to calculate the space required for a subset. Figure 2c provides a sample output.

SQL> select DF.TOTAL/1048576 "DataFile Size Mb",
LOG.TOTAL/1048576 "Redo Log Size Mb",
CONTROL.TOTAL/1048576 "Control File Size Mb",
(DF.TOTAL + LOG.TOTAL + CONTROL.TOTAL)/1048576 "Total Size Mb" from dual,
(select sum(a.bytes) TOTAL from dba_data_files a) DF,
(select sum(b.bytes) TOTAL from v$log b) LOG,
(select sum((cffsz+1)*cfbsz) TOTAL from x$kcccf c) CONTROL;
      Figure 2a - Calculate total space for all datafiles within database.

SQL> select DF.TOTAL/1048576 "DataFile Size Mb",
LOG.TOTAL/1048576 "Redo Log Size Mb",
CONTROL.TOTAL/1048576 "Control File Size Mb",
(DF.TOTAL + LOG.TOTAL + CONTROL.TOTAL)/1048576 "Total Size Mb" from dual,
(select sum(a.bytes) TOTAL from dba_data_files a where tablespace_name in ('SYSTEM','SYSAUX','UNDO','USERS')) DF,
(select sum(b.bytes) TOTAL from v$log b) LOG,
(select sum((cffsz+1)*cfbsz) TOTAL from x$kcccf c) CONTROL;
      Figure 2b - Calculate space for list of datafiles within primary database.



DataFile Size Mb Redo Log Size Mb Control File Size Mb Total Size Mb
---------------- ---------------- -------------------- -------------
             900              150             20.34375    1070.34375
    Figure 2c - Sample output of space calculation.

3. Ensuring you have enough space within your ASM instance.
Host B (AUX)

Prior to starting  the duplicate process you must ensure you have enough disk space within ASM to hold the duplicate database.

% export ORACLE_SID=+ASM

SQL> select NAME, STATE, TOTAL_MB, FREE_MB from v$asm_diskgroup;

NAME                                 STATE   TOTAL_MB    FREE_MB
------------------------------ ----------- ---------- ----------
DGROUP2                            MOUNTED        976          3
DGROUP3                            MOUNTED       4882       4830
Figure 3a - Sample output showing the space available with the ASM diskgroup.


Compare the result received from this query with the output from 2a or 2b and ensure you have enough disk space for your duplicate database.

If you do not have enough disk space within your ASM instance, you must add it prior to continuing.

Keep in mind you may also be storing archive logs and/or the recovery area within the ASM instance so ample room must be available.

If you are duplicating from ASM to a filesystem, ensure you have enough disk space on the destination file system.
4. Making the backup available for the duplicate process.
If your backup resides on disk you will need to copy this backup from Host A to Host B. Ensure you place it in the same directory as where it was created. In the example below (figure 4a) the backup pieces reside in '/backups/ORA102'; these files need to be copied into the same directory on host B.

Furthermore, ensure that all archive log backups have also been moved, along with any archive logs in the archive log directory that may be required for the recovery. The archive logs required will depend on the point in time of the recovery.

RMAN> list backup;

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
22 Full 529M DISK 00:00:51 2006/05/16 11:12:54
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: TAG20060516T111203
Piece Name: /backups/PROD/df_t590584323_s23_p1
List of Datafiles in backup set 22
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- ------------------- ----
1 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/system01.dbf
2 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/undotbs01.dbf
3 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/sysaux01.dbf
4 Full 1393845 2006/05/16 11:12:03 /oradata/PROD/users01.dbf


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
24 48M DISK 00:00:06 2006/05/16 11:13:07
BP Key: 24 Status: AVAILABLE Compressed: NO Tag: TAG20060516T111301
Piece Name: /backups/PROD/al_t590584381_s25_p1

List of Archived Logs in backup set 24
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 78 1344750 2006/05/15 04:55:34 1353208 2006/05/15 10:00:19
1 79 1353208 2006/05/15 10:00:19 1353213 2006/05/15 10:00:20
1 80 1353213 2006/05/15 10:00:20 1371564 2006/05/15 22:00:11
1 81 1371564 2006/05/15 22:00:11 1373291 2006/05/15 22:00:59
1 82 1373291 2006/05/15 22:00:59 1381066 2006/05/16 03:00:05
1 83 1381066 2006/05/16 03:00:05 1390685 2006/05/16 09:03:00
1 84 1390685 2006/05/16 09:03:00 1393870 2006/05/16 11:13:00
1 85 1393870 2006/05/16 11:13:00 1393929 2006/05/16 11:13:00
     Figure 4a - A list backup showing the backup pieces that need to be copied across Host B
5. Creating the init.ora & administration directories for the duplicate database.
Host B (Aux)

Create the pfile [initAUX.ora] parameter file in the $ORACLE_HOME/dbs directory for the auxiliary database. An example is shown below (figure 5a) with bare minimum settings. Note that you may need to set additional parameters to match your production database (refer to your primary init.ora).

# +----------------------------------------+
# | FILE : initAUX.ora                     |
# | DATABASE NAME : AUX                    |
# +----------------------------------------+
# Set the below to location of the clone Duplicate database / name of
# clone database.

audit_file_dest =/apps/oracle/admin/AUX/adump
background_dump_dest =/apps/oracle/admin/AUX/bdump
core_dump_dest =/apps/oracle/admin/AUX/cdump
user_dump_dest =/apps/oracle/admin/AUX/udump
db_name ="AUX"
instance_name =AUX

# Set the below to the location of the duplicate clone control file.

control_files =+DGROUP1/control01.ctl

# Set the below for the from and to location for all data files / redo
# logs to be cloned. As you can see, the ASM disk group is specified.

db_file_name_convert =("/u01/AUX/ORA102", "+DGROUP1")
log_file_name_convert =("/u01/AUX/ORA102", "+DGROUP1")
#Set the below to the same as the production target

### NOTE IF YOU ARE DUPLICATING FROM ASM TO FILE SYSTEM YOUR CONVERT PARAMETER WOULD BE
### REVERSED.
## control_files =/u01/AUX/ORA102/control01.ctl
## db_file_name_convert =("+DGROUP1","/u01/AUX/ORA102")
## log_file_name_convert =("+DGROUP1","/u01/AUX/ORA102")

undo_management =AUTO
undo_retention =10800
undo_tablespace =UNDOTBS1
db_block_size = 8192
compatible = 10.2.0.1.0
     Figure 5a - Sample initAUX.ora with minimal settings

Following the creation of initAUX.ora, start the auxiliary instance in NOMOUNT mode.

[oracle@linux]export ORACLE_SID=AUX

[oracle@linux] sqlplus '/as sysdba'

SQLPLUS> startup nomount;
      Figure 5b - startup nomount the AUX instance.

6. Ensuring SQL*NET connections to primary database and RMAN catalog are working.
Host B(Target)

Ensure the production target database is open or at least mounted. If using a catalog database this must also be open. Test your SQL*NET connections: From Host B you must be able to connect to the primary DB as sysdba and make an rman connection to RMAN. Perform a basic test using sqlplus. When you have successfully connected exit sqlplus and move onto the next step.

[oracle@linux]% sqlplus 'sys/oracle@PROD as sysdba'

[oracle@linux]% sqlplus rman/rman@RMAN

Figure 6a - SQL*NET connections
7. Prepare RMAN duplicate script.
In a working directory on Host B create an RMAN script file cr_duplicate.rcv. The example below (figure 7a) shows the command for a partial duplicate.

RUN {
allocate auxiliary channel C1 device type disk;
duplicate target database to AUX;
}
     Figure 7a - Sample duplicate command.
8. Execute the RMAN script.
Start RMAN, connect to the production target, the catalog instance and also the auxiliary clone. Run the RMAN duplicate script as shown below. Before doing this ensure that the Oracle SID environment variable is set to the duplicate clone database.

export ORACLE_SID=AUX

%rman target sys/sys@PROD catalog rman/rman@rman auxiliary /

RMAN> @cr_duplicate.rcv

SAN Migration of the ASM diskgroups

posted Mar 8, 2011, 6:31 AM by Sachchida Ojha

If you're more comfortable with a GUI, all tasks here can be accomplished using Enterprise Manager.

1. Add new disks to ASM diskgroups.

ALTER DISKGROUP PMDW_DG1 ADD DISK
'/dev/rdsk/c4t60060E80056FB30000006FB300000823d0s6' NAME PMDW_DG1_0003,
'/dev/rdsk/c4t60060E80056FB30000006FB300000826d0s6' NAME PMDW_DG1_0004,
'/dev/rdsk/c4t60060E80056FB30000006FB300000829d0s6' NAME PMDW_DG1_0005
REBALANCE POWER 11;

We go with a rebalance power of 11, which is full throttle, because this is planned maintenance.

2. Check rebalance status from Enterprise Manager or v$ASM_OPERATION.
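For example, from the ASM instance (no rows returned means the rebalance has completed):

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;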

3. When rebalance completes, drop the old disks.

ALTER DISKGROUP PMDW_DG1 DROP DISK
PMDW_DG1_0000,
PMDW_DG1_0001,
PMDW_DG1_0002
REBALANCE POWER 11;

clone db (asm) to another server which is non asm

posted Mar 8, 2011, 6:27 AM by Sachchida Ojha

I have an Oracle database orcl1 on server dev1 which is running on ASM. I have to clone it to another server, dev2, which needs to be non-ASM. Can anybody tell me what options are available to do so?

Run the command below in SQL*Plus to generate an RMAN script and spool it to a file (see the sketch after the command).

select 'copy datafile '||file_id||' to '||'''/u01/oracle/dev2/'||substr(file_name,21,instr(file_name,'.')-21)||'_01.dbf'''||';' from dba_data_files order by 1;
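For illustration, a spooled run might look like the following. The spool path is hypothetical, and the substr offsets in the query assume a particular datafile naming convention, so adjust them for your paths:

SQL> spool /tmp/copy_df.rcv
SQL> -- run the select above; it produces lines such as:
copy datafile 1 to '/u01/oracle/dev2/system_01.dbf';
copy datafile 2 to '/u01/oracle/dev2/sysaux_01.dbf';
SQL> spool off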

Then, on dev1:

1. shutdown immediate;
2. startup mount;
3. rman target /
4. Run the spooled file to copy the datafiles.
5. Zip the copied files.
6. alter database backup controlfile to trace; (to create a control file script)
7. Transfer the files to dev2.

On dev2:

8. Create a pfile for the new instance.
9. startup nomount
10. Create the controlfile (edit the trace script for the new file locations).
11. Start up the database.

A sketch of the copy phase on dev1 follows.
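Putting the first steps together, the copy phase on dev1 might look like this (a sketch; the spool file name is hypothetical):

[oracle@dev1] export ORACLE_SID=orcl1
[oracle@dev1] sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> exit
[oracle@dev1] rman target /
RMAN> @/tmp/copy_df.rcv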
