Clone a Database Using Hot Backup

posted Mar 17, 2011, 8:47 AM by Sachchida Ojha   [ updated Mar 17, 2011, 10:31 AM ]
This document gives a detailed walkthrough of how to back up the database in a 24x7 operation and use that backup to clone a development or test database. Switching the log files before and after the copy reduces the number of archive logs to be applied, and hence the recovery time, which is the goal.

Most sites running Oracle databases need the latest production database cloned for the development and test teams to work on, and these clones need to be refreshed periodically from the production database and application. Most DBAs use cold backup procedures to clone a database. Although cold backups are the most common procedure for cloning, there may be situations where no downtime is available. Hot backups can be performed while the database is up, provided it is in archive log mode.

Definitions and Assumptions

Source Database: The active production instance/database to be cloned.
Target Database: The cloned database, typically a test/development database.

Cloning steps

Switching Logs


We need to make sure that all the present online logs are switched and archived, so that the time needed for media recovery on the cloned database is as short as possible. Query the number of log groups that exist in the database and switch as many times as there are log groups.

SQL> select group#, archived, status from v$log;

    GROUP# ARC STATUS
---------- --- ----------------
         1 NO  CURRENT
         2 YES UNUSED
         3 YES UNUSED
         4 YES UNUSED
         5 YES INACTIVE

SQL>

As we can see from the example above, we need to switch logs to archive the current online log.

alter system switch logfile;
System altered.

Make sure that no log group shows a “STALE” status.

Create the destination directory structures where you would like to place the data files and redo log files. Also create the dump directories to hold the target database dumps (udump, bdump, cdump, adump, etc.). Note down the last archive log file, either from the archive log destination directory or from the alert log, and then do another:

Alter system switch logfile;

This tells us the current log that got archived by the switch, and a checkpoint also occurs. Remember that a log switch causes a checkpoint, but a checkpoint does not necessarily cause a log switch. The checkpoint stamps the current SCN into the data file headers, so the data files become consistent. We do the switch as many times as there are log groups. That means the first archived log file may be your full file size, while the subsequent ones will be smaller depending on the data.
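
Rather than inspecting the directory by hand, you can also query v$archived_log for the most recent archive; a minimal sketch using standard data dictionary columns:

select name, sequence#, completion_time
from v$archived_log
where completion_time = (select max(completion_time) from v$archived_log);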

Copy source database files to the target

The next step is to copy all the data files of the source database to the destination database directories.

Before starting to copy the files of a particular tablespace, we need to put that tablespace into hot backup mode.

By putting a tablespace into hot backup mode, we will ensure two things.

1) The first time a block is changed in a data file that is in hot backup mode, the entire block is written to the redo log files, not just the changed bytes. Normally only the changed bytes (a redo vector) are written. This is because the process copying the data file and DBWR can be working on the same block simultaneously, which could leave a fractured block in the copy; the full block image in the redo allows it to be repaired during recovery.

2) The data file headers, which contain the SCN of the last completed checkpoint, are not updated while a file is in hot backup mode. This lets the recovery process determine which archived redo log files might be needed to fully recover the file.

To limit the effect of this additional logging, place only one tablespace at a time in backup mode and take the tablespace out of backup mode as soon as you have copied its files. This reduces the number of blocks that have to be fully logged to the minimum possible.
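
At any point you can check which files are still in backup mode by joining v$backup to v$datafile; STATUS shows ACTIVE while a file's tablespace is in backup mode:

select d.name, b.status
from v$backup b, v$datafile d
where b.file# = d.file#;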

List out all the tablespaces present in the system and their corresponding data files.

SQL> select tablespace_name, status, contents from dba_tablespaces;

TABLESPACE_NAME                STATUS    CONTENTS
------------------------------ --------- ---------
SYSTEM                         ONLINE    PERMANENT
UNDOTBS1                       ONLINE    UNDO
TEMP                           ONLINE    TEMPORARY
DRSYS                          ONLINE    PERMANENT
TOOLS                          ONLINE    PERMANENT
USERS                          ONLINE    PERMANENT
XDB                            ONLINE    PERMANENT
ATA_DATA                       ONLINE    PERMANENT
ATA_DATA_CHARGE                ONLINE    PERMANENT
ATA_DATA_NE                    ONLINE    PERMANENT
ATA_IC_LOB                     ONLINE    PERMANENT
ATA_INDEX                      ONLINE    PERMANENT
ATA_INDEX_CHARGE               ONLINE    PERMANENT
ATA_INDEX_NE                   ONLINE    PERMANENT
ATA_TMP                        ONLINE    TEMPORARY
PERFSTAT                       ONLINE    PERMANENT
TPSYSADM_DATA                  ONLINE    PERMANENT
TPSYSADM_INDEX                 ONLINE    PERMANENT
FORECASTING                    ONLINE    PERMANENT
ARCHIVED_CHARGE_DATA           ONLINE    PERMANENT
ARCHIVED_CHARGE_IDX            ONLINE    PERMANENT
ARCHIVED_NE_DATA               ONLINE    PERMANENT
ARCHIVED_NE_IDX                ONLINE    PERMANENT

23 rows selected.

SQL> select tablespace_name, file_name from dba_data_files order by tablespace_name;

TABLESPACE_NAME                  FILE_NAME
-------------------------------- ----------------------------------------------------------
ARCHIVED_CHARGE_DATA             /u08/SVPRD1B/oradata/archived_charge_data.dbf
ARCHIVED_CHARGE_DATA             /u08/SVPRD1B/oradata/archived_charge_data02.dbf
ARCHIVED_CHARGE_IDX              /u09/SVPRD1B/oradata/archived_charge_idx01.dbf
ARCHIVED_NE_DATA                 /u08/SVPRD1B/oradata/archived_ne_data02.dbf
ARCHIVED_NE_DATA                 /u08/SVPRD1B/oradata/archived_ne_data03.dbf
ARCHIVED_NE_DATA                 /u08/SVPRD1B/oradata/archived_ne_data01.dbf
ARCHIVED_NE_IDX                  /u09/SVPRD1B/oradata/archived_ne_idx01.dbf
ATA_DATA                         /u08/SVPRD1B/oradata/ata_data01.dbf
ATA_DATA_CHARGE                  /u08/SVPRD1B/oradata/ata_data_charge01.dbf
ATA_DATA_NE                      /u08/SVPRD1B/oradata/ata_data_ne01.dbf
ATA_IC_LOB                       /u08/SVPRD1B/oradata/ata_ic_lob01.dbf
ATA_INDEX                        /u09/SVPRD1B/oradata/ata_index01.dbf
ATA_INDEX_CHARGE                 /u09/SVPRD1B/oradata/ata_index_charge01.dbf
ATA_INDEX_NE                     /u09/SVPRD1B/oradata/ata_index_ne01.dbf
DRSYS                            /u02/SVPRD1B/oradata/drsys01.dbf
FORECASTING                      /u08/SVPRD1B/oradata/forecasting.dbf
PERFSTAT                         /u04/SVPRD1B/oradata/perfstat01.dbf
SYSTEM                           /u02/SVPRD1B/oradata/system01.dbf
TOOLS                            /u08/SVPRD1B/oradata/tools01.dbf
TPSYSADM_DATA                    /u08/SVPRD1B/oradata/tpsysadm_data01.dbf
TPSYSADM_INDEX                   /u09/SVPRD1B/oradata/tpsysadm_index01.dbf
UNDOTBS1                         /u04/SVPRD1B/oradata/undotbs01.dbf
USERS                            /u08/SVPRD1B/oradata/users01.dbf
XDB                              /u03/SVPRD1B/oradata/xdb01.dbf

Now put each tablespace into hot backup mode, copy its corresponding data files to the destination directory, and then take the tablespace out of hot backup mode, one tablespace at a time:
Alter tablespace ARCHIVED_CHARGE_DATA begin backup;
!cp /u08/SVPRD1B/oradata/archived_charge_data.dbf  /mnt/u08/SVPRD1B/oradata/archived_charge_data.dbf
Alter tablespace ARCHIVED_CHARGE_DATA end backup;

It is better to do the above exercise using a script, so that manual intervention, and hence the chance of error, is minimized. Following is a script that you can use to generate a hot backup copy script from any database.

================================================================
set serveroutput on
set heading off
set feedback off
spool backup.sql
declare
  fname  varchar2(513);  -- dba_data_files.file_name can be up to 513 bytes
  tname  varchar2(30);
  tname1 varchar2(30);
  cursor tspaces is
    select tablespace_name, file_name
      from v$datafile, sys.dba_data_files
     where enabled like '%WRITE%'
       and file# = file_id
     order by 1;
begin
  dbms_output.enable(32000);
  dbms_output.put_line('spool hotback');
  open tspaces;
  fetch tspaces into tname, fname;
  tname1 := tname;
  dbms_output.put_line('alter tablespace '||tname||' begin backup;');
  while tspaces%FOUND loop
    if tname1 != tname then
      dbms_output.put_line('alter tablespace '||tname1||' end backup;');
      dbms_output.put_line('alter tablespace '||tname||' begin backup;');
      tname1 := tname;
    end if;
    dbms_output.put_line('!cp '||fname||' /mnt/u08/SVPRD1B/oradata/');
    -- Change the line above to use your OS-specific copy command.
    fetch tspaces into tname, fname;
  end loop;
  dbms_output.put_line('alter tablespace '||tname1||' end backup;');
  close tspaces;
  dbms_output.put_line('spool off');
end;
/
spool off
set heading on
set feedback on
set serveroutput off

================================================================

Switch logs again


Once the copy is complete, switch the log groups the required number of times again. This way we only need to copy over the archive files from the one we noted down before starting the backup to the one produced by the log switch after the backup. These will be few, so recovery will be faster when we apply them, and we will have a well-defined set of archive files whose consistency we can be sure of.
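
If there are many log groups, a minimal PL/SQL sketch that switches once per redo log group (run as SYSDBA):

begin
  for i in (select group# from v$log) loop
    execute immediate 'alter system switch logfile';
  end loop;
end;
/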

Back Up the Control File

Now we need a control file creation script for the target database. The best way to get one is to back up the source control file to trace with the following command:

alter database backup controlfile to trace;

Database altered.
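
The trace file is written under the directory pointed to by user_dump_dest. Depending on your release, you can also name the output file directly; the path below is illustrative:

alter database backup controlfile to trace as '/tmp/create_controlfile.sql';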

Copy the appropriate archive logs and online redo logs to the target location.

Prepare Initialization Parameter File

Make appropriate changes to the initialization parameter file of the target database to reflect the new file locations and the new database name.
Create the appropriate target dump locations (adump, bdump, udump, etc.).
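
A minimal sketch of the entries that typically need to change (all names and paths here are illustrative assumptions):

db_name              = DB2
control_files        = ('C:\DB2\CTL\control01.ctl', 'C:\DB2\CTL\control02.ctl')
background_dump_dest = C:\DB2\admin\bdump
user_dump_dest       = C:\DB2\admin\udump
core_dump_dest       = C:\DB2\admin\cdump
log_archive_dest_1   = 'location=C:\DB2\arch'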

Creating the Service and Control Files on the Target

Set ORACLE_SID to the target database.

If the target host is Windows, you will need to create the database service with the oradim command. Use the following syntax to create the service:

oradim -new -sid <target db name> -intpwd <sys password> -startmode manual

Start a SQL*Plus session as SYSDBA and start the instance in NOMOUNT mode with the target init file.
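
For example, on Windows (names and paths are illustrative):

C:\> set ORACLE_SID=DB2
C:\> sqlplus / as sysdba
SQL> startup nomount pfile='C:\DB2\initDB2.ora';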

Open the control file trace backup. Towards the end, you'll find the control file creation statement with the RESETLOGS option. Following is an example:

CREATE CONTROLFILE reuse DATABASE "DB1" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 2920
LOGFILE
GROUP 1 'C:\FLIRT\REDO1\REDO01.LOG' SIZE 5M,
GROUP 2 'C:\FLIRT\REDO2\REDO02.LOG' SIZE 5M
-- STANDBY LOGFILE
DATAFILE
'C:\FLIRT\DBFILES\SYSTEM01.DBF',
'C:\FLIRT\DBFILES\UNDOTBS01.DBF',
'C:\FLIRT\DBFILES\SYSAUX01.DBF',
'C:\FLIRT\DBFILES\USERS01.DBF',
'C:\FLIRT\DBFILES\FLIRT01.DBF',
'C:\FLIRT\DBFILES\DGOD01.DBF'
CHARACTER SET WE8MSWIN1252
;

Change the first line of the statement to look like this:

CREATE CONTROLFILE set DATABASE "DB2" RESETLOGS ARCHIVELOG ……..

Also change the file locations mentioned in the statement to reflect the target file locations. After all the changes, the statement should look like the following:
CREATE CONTROLFILE set DATABASE "DB2" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 2920
LOGFILE
GROUP 1 'C:\DB2\REDO1\REDO01.LOG' SIZE 5M,
GROUP 2 'C:\DB2\REDO2\REDO02.LOG' SIZE 5M
-- STANDBY LOGFILE
DATAFILE
'C:\DB2\DBFILES\SYSTEM01.DBF',
'C:\DB2\DBFILES\UNDOTBS01.DBF',
'C:\DB2\DBFILES\SYSAUX01.DBF',
'C:\DB2\DBFILES\USERS01.DBF',
'C:\DB2\DBFILES\FLIRT01.DBF',
'C:\DB2\DBFILES\DGOD01.DBF'
CHARACTER SET WE8MSWIN1252
;
Go to the SQL*Plus session and execute the above (changed) control file creation statement to create the control files. The control files should be created without any errors.

Apply Media Recovery


Now you need to apply media recovery to make the database consistent. Execute the following statement:

RECOVER DATABASE USING BACKUP CONTROLFILE;

You may need to apply the online redo logs to complete the media recovery.
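
When recovery prompts for a log that was never archived, you can supply the copied online redo log file instead; a sketch of the interaction (the path is illustrative):

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE;
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
C:\DB2\REDO1\REDO01.LOG
Media recovery complete.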
After media recovery is complete, open the database with RESETLOGS:

Alter database open resetlogs;

Add tempfiles to the temporary tablespaces:

ALTER TABLESPACE TEMP ADD TEMPFILE
'C:\DB2\DBFILES\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND OFF;
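
As a final sanity check, you can confirm the database name, log mode, and open mode with a simple query against v$database:

select name, log_mode, open_mode from v$database;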

Cloning is complete.




