Oracle 11g New Features


Data Pump Quick Reference

posted Aug 5, 2011, 6:32 AM by Sachchida Ojha

Data Pump Components:

  • Direct Path API (DPAPI): Oracle Database 10g supports a direct path API interface that minimizes data conversion and parsing at both unload and load time.
  • External Table Services: Data Pump uses the new ORACLE_DATAPUMP access driver that provides external tables write and read access to files containing binary streams.
  • The DBMS_METADATA package is used by worker processes for all metadata unloading and loading. Database object definitions are stored using XML rather than SQL.
  • The DBMS_DATAPUMP package embodies the API for high-speed export and import utilities for bulk data and metadata movement.
  • The SQL*Loader client has been integrated with external tables, thereby providing automatic migration of loader control files to external table access parameters.
  • The expdp and impdp clients are thin layers that make calls to the DBMS_DATAPUMP package to initiate and monitor Data Pump operations.

Master Table

  • During the operation, a master table is maintained in the schema of the user who initiated the Data Pump export. The master table has the same name as the Data Pump job and maintains one row per object with status information. It is the heart of every Data Pump operation: it holds all the information about the job, and in the event of a failure Data Pump uses it to restart the failed or suspended job. The master table is dropped (by default) when the Data Pump job finishes successfully.
  • The master table is written to the dump file set as the last step of the export dump operation and is removed from the user’s schema. For the import dump operation, the master table is loaded from the dump file set to the user’s schema as the first step and is used to sequence the objects being imported.
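
If a job is stopped or fails, its master table is left behind so the job can be restarted. A quick way to spot such leftover master tables is to query DBA_DATAPUMP_JOBS for jobs that are not running; this is only a minimal sketch:

SQL> SELECT owner_name, job_name, state
     FROM dba_datapump_jobs
     WHERE state = 'NOT RUNNING';

A master table listed here that you no longer intend to restart can be dropped like any other table to clean it up.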

Data Pump Processes

All Data Pump work is done through jobs. Data Pump jobs, unlike DBMS jobs, are merely server processes that process the data on behalf of the main process. The main process, known as the master control process, coordinates this effort via Advanced Queuing; it does so through a special table created at runtime known as the master table.

  • Client process: This process is initiated by the client utility (expdp, impdp, or another client) that makes calls to the Data Pump API. Because Data Pump is completely integrated into the database, once the Data Pump job is initiated this process is not necessary for the progress of the job.
  • Shadow process: When a client logs into the Oracle database, a foreground process is created (a standard feature of Oracle). This shadow process services the client's Data Pump API requests. It creates the master table and the Advanced Queuing (AQ) queues used for communication. Once the client process ends, the shadow process also goes away.
  • Master control process (MCP): The MCP controls the execution of the Data Pump job; there is one MCP per job. It divides the Data Pump job into various metadata and data load or unload tasks and hands them over to the worker processes. The MCP has a process name of the format ORACLE_SID_DMnn_PROCESS_ID. It maintains the job state, job description, restart, and file information in the master table.
  • Worker process: The MCP creates the worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP, mainly loading or unloading data and metadata. The worker processes have the format ORACLE_SID_DWnn_PROCESS_ID. They maintain their current status in the master table, which can be used to restart a failed job.
  • Parallel Query (PQ) processes: The worker processes can initiate parallel query processes if external tables are used as the data access method for loading or unloading. These are standard parallel query slaves of the parallel execution architecture.
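
To see these processes from inside the database, the attached sessions can be listed from V$SESSION; the MODULE column identifies the master and worker sessions (a simple sketch, not the only way to do this):

SQL> SELECT sid, serial#, username, module
     FROM v$session
     WHERE module IN ('Data Pump Master', 'Data Pump Worker');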

 Data Pump Benefits

  • Data access methods:
    – Direct path
    – External tables
  • Detachment from and reattachment to long-running jobs
  • Restarting of Data Pump jobs
  • Fine-grained object and data selection
  • Explicit database version specification
  • Parallel execution
  • Estimation of export job space consumption
  • Network mode in a distributed environment: Data Pump operations can be performed from one database to another over the network, without writing to a dump file.
  • Remapping capabilities during import
  • Data sampling and metadata compression
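
Two of these benefits can be illustrated with one-line commands. The examples below are only sketches: ESTIMATE_ONLY=y reports the estimated export size without writing a dump file, and NETWORK_LINK pulls data straight from a remote database over a database link (remote_db is a hypothetical link name):

expdp scott/tiger DIRECTORY=dpdata SCHEMAS=scott ESTIMATE_ONLY=y LOGFILE=est.log

impdp scott/tiger DIRECTORY=dpdata SCHEMAS=scott NETWORK_LINK=remote_db LOGFILE=net_imp.log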

 Create Data Pump Directory

  • Since the Data Pump is server based, directory objects must be created in the database where the
    Data Pump files will be stored.
  • The user executing Data Pump must have been granted permissions on the directory. READ permission is required to perform import, and WRITE permission is required to perform export and to create log files or SQL files.
  • The operating system user that owns the Oracle software installation and database files must have READ and WRITE operating system privileges on the directory. BUT the database user does NOT need any operating system privileges on the directory for Data Pump to succeed.

SQL> SELECT * FROM dba_directories;

SQL> CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp';
-- Set up default directory data_pump_dir (The name of the default directory must be DATA_PUMP_DIR)

SQL> CREATE OR REPLACE DIRECTORY dpdata AS '';
-- Supply the server-side path for the dpdata directory between the quotes.

SQL> GRANT READ, WRITE ON DIRECTORY dpdata TO scott;
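
To confirm that the grant is in place, directory object privileges are recorded in DBA_TAB_PRIVS; a quick check might look like this (a sketch only):

SQL> SELECT grantee, privilege
     FROM dba_tab_privs
     WHERE table_name = 'DPDATA';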

Export Modes

1. Full

expdp \'/ as sysdba\'  DIRECTORY=dpdata DUMPFILE=exp%U.dmp FULL=y LOGFILE=exp.log JOB_NAME=expdp PARALLEL=30

2. User (Owner)   
expdp \'/ as sysdba\'  DIRECTORY=dpdata DUMPFILE=scott.dmp SCHEMAS=scott LOGFILE=exp.log JOB_NAME=expdp


3. Table   
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp TABLES=scott.emp,blake.dept LOGFILE=exp.log JOB_NAME=expdp


4. Tablespace
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp TABLESPACES=users,tools TRANSPORT_FULL_CHECK=y LOGFILE=exp.log JOB_NAME=expdp


5. Transportable Tablespace Export

ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE example READ ONLY;

expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp_tbs.dmp
TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=exp_tbs.log

ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;


6. Export metadata

expdp \'/ as sysdba\' SCHEMAS=scott DIRECTORY=dpdata DUMPFILE=meta.dmp CONTENT=metadata_only


 

Import Modes


1. Full

impdp  \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp FULL=y LOGFILE=imp.log JOB_NAME=impdp
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp%U.dmp FULL=y LOGFILE=imp.log JOB_NAME=DMFV_impdp PARALLEL=30


2. User (Owner)   
impdp  \'/ as sysdba\'  DIRECTORY=dpdata DUMPFILE=scott.dmp SCHEMAS=scott LOGFILE=imp.log JOB_NAME=impdp

3. Table 

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp TABLES=scott.emp,blake.dept LOGFILE=imp.log JOB_NAME=impdp

4. Tablespace
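
A minimal sketch for a tablespace-mode import, reusing the dump file produced by the tablespace export above:

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp TABLESPACES=users,tools LOGFILE=imp.log JOB_NAME=impdp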

5. Transportable Tablespace
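
For a transportable tablespace import, the datafiles must first be copied to the target system; a sketch (the datafile path is only an example) might be:

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp_tbs.dmp TRANSPORT_DATAFILES='/u01/oradata/users01.dbf' LOGFILE=imp_tbs.log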

 

Other Features

  • REMAP_DATAFILE

  • REMAP_SCHEMA

  • REMAP_TABLESPACE

  • TRANSFORM

  • NETWORK_LINK
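
A sketch combining several of these parameters on one import (the target names are examples only): REMAP_SCHEMA moves objects to a different owner, REMAP_TABLESPACE redirects segments to another tablespace, and TRANSFORM=SEGMENT_ATTRIBUTES:N drops the original storage clauses from the generated DDL.

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_dev REMAP_TABLESPACE=users:users2 TRANSFORM=segment_attributes:n LOGFILE=imp_remap.log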

    Data Pump Monitoring

    1. Script:

    @dpstatus.sql

  dpstatus.sql:

    TTITLE 'Currently Active DataPump Operations'
    COL owner_name          FORMAT A06      HEADING 'Owner'
    COL job_name            FORMAT A20      HEADING 'JobName'
    COL operation           FORMAT A12      HEADING 'Operation'
    COL job_mode            FORMAT A12      HEADING 'JobMode'
    COL state               FORMAT A12      HEADING 'State'
    COL degree              FORMAT 9999     HEADING 'Degr'
    COL attached_sessions   FORMAT 9999     HEADING 'Sess'

    SELECT
         owner_name
        ,job_name
        ,operation
        ,job_mode
        ,state
        ,degree
        ,attached_sessions
      FROM dba_datapump_jobs
    ;

    TTITLE 'Currently Active DataPump Sessions'
    COL owner_name          FORMAT A06      HEADING 'Owner'
    COL job_name            FORMAT A20      HEADING 'Job'
    COL osuser              FORMAT A12      HEADING 'UserID'

    SELECT
         DPS.owner_name
        ,DPS.job_name
        ,S.osuser
        ,S.sid
        ,S.serial#
        ,S.status
      FROM
         dba_datapump_sessions DPS
        ,v$session S
     WHERE S.saddr = DPS.saddr
    ;

    2. DBA_DATAPUMP_JOBS

    Shows how many worker processes (column DEGREE) are working on the job.

    3. DBA_DATAPUMP_SESSIONS

    When joined with the previous view and V$SESSION, it gives the SID of the session of the main foreground process.

    SQL> select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;

    4. Alert log

    When the process starts up, the MCP and the worker processes are shown in the alert log as follows:
    kupprdp: master process DM00 started with pid=23, OS id=20530 to execute -   SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');
    kupprdp: worker process DW01 started with worker id=1, pid=24, OS id=20532 to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');

    kupprdp: worker process DW03 started with worker id=2, pid=25, OS id=20534 to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');

    5. V$SESSION_LONGOPS

    Helps predict the time it will take to complete the job.

    select sid, serial#, sofar, totalwork from v$session_longops where sofar != totalwork;

    Controlling Data Pump Jobs

    1. Switching Between Logging and Interactive Client Mode:

    expdp system/oracle parfile=c:\rmancmd\longexport.dpectl

    1.1 switch this job to interactive client mode by typing CTRL-C
    1.2 switch from interactive client mode back to logging mode
    Export> CONTINUE_CLIENT

    2. Viewing Job Status:

    From interactive mode

    Export> status

    3. Closing and Reattaching To A Job

    3.1 detach from a client session, but leave a running job still executing

    Export> exit_client


    3.2 Reattach to a running job after closing the client connection by reopening a session:
    $ expdp system/oracle attach=longexport

    4. Halt the execution of the job:

    Export> stop_job

    restart the job:
    Export> START_JOB

    5. End the job

    Export> kill_job
    Monitoring Data Pump

    posted Aug 5, 2011, 6:09 AM by Sachchida Ojha   [ updated Nov 14, 2014, 3:01 PM ]


    A simple way to gain insight into the status of a Data Pump job is to look into a few views maintained within the Oracle instance where the Data Pump job is running. These views are DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, and V$SESSION_LONGOPS. These views are critical in the monitoring of your export jobs so that, as we will see in a later article, you can attach to a Data Pump job and modify its execution.

    DBA_DATAPUMP_JOBS
    This view will show the active Data Pump jobs, their state, degree of parallelism, and the number of sessions attached.

    SQL> select * from dba_datapump_jobs
    
    OWNER_NAME JOB_NAME               OPERATION  JOB_MODE   STATE         DEGREE    ATTACHED_SESSIONS
    ---------- ---------------------- ---------- ---------- ------------- --------- -----------------
    JKOOP      SYS_EXPORT_FULL_01     EXPORT     FULL       EXECUTING     1          1
    JKOOP      SYS_EXPORT_SCHEMA_01   EXPORT     SCHEMA     EXECUTING     1          1

    DBA_DATAPUMP_SESSIONS
    This view gives the SADDR that assists in determining why a Data Pump session may be having problems. Join it to the V$SESSION view for further information.

    SQL> SELECT * FROM DBA_DATAPUMP_SESSIONS
    
    OWNER_NAME JOB_NAME                       SADDR
    ---------- ------------------------------ --------
    JKOOPMANN  SYS_EXPORT_FULL_01             225BDEDC
    JKOOPMANN  SYS_EXPORT_SCHEMA_01           225B2B7C

    V$SESSION_LONGOPS
    This view helps determine how well a Data Pump export is doing; it basically gives you a progress indicator through the MESSAGE column.

    SQL> select username,opname,target_desc,sofar,totalwork,message from V$SESSION_LONGOPS
    
    USERNAME OPNAME               TARGET_DES SOFAR TOTALWORK  MESSAGE
    -------- -------------------- ---------- ----- ---------- ------------------------------------------------
    JKOOP    SYS_EXPORT_FULL_01   EXPORT       132        132 SYS_EXPORT_FULL_01:EXPORT:132 out of 132 MB done
    JKOOP    SYS_EXPORT_FULL_01   EXPORT        90        132 SYS_EXPORT_FULL_01:EXPORT:90 out of 132 MB done
    JKOOP    SYS_EXPORT_SCHEMA_01 EXPORT        17         17 SYS_EXPORT_SCHEMA_01:EXPORT:17 out of 17 MB done
    JKOOP    SYS_EXPORT_SCHEMA_01 EXPORT        19         19 SYS_EXPORT_SCHEMA_01:EXPORT:19 out of 19 MB done

    SQL> select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
    from gv$session_longops sl, dba_datapump_jobs dp
    where sl.opname = dp.job_name and sofar != totalwork;

    SID   SERIAL#  SOFAR  TOTALWORK  OWNER_NAME  STATE      JOB_MODE
    ----- -------- ------ ---------- ----------- ---------- ----------
    122   64151    1703   2574       SYSTEM      EXECUTING  FULL

    You can monitor an Oracle import in several ways:

    Monitor at the OS - Do a "ps -ef" on the Data Pump processes and watch them consume CPU. You can also monitor the Data Pump log file with the "tail -f" command, watching the progress of the import in real time. (If you watch an original imp log, include the feedback=1000 parameter so that import displays a dot every 1,000 rows; Data Pump Import reports progress at intervals set by the STATUS parameter instead.)
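
    For example (a rough sketch; the background process name pattern and log file path are assumptions that depend on platform and job):

    $ ps -ef | egrep 'ora_dm[0-9]|ora_dw[0-9]'
    $ tail -f /u01/app/oracle/dpdump/imp.log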

    Monitor with the data pump views - The main views for monitoring import jobs are dba_datapump_jobs and dba_datapump_sessions.

    Monitor with longops - You can query v$session_longops to see the progress of Data Pump by checking the sofar and totalwork columns.

    select    sid,    serial#
    from    v$session s,    dba_datapump_sessions d
    where     s.saddr = d.saddr;

    select    sid,    serial#,    sofar,    totalwork
    from    v$session_longops;

    select x.job_name,b.state,b.job_mode,b.degree
    , x.owner_name,z.sql_text, p.message
    , p.totalwork, p.sofar
    , round((p.sofar/p.totalwork)*100,2) done
    , p.time_remaining
    from dba_datapump_jobs b
    left join dba_datapump_sessions x on (x.job_name = b.job_name)
    left join v$session y on (y.saddr = x.saddr)
    left join v$sql z on (y.sql_id = z.sql_id)
    left join v$session_longops p ON (p.sql_id = y.sql_id)
    WHERE y.module='Data Pump Worker'
    AND p.time_remaining > 0;


    The following are the major new features that provide increased performance, as well as enhanced ease of use:

    • The ability to specify the maximum number of threads of active execution operating on behalf of the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See PARALLEL for information about using this parameter in export. See PARALLEL for information about using this parameter in import. (This feature is available only in the Enterprise Edition of Oracle Database 10g.)

    • The ability to restart Data Pump jobs. See START_JOB for information about restarting export jobs. See START_JOB for information about restarting import jobs.

    • The ability to detach from and reattach to long-running jobs without affecting the job itself. This allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data Pump Export and Import utilities can be attached to only one job at a time; however, you can have multiple clients or jobs running at one time. (If you are using the Data Pump API, the restriction on attaching to only one job at a time does not apply.) You can also have multiple clients attached to the same job. See ATTACH for information about using this parameter in export. See ATTACH for information about using this parameter in import.

    • Support for export and import operations over the network, in which the source of each operation is a remote instance. See NETWORK_LINK for information about using this parameter in export. See NETWORK_LINK for information about using this parameter in import.

    • The ability, in an import job, to change the name of the source datafile to a different name in all DDL statements where the source datafile is referenced. See REMAP_DATAFILE.

    • Enhanced support for remapping tablespaces during an import operation. See REMAP_TABLESPACE.

    • Support for filtering the metadata that is exported and imported, based upon objects and object types. For information about filtering metadata during an export operation, see INCLUDE and EXCLUDE. For information about filtering metadata during an import operation, see INCLUDE and EXCLUDE.

    • Support for an interactive-command mode that allows monitoring of and interaction with ongoing jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available in Import's Interactive-Command Mode.

    • The ability to estimate how much space an export job would consume, without actually performing the export. See ESTIMATE_ONLY.

    • The ability to specify the version of database objects to be moved. In export jobs, VERSION applies to the version of the database objects to be exported. See VERSION for more information about using this parameter in export.

      In import jobs, VERSION applies only to operations over the network. This means that VERSION applies to the version of database objects to be extracted from the source database. See VERSION for more information about using this parameter in import.

    • Most Data Pump export and import operations occur on the Oracle database server. (This contrasts with original export and import, which were primarily client-based.) See Default Locations for Dump, Log, and SQL Files for information about some of the implications of server-based operations.


    The DBA_DATAPUMP_JOBS and USER_DATAPUMP_JOBS Views

    The DBA_DATAPUMP_JOBS and USER_DATAPUMP_JOBS views identify all active Data Pump jobs, regardless of their state, on an instance (or on all instances for Real Application Clusters). They also show all Data Pump master tables not currently associated with an active job. You can use the job information to attach to an active job. Once you are attached to the job, you can stop it, change its parallelism, or monitor its progress. You can use the master table information to restart a stopped job or to remove any master tables that are no longer needed.

    Table 1-1 describes the columns in the DBA_DATAPUMP_JOBS view and the USER_DATAPUMP_JOBS view.

    Table 1-1 DBA_DATAPUMP_JOBS View and USER_DATAPUMP_JOBS View

    Column              Datatype        Description
    ------------------  --------------  --------------------------------------------------------------
    OWNER_NAME          VARCHAR2(30)    User who initiated the job (valid only for DBA_DATAPUMP_JOBS)
    JOB_NAME            VARCHAR2(30)    User-supplied name for the job (or the default name generated by the server)
    OPERATION           VARCHAR2(30)    Type of job
    JOB_MODE            VARCHAR2(30)    Mode of job
    STATE               VARCHAR2(30)    State of the job
    DEGREE              NUMBER          Number of worker processes performing the operation
    ATTACHED_SESSIONS   NUMBER          Number of sessions attached to the job


    Note:

    The information returned is obtained from dynamic performance views associated with the executing jobs and from the database schema information concerning the master tables. A query on these views can return multiple rows for a single Data Pump job (same owner and job name) if the query is executed while the job is transitioning between an Executing state and the Not Running state.

    The DBA_DATAPUMP_SESSIONS View

    The DBA_DATAPUMP_SESSIONS view identifies the user sessions that are attached to a job. The information in this view is useful for determining why a stopped operation has not gone away.

    Table 1-2 describes the columns in the DBA_DATAPUMP_SESSIONS view.

    Table 1-2 The DBA_DATAPUMP_SESSIONS View

    Column       Datatype                            Description
    -----------  ----------------------------------  --------------------------------------------------------------
    OWNER_NAME   VARCHAR2(30)                        User who initiated the job.
    JOB_NAME     VARCHAR2(30)                        User-supplied name for the job (or the default name generated by the server).
    SADDR        RAW(4) (RAW(8) on 64-bit systems)   Address of the session attached to the job. Can be used with the V$SESSION view.

    Monitoring the Progress of Executing Jobs

    Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.


    Note:

    The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.

    The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:

    • USERNAME - job owner

    • OPNAME - job name

    • TARGET_DESC - job operation

    • SOFAR - megabytes (MB) transferred thus far during the job

    • TOTALWORK - estimated number of megabytes (MB) in the job

    • UNITS - 'MB'

    • MESSAGE - a formatted status message of the form:

      '<job_name>: <operation_name> : nnn out of mmm MB done'
      
    /*
    || Usage Notes:
    || This script is provided to demonstrate various features of Oracle 10g's
    || new DataPump and should be carefully proofread before executing it against
    || any existing Oracle database to ensure that no potential damage can occur.
    || */
-----
-- Listing 1.1: Setting up a DIRECTORY object for DataPump use.
-- Note that the directory folder need not exist for this command
-- to succeed, but any subsequent attempt to utilize the DIRECTORY
-- object will fail until the folder is created on the server.
-- This should be run from SYSTEM for best results
-----
DROP DIRECTORY export_dir;
CREATE DIRECTORY export_dir as 'c:\oracle\export_dir';
GRANT READ, WRITE ON DIRECTORY export_dir TO hr, sh;

-----
-- Listing 1.2: Determining what object types can be exported/imported
-- and filtering levels available
-----
COL object_path FORMAT A25 HEADING 'Object Path Name'
COL comments    FORMAT A50 HEADING 'Object Description'
COL named       FORMAT A3  HEADING 'Nmd|Objs'

TTITLE 'Database-Level Exportable Objects'
SELECT object_path
      ,named
      ,comments
  FROM database_export_objects;

TTITLE 'Schema-Level Exportable Objects'
SELECT object_path
      ,named
      ,comments
  FROM schema_export_objects;

TTITLE 'Table-Level Exportable Objects'
SELECT object_path
      ,named
      ,comments
  FROM table_export_objects;

-----
-- Listing 1.3: A simple DataPump Export operation. Note that if the export
-- dump file already exists when this is executed, Oracle will
-- return an ORA-39000 error and terminate the operation
-----
EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp LOGFILE=export_dir:hr_schema.explog

>> DataPump Export command issued:
SET ORACLE_SID=zdcdb
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl

>> DataPump Export parameters file (dpe_1.expctl):
DIRECTORY=export_dir
SCHEMAS=HR,OE
JOB_NAME=hr_oe_schema
DUMPFILE=export_dir:hr_oe_schemas.dmp
LOGFILE=export_dir:hr_oe_schemas.explog

>> Results of Export Operation:
Export: Release 10.1.0.2.0 - Production on Thursday, 10 March, 2005 17:52
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."HR_OE_SCHEMA": system/******** parfile=c:\rmancmd\dpe_1.expctl
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.562 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/SE_TBL_FBM_INDEX_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/SE_TBL_FBM_IND_STATS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/SE_POST_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
. . exported "HR"."LOBSEP" 9.195 KB 1 rows
. . exported "HR"."BIGATFIRST" 277.7 KB 10000 rows
. . exported "HR"."APPLICANTS" 10.46 KB 30 rows
. . exported "HR"."APPLICANTS_1" 11.5 KB 45 rows
. . exported "HR"."APPROLES" 6.078 KB 7 rows
. . exported "HR"."APPS" 5.632 KB 3 rows
. . exported "HR"."COST_CENTERS" 6.328 KB 29 rows
. . exported "HR"."COST_CENTER_ASSIGNMENTS" 6.312 KB 20 rows
. . exported "HR"."COUNTRIES" 6.093 KB 25 rows
. . exported "HR"."DATEMATH" 6.984 KB 1 rows
. . exported "HR"."DEPARTMENTS" 7.101 KB 28 rows
. . exported "HR"."DIVISIONS" 5.335 KB 3 rows
. . exported "HR"."EMPLOYEES" 16.67 KB 118 rows
. . exported "HR"."EMPLOYEE_HIERARCHY" 6.414 KB 5 rows
. . exported "HR"."JOBS" 7.296 KB 27 rows
. . exported "HR"."JOB_HISTORY" 6.765 KB 15 rows
. . exported "HR"."LOCATIONS" 7.710 KB 23 rows
. . exported "HR"."MY_USER_ROLES" 6.453 KB 10 rows
. . exported "HR"."PAYROLL_CHECKS" 7.609 KB 6 rows
. . exported "HR"."PAYROLL_HOURLY" 6.039 KB 3 rows
. . exported "HR"."PAYROLL_SALARIED" 5.687 KB 3 rows
. . exported "HR"."PAYROLL_TRANSACTIONS" 7.195 KB 6 rows
. . exported "HR"."REGIONS" 5.296 KB 4 rows
. . exported "HR"."TIMECLOCK_PUNCHES" 5.718 KB 6 rows
. . exported "HR"."USERS" 5.968 KB 3 rows
. . exported "HR"."USER_ROLES" 6.453 KB 10 rows
. . exported "HR"."IOT_TAB" 0 KB 0 rows
. . exported "HR"."NO_UPDATES" 0 KB 0 rows
. . exported "HR"."PLAN_TABLE" 0 KB 0 rows
Master table "SYSTEM"."HR_OE_SCHEMA" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.HR_OE_SCHEMA is:
C:\ORACLE\EXPORT_DIR\HR_OE_SCHEMAS.DMP
Job "SYSTEM"."HR_OE_SCHEMA" successfully completed at 17:53

-----
-- Listing 1.4: A simple DataPump Import. Note that only database objects from
-- the HR schema will be used to populate a new schema (HR_OLTP),
-- and all objects other than tables and their dependent objects
-- will be excluded from the import
-----
>> SQL to create new HR_OLTP schema:
DROP USER hr_oltp CASCADE;
CREATE USER hr_oltp
  IDENTIFIED BY misdev
  DEFAULT TABLESPACE example
  TEMPORARY TABLESPACE temp02
  QUOTA 50M ON example
  PROFILE default;
GRANT CONNECT TO hr_oltp;
GRANT RESOURCE TO hr_oltp;

>> DataPump Import command issued:
SET ORACLE_SID=dbaref
IMPDP system/****** PARFILE=export_dir:dpi_1.impctl

>> DataPump Import parameters file (dpi_1.impctl):
DIRECTORY=export_dir
JOB_NAME=hr_oltp_import
DUMPFILE=export_dir:hr_oe_schemas.dmp
LOGFILE=export_dir:hr_oltp_import.implog
REMAP_SCHEMA=hr:hr_oltp
STATUS=5

>> Results of Import operation:
Import: Release 10.1.0.2.0 - Production on Thursday, 10 March, 2005 18:02
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."HR_OLTP_IMPORT" successfully loaded/unloaded
Starting "SYSTEM"."HR_OLTP_IMPORT": system/******** parfile=c:\rmancmd\dpi_1.impctl
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR_OLTP"."LOBSEP" 9.195 KB 1 rows
. . imported "HR_OLTP"."BIGATFIRST" 277.7 KB 10000 rows
. . imported "HR_OLTP"."APPLICANTS" 10.46 KB 30 rows
. . imported "HR_OLTP"."APPLICANTS_1" 11.5 KB 45 rows
. . imported "HR_OLTP"."APPROLES" 6.078 KB 7 rows
. . imported "HR_OLTP"."APPS" 5.632 KB 3 rows
. . imported "HR_OLTP"."COST_CENTERS" 6.328 KB 29 rows
. . imported "HR_OLTP"."COST_CENTER_ASSIGNMENTS" 6.312 KB 20 rows
. . imported "HR_OLTP"."COUNTRIES" 6.093 KB 25 rows
. . imported "HR_OLTP"."DATEMATH" 6.984 KB 1 rows
. . imported "HR_OLTP"."DEPARTMENTS" 7.101 KB 28 rows
. . imported "HR_OLTP"."DIVISIONS" 5.335 KB 3 rows
. . imported "HR_OLTP"."EMPLOYEES" 16.67 KB 118 rows
. . imported "HR_OLTP"."EMPLOYEE_HIERARCHY" 6.414 KB 5 rows
. . imported "HR_OLTP"."JOBS" 7.296 KB 27 rows
. . imported "HR_OLTP"."JOB_HISTORY" 6.765 KB 15 rows
. . imported "HR_OLTP"."LOCATIONS" 7.710 KB 23 rows
. . imported "HR_OLTP"."MY_USER_ROLES" 6.453 KB 10 rows
. . imported "HR_OLTP"."PAYROLL_CHECKS" 7.609 KB 6 rows
. . imported "HR_OLTP"."PAYROLL_HOURLY" 6.039 KB 3 rows
. . imported "HR_OLTP"."PAYROLL_SALARIED" 5.687 KB 3 rows
. . imported "HR_OLTP"."PAYROLL_TRANSACTIONS" 7.195 KB 6 rows
. . imported "HR_OLTP"."REGIONS" 5.296 KB 4 rows
. . imported "HR_OLTP"."TIMECLOCK_PUNCHES" 5.718 KB 6 rows
. . imported "HR_OLTP"."USERS" 5.968 KB 3 rows
. . imported "HR_OLTP"."USER_ROLES" 6.453 KB 10 rows
. . imported "HR_OLTP"."IOT_TAB" 0 KB 0 rows
. . imported "HR_OLTP"."NO_UPDATES" 0 KB 0 rows
. . imported "HR_OLTP"."PLAN_TABLE" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
ORA-39082: Object type TRIGGER:"HR_OLTP"."BIN$55fGDdubQL6YVYB0dGS/nw==$1" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."BIN$55fGDdubQL6YVYB0dGS/nw==$1" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."SECURE_EMPLOYEES" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."SECURE_EMPLOYEES" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."TR_BRIU_APPLICANTS" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."TR_BRIU_APPLICANTS" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."UPDATE_JOB_HISTORY" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."UPDATE_JOB_HISTORY" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/INDEX/SE_TBL_FBM_INDEX_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/SE_TBL_FBM_IND_STATS/INDEX_STATISTICS
Job "SYSTEM"."HR_OLTP_IMPORT" completed with 8 error(s) at 18:02

-----
-- Listing 1.5: Querying status of DataPump operations
-----
TTITLE 'Currently Active DataPump Operations'
COL owner_name        FORMAT A06  HEADING 'Owner'
COL job_name          FORMAT A20  HEADING 'JobName'
COL operation         FORMAT A12  HEADING 'Operation'
COL job_mode          FORMAT A12  HEADING 'JobMode'
COL state             FORMAT A12  HEADING 'State'
COL degree            FORMAT 9999 HEADING 'Degr'
COL attached_sessions FORMAT 9999 HEADING 'Sess'

SELECT owner_name
      ,job_name
      ,operation
      ,job_mode
      ,state
      ,degree
      ,attached_sessions
  FROM dba_datapump_jobs
;

TTITLE 'Currently Active DataPump Sessions'
COL owner_name FORMAT A06 HEADING 'Owner'
COL job_name   FORMAT A06 HEADING 'Job'
COL osuser     FORMAT A12 HEADING 'UserID'

SELECT DPS.owner_name
      ,DPS.job_name
      ,S.osuser
  FROM dba_datapump_sessions DPS
      ,v$session S
 WHERE S.saddr = DPS.saddr
;


    Using Original Export Parameters with Data Pump

    posted Aug 5, 2011, 5:02 AM by Sachchida Ojha

    Data Pump Export accepts original Export parameters when they map to a corresponding Data Pump parameter. Table 4-1 describes how Data Pump Export interprets original Export parameters. Parameters that have the same name and functionality in both original Export and Data Pump Export are not included in this table.

    Table 4-1 How Data Pump Export Handles Original Export Parameters

    Original Export Parameter | Action Taken by Data Pump Export
    BUFFER
    

    This parameter is ignored.

    COMPRESS
    

    This parameter is ignored. In original Export, the COMPRESS parameter affected how the initial extent was managed. Setting COMPRESS=n caused original Export to use current storage parameters for the initial and next extent.

    The Data Pump Export COMPRESSION parameter is used to specify how data is compressed in the dump file, and is not related to the original Export COMPRESS parameter.

    CONSISTENT
    

    Data Pump Export determines the current time and uses FLASHBACK_TIME.

    CONSTRAINTS

    If original Export used CONSTRAINTS=n, then Data Pump Export uses EXCLUDE=CONSTRAINTS.

    The default behavior is to include constraints as part of the export.

    DIRECT 
    

    This parameter is ignored. Data Pump Export automatically chooses the best export method.

    FEEDBACK
    

    The Data Pump Export STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the export job, as well as the rows being processed.

    In original Export, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Export, the status is given every so many seconds, as specified by STATUS.

    FILE
    

    Data Pump Export attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access.

    See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Export FILE parameter.

    GRANTS
    

    If original Export used GRANTS=n, then Data Pump Export uses EXCLUDE=GRANT.

    If original Export used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

    INDEXES
    

    If original Export used INDEXES=n, then Data Pump Export uses the EXCLUDE=INDEX parameter.

    If original Export used INDEXES=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

    LOG

    Data Pump Export attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.

    See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Export LOG parameter.

    The contents of the log file will be those of a Data Pump Export operation. See "Log Files" for information about log file location and content.

    OBJECT_CONSISTENT
    

    This parameter is ignored because Data Pump Export processing ensures that each object is in a consistent state when being exported.

    OWNER
    

    The Data Pump SCHEMAS parameter is used.

    RECORDLENGTH
    

    This parameter is ignored because Data Pump Export automatically takes care of buffer sizing.

    RESUMABLE
    

    This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

    RESUMABLE_NAME
    

    This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

    RESUMABLE_TIMEOUT
    

    This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

    ROWS
    

    If original Export used ROWS=y, then Data Pump Export uses the CONTENT=ALL parameter.

    If original Export used ROWS=n, then Data Pump Export uses the CONTENT=METADATA_ONLY parameter.

    STATISTICS
    

    This parameter is ignored because statistics are always saved for tables as part of a Data Pump export operation.

    TABLESPACES
    

    If original Export also specified TRANSPORT_TABLESPACE=n, then Data Pump Export ignores the TABLESPACES parameter.

    If original Export also specified TRANSPORT_TABLESPACE=y, then Data Pump Export takes the names listed for the TABLESPACES parameter and uses them on the Data Pump Export TRANSPORT_TABLESPACES parameter.

    TRANSPORT_TABLESPACE

    If original Export used TRANSPORT_TABLESPACE=n (the default), then Data Pump Export uses the TABLESPACES parameter.

    If original Export used TRANSPORT_TABLESPACE=y, then Data Pump Export uses the TRANSPORT_TABLESPACES parameter and only the metadata is exported.

    TRIGGERS
    

    If original Export used TRIGGERS=n, then Data Pump Export uses the EXCLUDE=TRIGGER parameter.

    If original Export used TRIGGERS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

    TTS_FULL_CHECK
    

    If original Export used TTS_FULL_CHECK=y, then Data Pump Export uses the TRANSPORT_FULL_CHECK parameter.

    If original Export used TTS_FULL_CHECK=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

    VOLSIZE
    

    When the original Export VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Export dump file format does not support tape devices. Therefore, this operation terminates with an error.


    Using Original Import Parameters with Data Pump

    Data Pump Import accepts original Import parameters when they map to a corresponding Data Pump parameter. Table 4-2 describes how Data Pump Import interprets original Import parameters. Parameters that have the same name and functionality in both original Import and Data Pump Import are not included in this table.

    Table 4-2 How Data Pump Import Handles Original Import Parameters

    Original Import Parameter | Action Taken by Data Pump Import
    BUFFER
    

    This parameter is ignored.

    CHARSET
    

    This parameter was desupported several releases ago and should no longer be used. It will cause the Data Pump Import operation to abort.

    COMMIT
    

    This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed.

    COMPILE
    

    This parameter is ignored. Data Pump Import compiles procedures after they are created. A recompile can be executed if necessary for dependency reasons.

    CONSTRAINTS
    

    If original Import used CONSTRAINTS=n, then Data Pump Import uses the EXCLUDE=CONSTRAINT parameter.

    If original Import used CONSTRAINTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

    DATAFILES
    

    The Data Pump Import TRANSPORT_DATAFILES parameter is used.

    DESTROY
    

    If original Import used DESTROY=y, then Data Pump Import uses the REUSE_DATAFILES=y parameter.

    If original Import used DESTROY=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

    FEEDBACK
    

    The Data Pump Import STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the import job, as well as the rows being processed.

    In original Import, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Import, the status is given every so many seconds, as specified by STATUS.

    FILE

    Data Pump Import attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access.

    See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Import FILE parameter.

    FILESIZE
    

    This parameter is ignored because the information is already contained in the Data Pump dump file set.

    FROMUSER

    The Data Pump Import SCHEMAS parameter is used. If FROMUSER was used without TOUSER also being used, then import schemas that have the IMP_FULL_DATABASE role cause Data Pump Import to attempt to create the schema and then import that schema's objects. Import schemas that do not have the IMP_FULL_DATABASE role can only import their own schema from the dump file set.

    GRANTS
    

    If original Import used GRANTS=n, then Data Pump Import uses the EXCLUDE=OBJECT_GRANT parameter.

    If original Import used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

    IGNORE
    

    If original Import used IGNORE=y, then Data Pump Import uses the TABLE_EXISTS_ACTION=APPEND parameter. This causes the processing of table data to continue.

    If original Import used IGNORE=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

    INDEXES
    

    If original Import used INDEXES=n, then Data Pump Import uses the EXCLUDE=INDEX parameter.

    If original Import used INDEXES=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

    INDEXFILE

    The Data Pump Import SQLFILE={directory-object:}filename and INCLUDE=INDEX parameters are used.

    The same method and attempts made when looking for a directory object described for the FILE parameter also take place for the INDEXFILE parameter.

    If no directory object was specified on the original Import, then Data Pump Import uses the directory object specified with the DIRECTORY parameter.

    LOG

    Data Pump Import attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.

    See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Import LOG parameter.

    The contents of the log file will be those of a Data Pump Import operation. See "Log Files" for information about log file location and content.

    RECORDLENGTH
    

    This parameter is ignored because Data Pump handles issues about record length internally.

    RESUMABLE
    

    This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

    RESUMABLE_NAME
    

    This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

    RESUMABLE_TIMEOUT
    

    This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

    ROWS=N
    

    If original Import used ROWS=n, then Data Pump Import uses the CONTENT=METADATA_ONLY parameter.

    If original Import used ROWS=y, then Data Pump Import uses the CONTENT=ALL parameter.

    SHOW
    

    If SHOW=y is specified, then the Data Pump Import SQLFILE=[directory_object:]file_name parameter is used to write the DDL for the import operation to a file. Only the DDL (not the entire contents of the dump file) is written to the specified file. (Note that the output is not shown on the screen as it was in original Import.)

    The name of the file will be the file name specified on the DUMPFILE parameter (or on the original Import FILE parameter, which is remapped to DUMPFILE). If multiple dump file names are listed, then the first file name in the list is used. The file will be located in the directory object location specified on the DIRECTORY parameter or the directory object included on the DUMPFILE parameter. (Directory objects specified on the DUMPFILE parameter take precedence.)

    STATISTICS
    

    This parameter is ignored because statistics are always saved for tables as part of a Data Pump Import operation.

    STREAMS_CONFIGURATION
    

    This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

    STREAMS_INSTANTIATION
    

    This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified

    TABLESPACES

    If original Import also specified TRANSPORT_TABLESPACE=n (the default), then Data Pump Import ignores the TABLESPACES parameter.

    If original Import also specified TRANSPORT_TABLESPACE=y, then Data Pump Import takes the names supplied for this TABLESPACES parameter and applies them to the Data Pump Import TRANSPORT_TABLESPACES parameter.

    TOID_NOVALIDATE
    

    This parameter is ignored. OIDs are no longer used for type validation.

    TOUSER

    The Data Pump Import REMAP_SCHEMA parameter is used. There may be more objects imported than with original Import. Also, Data Pump Import may create the target schema if it does not already exist.

    The FROMUSER parameter must also have been specified in original Import or the operation will fail.

    TRANSPORT_TABLESPACE

    The TRANSPORT_TABLESPACE parameter is ignored, but if you also specified the DATAFILES parameter, then the import job continues to load the metadata. If the DATAFILES parameter is not specified, then an ORA-39002:invalid operation error message is returned.

    TTS_OWNERS 
    

    This parameter is ignored because this information is automatically stored in the Data Pump dump file set.

    VOLSIZE
    

    When the original Import VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Import dump file format does not support tape devices. Therefore, this operation terminates with an error.


    Management of File Locations in Data Pump Legacy Mode

    Original Export and Import and Data Pump Export and Import differ on where dump files and log files can be written to and read from because the original version is client-based and Data Pump is server-based.

    Original Export and Import use the FILE and LOG parameters to specify dump file and log file names, respectively. These file names always refer to files local to the client system and they may also contain a path specification.

    Data Pump Export and Import use the DUMPFILE and LOGFILE parameters to specify dump file and log file names, respectively. These file names always refer to files local to the server system and cannot contain any path information. Instead, a directory object is used to indirectly specify path information. The path value defined by the directory object must be accessible to the server. The directory object is specified for a Data Pump job through the DIRECTORY parameter. It is also possible to prepend a directory object to the file names passed to the DUMPFILE and LOGFILE parameters. For privileged users, Data Pump supports the use of a default directory object if one is not specified on the command line. This default directory object, DATA_PUMP_DIR, is set up at installation time.
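
    For example, a directory object can be prepended to an individual file name, overriding the DIRECTORY parameter for just that file (dpump_dir and dpump_dir2 are hypothetical directory objects):

    expdp hr SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=dpump_dir2:hr%U.dmp LOGFILE=hr_exp.log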

    If Data Pump legacy mode is enabled and the original Export FILE=filespec parameter and/or LOG=filespec parameter are present on the command line, then the following rules of precedence are used to determine a file's location:

    Note:

    If the FILE parameter and LOG parameter are both present on the command line, then the rules of precedence are applied separately to each parameter.

    Also, when a mix of original Export/Import and Data Pump Export/Import parameters are used, separate rules apply to them. For example, suppose you have the following command:

    expdp system FILE=/user/disk/foo.dmp LOGFILE=foo.log DIRECTORY=dpump_dir
    

    The Data Pump legacy mode file management rules, as explained in this section, would apply to the FILE parameter. The normal (that is, non-legacy mode) Data Pump file management rules, as described in "Default Locations for Dump, Log, and SQL Files", would apply to the LOGFILE parameter.

    1. If a path location is specified as part of the file specification, then Data Pump attempts to look for a directory object accessible to the schema executing the export job whose path location matches the path location of the file specification. If such a directory object cannot be found, then an error is returned. For example, assume that a server-based directory object named USER_DUMP_FILES has been defined with a path value of '/disk1/user1/dumpfiles/' and that read and write access to this directory object has been granted to the hr schema. The following command causes Data Pump to look for a server-based directory object whose path value contains '/disk1/user1/dumpfiles/' and to which the hr schema has been granted read and write access:

      expdp hr FILE=/disk1/user1/dumpfiles/hrdata.dmp
      

      In this case, Data Pump uses the directory object USER_DUMP_FILES. The path value, in this example '/disk1/user1/dumpfiles/', must refer to a path on the server system that is accessible to the Oracle Database.

      If a path location is specified as part of the file specification, then any directory object provided using the DIRECTORY parameter is ignored. For example, if the following command is issued, then Data Pump does not use the DPUMP_DIR directory object for the file parameter, but instead looks for a server-based directory object whose path value contains '/disk1/user1/dumpfiles/' and to which the hr schema has been granted read and write access:

      expdp hr FILE=/disk1/user1/dumpfiles/hrdata.dmp DIRECTORY=dpump_dir
      
    2. If no path location is specified as part of the file specification, then the directory object named by the DIRECTORY parameter is used. For example, if the following command is issued, then Data Pump applies the path location defined for the DPUMP_DIR directory object to the hrdata.dmp file:

      expdp hr FILE=hrdata.dmp DIRECTORY=dpump_dir
      
    3. If no path location is specified as part of the file specification and no directory object is named by the DIRECTORY parameter, then Data Pump does the following, in the order shown:

      1. Data Pump looks for the existence of a directory object of the form DATA_PUMP_DIR_schema_name, where schema_name is the schema that is executing the Data Pump job. For example, the following command would cause Data Pump to look for the existence of a server-based directory object named DATA_PUMP_DIR_HR:

        expdp hr FILE=hrdata.dmp
        

        The hr schema also must have been granted read and write access to this directory object. If such a directory object does not exist, then the process moves to step b.

      2. Data Pump looks for the existence of the client-based environment variable DATA_PUMP_DIR. For instance, assume that a server-based directory object named DUMP_FILES1 has been defined and the hr schema has been granted read and write access to it. Then on the client system, the environment variable DATA_PUMP_DIR can be set to point to DUMP_FILES1 as follows:

        setenv DATA_PUMP_DIR DUMP_FILES1
        expdp hr FILE=hrdata.dmp
        

        Data Pump then uses the server-based directory object DUMP_FILES1 for the hrdata.dmp file.

        If a client-based environment variable DATA_PUMP_DIR does not exist, then the process moves to step c.

      3. If the schema that is executing the Data Pump job has DBA privileges, then the default Data Pump directory object, DATA_PUMP_DIR, is used. This default directory object is established at installation time. For example, the following command causes Data Pump to attempt to use the default DATA_PUMP_DIR directory object, assuming that system has DBA privileges:

        expdp system FILE=hrdata.dmp
        

    See Also:

    "Default Locations for Dump, Log, and SQL Files" for information about Data Pump file management rules of precedence under normal Data Pump conditions (that is, non-legacy mode)

    Adjusting Existing Scripts for Data Pump Log Files and Errors

    Data Pump legacy mode requires that you review and update your existing scripts written for original Export and Import because of differences in file format and error reporting.

    Log Files

    Data Pump Export and Import do not generate log files in the same format as those created by original Export and Import. Any scripts you have that parse the output of original Export and Import must be updated to handle the log file format used by Data Pump Export and Import. For example, the message Successfully Terminated does not appear in Data Pump log files.

    Error Cases

    Data Pump Export and Import may not produce the same errors as those generated by original Export and Import. For example, if a parameter that is ignored by Data Pump Export would have had an out-of-range value in original Export, then an informational message is written to the log file stating that the parameter is being ignored. No value checking is performed, therefore no error message is generated.

    Exit Status

    Data Pump Export and Import have enhanced exit status values to allow scripts to better determine the success or failure of export and import jobs. Any scripts that look at the exit status should be reviewed and updated, if necessary.
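
    A minimal shell sketch of such a check, assuming a POSIX shell and treating any nonzero exit status as something to investigate (the documented exit status values should be confirmed in the utilities guide for your release):

    expdp system/manager DIRECTORY=dpdata SCHEMAS=hr DUMPFILE=hr.dmp LOGFILE=hr.log
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "Data Pump export ended with exit status $rc - review hr.log"
    fi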

    Oracle Database 11g Release 2 (11.2.0.1) New Features

    posted Aug 5, 2011, 4:59 AM by Sachchida Ojha

    The following is a list of new features or enhancements provided with Oracle Database 11g Release 2 (11.2.0.1):

    New Oracle Grid Infrastructure Installation Option

    Oracle Database 11g Release 2 introduces the Oracle Grid Infrastructure installation. For single-instance databases, Oracle Grid Infrastructure includes Oracle Automatic Storage Management (Oracle ASM), the listener, and Oracle Restart. Oracle Restart is a new feature that monitors, manages, and automatically restarts the components of the Oracle Database environment (the Oracle Database instance, the Oracle ASM instance, and listeners) if they fail. In a clustered environment, Oracle Grid Infrastructure includes Oracle Clusterware, Oracle ASM, and the listener.

    To use Oracle Grid Infrastructure for a standalone server, you must install the Oracle software from the Oracle Grid Infrastructure media before you install the database.
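
    With Oracle Restart in place, registered components are normally managed through srvctl; a sketch of registering and starting a single-instance database (the database name and Oracle home path are examples only):

    srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1
    srvctl start database -d orcl
    srvctl status database -d orcl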

    New Desktop and Server Class Options

    Oracle Database 11g Release 2 introduces a new option that enables you to specify the type of system on which the database is installed. If you are installing on a laptop or a desktop, then select the Desktop Class option; otherwise, select the Server Class option to install on a server. These options are available on the System Class screen.

    There is no difference in the software that is installed with either option, but the Desktop Class option installs a single-instance database without the advanced configuration options.

    See Also:

    "Interactive Installation Types" for more information about the desktop and server class options

    Daylight Saving Time Upgrade of Timestamp with Timezone Data Type

    When time zone version files are updated due to daylight saving time changes, TIMESTAMP WITH TIMEZONE (TSTZ) data could become stale. In previous releases, database administrators ran the SQL script utltzuv2.sql to detect TSTZ data affected by the time zone version changes and then had to perform extensive manual procedures to update the TSTZ data.

    With this release, TSTZ data is updated transparently with minimal manual procedures using newly provided DBMS_DST PL/SQL packages. In addition, there is no longer a need for clients to patch their time zone data files.
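As a rough sketch of the prepare phase only (the target time zone file version 14 is a placeholder, and the exact options and follow-on upgrade steps are described in the DBMS_DST documentation):

SQL> EXEC DBMS_DST.BEGIN_PREPARE(14);
SQL> EXEC DBMS_DST.FIND_AFFECTED_TABLES;
SQL> SELECT * FROM sys.dst$affected_tables;
SQL> EXEC DBMS_DST.END_PREPARE;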


    SYSASM Privilege

    Starting with Oracle Database 11g Release 2 (11.2), Oracle ASM administration must be done with the SYSASM privilege. The SYSASM privilege also can be granted using password authentication on the Oracle ASM instance.

You can designate OPERATOR privileges (a subset of the SYSASM privileges, including starting and stopping Oracle ASM) to members of the OSOPER for Oracle ASM operating system group.

    Using the SYSASM privilege for ASM administration creates a clear division of responsibility between ASM administration and database administration. It also provides the optional capability to prevent different databases using the same storage from accidentally overwriting each other's files.
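For example (the user name asm_admin is only a placeholder), the SYSASM privilege can be granted and used on the Oracle ASM instance as follows:

SQL> GRANT SYSASM TO asm_admin;
SQL> CONNECT asm_admin AS SYSASM
-- or, using operating system authentication through the OSASM group:
SQL> CONNECT / AS SYSASM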

    See Also:

    Oracle Automatic Storage Management Administrator's Guide for more information about the SYSASM privilege, ASMSNMP account, and OSASM operating system group

    Fixup Scripts and Prerequisite Checks

Starting with Oracle Database 11g Release 2 (11.2), Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met and creates scripts, called fixup scripts, to resolve many incomplete system configuration requirements. If OUI detects an incomplete task, then click the Fix & Check Again button to generate the fixup script.

    For Oracle Clusterware, you also can have Cluster Verification Utility (CVU) generate fixup scripts before the installation.

    The fixup script is generated during the installation. You are prompted to run the script as root in a separate terminal session. When you run the script, it sets some of the system parameters to Oracle-recommended values, if necessary, and completes other operating system configuration tasks.

    Database Smart Flash Cache

The Database Smart Flash Cache feature is a transparent extension of the database buffer cache that uses solid state device (SSD) technology. The SSD acts as a Level 2 cache to the (Level 1) Oracle system global area (SGA).

    SSD storage is faster than disk storage, and cheaper than RAM. Database Smart Flash Cache with SSD storage gives you the ability to greatly improve the performance of your Oracle databases by reducing the amount of disk I/O at a lower cost than adding an equivalent amount of RAM.

    Database Smart Flash Cache is supported on Oracle Solaris and Oracle Linux only.
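Configuration is done through the DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE initialization parameters; a minimal sketch, assuming a flash device at the placeholder path /dev/flash_disk, might look like this:

SQL> ALTER SYSTEM SET DB_FLASH_CACHE_FILE = '/dev/flash_disk/oracle_flash_cache.dat' SCOPE = SPFILE;
SQL> ALTER SYSTEM SET DB_FLASH_CACHE_SIZE = 32G SCOPE = SPFILE;
-- Restart the instance for the flash cache to take effect.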


    New Tool to Configure Custom Installation Options

    Oracle Universal Installer no longer provides the custom installation option of individual components. Use the chopt tool, a command-line utility that is located in the ORACLE_HOME\bin directory, to configure the database options.
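As a hedged example, assuming the database instances running from the Oracle home have first been shut down, an option such as Real Application Testing (rat) could be enabled roughly as follows:

$ cd $ORACLE_HOME/bin
$ ./chopt enable rat
$ # Restart the database instances after the tool completes.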

    Deinstallation Tool

    Use the new deinstallation tool (deinstall) available as an Oracle Technology Network download (before installation) and in the Oracle home directory (after installation) to remove Oracle Database software.

    See "Removing Oracle Software Using the Deinstallation Tool"

    Intelligent Data Placement

    The Intelligent Data Placement feature enables you to specify disk regions on Oracle ASM disks to ensure that frequently accessed data is placed on the outermost (hot) tracks which provide higher performance.
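A minimal sketch, assuming a disk group named DATA whose COMPATIBLE.ASM and COMPATIBLE.RDBMS attributes are 11.2 or higher (the file name below is a placeholder), might mark a template and a file as hot like this:

SQL> ALTER DISKGROUP data ADD TEMPLATE hot_datafiles ATTRIBUTES (HOT MIRRORHOT);
SQL> ALTER DISKGROUP data MODIFY FILE '+DATA/orcl/datafile/users.259.679156903' ATTRIBUTE (HOT MIRRORHOT);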

    See Also:

    Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ASM Intelligent Data Placement

    Oracle Automatic Storage Management Cluster File System (Oracle ACFS)

Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a new multiplatform, scalable file system and storage management technology that extends Oracle ASM in both single-instance and cluster configurations. Additionally, Oracle ACFS provides snapshot functionality for a point-in-time copy of an Oracle ACFS file system.

    The software required for Oracle Automatic Storage Management Cluster File System is installed with the Oracle Grid Infrastructure installation.

    See Also:

    Oracle Automatic Storage Management Administrator's Guide for more information about Oracle ACFS

    Oracle Data Pump Export and Oracle Data Pump Import

    Data Pump provides a legacy mode in which you can use original export and import parameters when performing Oracle Data Pump export and import operations.
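For example, running expdp with original Export parameters (a sketch using placeholder credentials, schema, and file names) causes Data Pump to enter legacy mode and map the old parameters, such as FILE, LOG, and OWNER, to their Data Pump equivalents:

expdp system/password file=scott.dmp log=scott.log owner=scott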

    See Also:

    Oracle Database Utilities for more information about Data Pump Legacy Mode

    Use Oracle Restart to Automatically Restart Your Database

    Oracle Restart is a new feature included in this release to enhance the availability of Oracle databases in a single-instance environment. If you install Oracle Restart, and there is a temporary failure of any part of the Oracle Database software stack, including the database, listener, and Oracle ASM instance, Oracle Restart automatically restarts the failed component. In addition, Oracle Restart starts all these components when the database host computer is restarted. The components are started in the proper order, taking into consideration the dependencies among components.

    See Also:

    Chapter 4, "Configuring Automatic Restart of an Oracle Database" in the Oracle Database Administrator's Guide for more information about Oracle Restart

    New Method of Installing Oracle Automatic Storage Management

In past releases, Oracle ASM was installed as part of the Oracle Database installation. With Oracle Database 11g Release 2 (11.2), Oracle ASM is installed when you install the Oracle Grid Infrastructure components; it shares an Oracle home with Oracle Clusterware when installed in a cluster, such as with Oracle Real Application Clusters (Oracle RAC), or with Oracle Restart on a single-instance database.

    If you want to upgrade an existing Oracle ASM, then you must upgrade Oracle ASM by running an Oracle Grid Infrastructure upgrade. If you do not have Oracle ASM installed and you want to use Oracle ASM as your storage option, then you must complete an Oracle Grid Infrastructure installation before you start your Oracle Database installation.

    See Also:

    "Oracle Grid Infrastructure" for more information about installing the Oracle Grid Infrastructure software

    SRVCTL Support for a Single-Instance Database in a Cluster

    SRVCTL was enhanced to support single-instance databases with Oracle Restart on standalone servers and on clusters with Oracle Clusterware. SRVCTL is a command-line interface used to manage Oracle processes (database instance, listener, Oracle ASM instance) when using Oracle Restart. With SRVCTL, you can manage the Oracle Restart configuration, see the status of processes managed by Oracle Restart, and start or stop processes such as the Oracle Database.
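For example, with a single-instance database registered with Oracle Restart under the placeholder name orcl:

$ srvctl status database -d orcl
$ srvctl stop database -d orcl
$ srvctl start database -d orcl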

    See Also:

    Chapter 4, "Configuring Automatic Restart of an Oracle Database" in the Oracle Database Administrator's Guide for more information about SRVCTL commands

    Deprecated in Oracle Database 11g Release 2 (11.2)

The following features are no longer supported or no longer available with Oracle Database 11g Release 2:

    • Installing data files directly on raw devices is no longer available during installation with Oracle Universal Installer or Database Configuration Assistant. You must use a file system or use Oracle ASM.

    • Oracle Ultra Search

    Data Pump Enhancements in Oracle Database 11g Release 1

    posted Aug 5, 2011, 4:56 AM by Sachchida Ojha

Oracle Data Pump was introduced in Oracle 10g. This article provides an overview of the main Data Pump enhancements in Oracle Database 11g Release 1.

    COMPRESSION

    The COMPRESSION parameter allows you to decide what, if anything, you wish to compress in your export. The syntax is shown below.
    COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
    The available options are:
    • ALL: Both metadata and data are compressed.
    • DATA_ONLY: Only data is compressed.
    • METADATA_ONLY: Only metadata is compressed. This is the default setting.
    • NONE: Nothing is compressed.
    Here is an example of the COMPRESSION parameter being used.
    expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      compression=all
    The COMPATIBLE initialization parameter should be set to "11.0.0" or higher to use these options, except for the METADATA_ONLY option, which is available with a COMPATIBLE setting of "10.2".

    Encryption Parameters

Data Pump encryption is an Enterprise Edition feature, so the parameters described below are only relevant for Enterprise Edition installations. In addition, the COMPATIBLE initialization parameter must be set to "11.0.0" or higher to use these features.

    ENCRYPTION and ENCRYPTION_PASSWORD

    The use of encryption is controlled by a combination of the ENCRYPTION or ENCRYPTION_PASSWORD parameters. The syntax for the ENCRYPTION parameter is shown below.
    ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
    
    The available options are:
    • ALL: Both metadata and data are encrypted.
    • DATA_ONLY: Only data is encrypted.
    • ENCRYPTED_COLUMNS_ONLY: Only encrypted columns are written to the dump file in an encrypted format.
    • METADATA_ONLY: Only metadata is encrypted.
    • NONE: Nothing is encrypted.
If neither the ENCRYPTION nor the ENCRYPTION_PASSWORD parameter is set, the required level of encryption is assumed to be NONE. If only the ENCRYPTION_PASSWORD parameter is specified, the required level of encryption is assumed to be ALL. Here is an example of these parameters being used.
    expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      encryption=all encryption_password=password

    ENCRYPTION_ALGORITHM

    The ENCRYPTION_ALGORITHM parameter specifies the encryption algorithm to be used during the export, with the default being "AES128". The syntax is shown below.
    ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }
    The ENCRYPTION_ALGORITHM parameter must be used in conjunction with the ENCRYPTION or ENCRYPTION_PASSWORD parameters, as shown below.
    expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      encryption=all encryption_password=password encryption_algorithm=AES256

    ENCRYPTION_MODE

    The ENCRYPTION_MODE parameter specifies the type of security used during export and import operations. The syntax is shown below.
    ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }
    The allowable values and their default settings are explained below:
• DUAL: This mode creates a dump file that can be imported using an Oracle Encryption Wallet or the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there is an open wallet.
• PASSWORD: This mode creates a dump file that can only be imported using the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there isn't an open wallet.
• TRANSPARENT: This mode creates an encrypted dump file using an open Oracle Encryption Wallet. If the ENCRYPTION_PASSWORD is specified while using this mode, an error is produced. This is the default setting if only the ENCRYPTION parameter is set.
Wallet setup must be completed before using the DUAL or TRANSPARENT modes.

    The ENCRYPTION_MODE requires either the ENCRYPTION or ENCRYPTION_PASSWORD parameter to be specified.
    expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      encryption=all encryption_password=password encryption_mode=password

    TRANSPORTABLE

The TRANSPORTABLE parameter is similar to the TRANSPORT_TABLESPACES parameter available previously, in that it only exports and imports metadata about a table, relying on you to manually transfer the relevant tablespace datafiles. The export operation lists the tablespaces that must be transferred. The syntax is shown below.
    TRANSPORTABLE = {ALWAYS | NEVER}
    The value ALWAYS turns on the transportable mode, while the default value of NEVER indicates this is a regular export/import.

    The following restrictions apply during exports using the TRANSPORTABLE parameter:
    • This parameter is only applicable during table-level exports.
    • The user performing the operation must have the EXP_FULL_DATABASE privilege.
    • Tablespaces containing the source objects must be read-only.
    • The COMPATIBLE initialization parameter must be set to 11.0.0 or higher.
    • The default tablespace of the user performing the export must not be the same as any of the tablespaces being transported.
Some extra restrictions apply during import operations:
    • The NETWORK_LINK parameter must be specified during the import operation. This parameter is set to a valid database link to the source schema.
    • The schema performing the import must have both EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
    • The TRANSPORT_DATAFILES parameter is used to identify the datafiles holding the table data.
    Examples of the export and import operations are shown below.
    expdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      transportable=ALWAYS
    
    impdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
      transportable=ALWAYS network_link=DB11G transport_datafiles='/u01/oradata/DB11G/test01.dbf'

    PARTITION_OPTIONS

    The PARTITION_OPTIONS parameter determines how partitions will be handled during export and import operations. The syntax is shown below.
    PARTITION_OPTIONS={none | departition | merge}
    The allowable values are:
    • NONE: The partitions are created exactly as they were on the system the export was taken from.
    • DEPARTITION: Each partition and sub-partition is created as a separate table, named using a combination of the table and (sub-)partition name.
    • MERGE: Combines all partitions into a single table.
    The NONE and MERGE options are not possible if the export was done using the TRANSPORTABLE parameter with a partition or subpartition filter. If there are any grants on objects being departitioned, an error message is generated and the objects are not loaded.
    expdp test/test directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log tables=test.tab1
      partition_options=merge

    REUSE_DUMPFILES

    The REUSE_DUMPFILES parameter can be used to prevent errors being issued if the export attempts to write to a dump file that already exists.
    REUSE_DUMPFILES={Y | N}
    When set to "Y", any existing dumpfiles will be overwritten. When the default values of "N" is used, an error is issued if the dump file already exists.
    expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      reuse_dumpfiles=y

    REMAP_TABLE

    This parameter allows a table to be renamed during the import operations performed using the TRANSPORTABLE method. It can also be used to alter the base table name used during PARTITION_OPTIONS imports. The syntax is shown below.
    REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
    An example is shown below.
    impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
      remap_table=TEST.TAB1:TAB2
    Existing tables are not renamed, only tables created by the import.

    DATA_OPTIONS

    SKIP_CONSTRAINT_ERRORS

During import operations using the external table access method, setting the DATA_OPTIONS parameter to SKIP_CONSTRAINT_ERRORS allows load operations to continue through non-deferred constraint violations, with any violations logged for future reference. Without this, the default action is to roll back the whole operation. The syntax is shown below.
    DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
    An example is shown below.
    impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
      data_options=SKIP_CONSTRAINT_ERRORS
    This parameter has no impact on deferred constraints, which still cause the operation to be rolled back once a violation is detected. If the object being loaded has existing unique indexes or constraints, the APPEND hint will not be used, which may adversely affect performance.

    XML_CLOBS

    During an export, if XMLTYPE columns are currently stored as CLOBs, they will automatically be exported as uncompressed CLOBs. If on the other hand they are currently stored as any combination of object-relational, binary or CLOB formats, they will be exported in compressed format by default. Setting the DATA_OPTIONS parameter to XML_CLOBS specifies that all XMLTYPE columns should be exported as uncompressed CLOBs, regardless of the default action. The syntax is shown below.
    DATA_OPTIONS=XML_CLOBS
    An example is shown below.
    expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
      version=11.1 data_options=XML_CLOBS
    Both the export and import must use the same XML schema and the job version must be set to 11.0.0 or higher.

    REMAP_DATA

    During export and import operations, the REMAP_DATA parameter allows you to associate a remap packaged function that will accept the column value as a parameter and return a modified version of the data. The syntax is shown below.
    REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
    This can be used to mask sensitive data during export and import operations by replacing the original data with random alternatives. The mapping is done on a column-by-column basis, as shown below.
expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
  remap_data=tab1.col1:remap_pkg.remap_col1
  remap_data=tab1.col2:remap_pkg.remap_col2
    The remapping function must return the same datatype as the source column and it must not perform a commit or rollback.
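A minimal sketch of such a package, assuming COL1 is a VARCHAR2 column and reusing the hypothetical names from the example above, might look like this:

CREATE OR REPLACE PACKAGE remap_pkg AS
  FUNCTION remap_col1 (p_value IN VARCHAR2) RETURN VARCHAR2;
END remap_pkg;
/

CREATE OR REPLACE PACKAGE BODY remap_pkg AS
  FUNCTION remap_col1 (p_value IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- Mask letters and digits; the return datatype matches the source column
    -- and no COMMIT or ROLLBACK is performed.
    RETURN TRANSLATE(UPPER(p_value),
                     'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789',
                     'XXXXXXXXXXXXXXXXXXXXXXXXXX9999999999');
  END remap_col1;
END remap_pkg;
/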

    Oracle Database 11g: New Features - Master links

    posted Aug 5, 2011, 4:49 AM by Sachchida Ojha   [ updated Jan 28, 2012, 7:01 AM ]

    Oracle Database 11g: New Features For Administrators OCP Exam Articles

The articles in this section are specifically targeted at the 1Z0-050 Oracle Database 11g: New Features for Administrators OCP upgrade exam.

    Adaptive Cursor Sharing in Oracle Database 11g Release 1 - Overcome the problems associated with using bind variables against indexed columns containing skewed data.

    Automated Database Maintenance Task Management in Oracle Database 11g Release 1 - Reduce the impact of automated maintenance tasks on your system by controlling how and when they run.

    Automatic Memory Management in Oracle Database 11g Release 1 - Oracle 11g takes the next step down the road to making manual memory management a thing of the past.

    Automatic SQL Tuning in Oracle Database 11g Release 1 - Take advantage of the Automatic SQL Tuning features of Oracle 11g Release 1.

    Automatic Storage Manager (ASM) Enhancements in Oracle Database 11g Release 1 - This article provides an overview of the main Automatic Storage Manager (ASM) enhancements in Oracle Database 11g Release 1.

    AWR Baseline Enhancements in Oracle Database 11g Release 1 - Oracle 11g has extended the AWR baseline functionality and made alert threshold creation more flexible.

    Case Sensitive Passwords in Oracle Database 11g Release 1 - Understand the implications and administration of this new security feature in Oracle 11g.

    Cross-Session PL/SQL Function Result Cache in Oracle Database 11g Release 1 - Improve the performance of PL/SQL functions across the whole database instance by caching return values.

    Database Replay in Oracle Database 11g Release 1 - Capture workloads on a production system and replay them exactly as they happened on a test system.

    Database Resident Connection Pool (DRCP) in Oracle Database 11g Release 1 - Use the database resident connection pool to reduce the resource requirements of applications that do not support connection pooling.

    Data Recovery Advisor in Oracle Database 11g Release 1 - Automatically identify, diagnose and repair corruption or loss of persistent data on disk using this new feature.

    DDL With the WAIT Option (DDL_LOCK_TIMEOUT) - Avoid unnecessary "ORA-00054: resource busy" errors in 11g.

    Fine-Grained Access to Network Services in Oracle Database 11g Release 1 - Take control of the external services accessed by the UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL packages.

    Finer Grained Dependency Management in Oracle Database 11g Release 1 - See how finer grained dependency management in 11g reduces the extent of invalidations associated with schema changes.

    Flashback and LogMiner Enhancements in Oracle Database 11g Release 1 - See how Oracle 11g Release 1 makes tracking data changes and recovering from logical errors easier because of enhancements to Flashback and LogMiner.

    Invisible Indexes in Oracle Database 11g Release 1 - Take control of the indexes available to the optimizer by using invisible indexes in 11g.

    Miscellaneous New Features in Oracle Database 11g Release 1 - A collection of new features and enhancements listed in the miscellaneous section of "Oracle Database 11G: New Features for Administrators" OCP syllabus.

Online Table Redefinition Enhancements in Oracle Database 11g Release 1 - Take advantage of the reduced invalidations associated with online table redefinitions in 11g.

    Partitioning Enhancements in Oracle Database 11g Release 1 - An introduction to the partitioning enhancements in Oracle 11g Release 1.

    Query Result Cache in Oracle Database 11g Release 1 - Improve the performance of SQL across the whole database instance by caching query results.

    Read-Only Tables in Oracle Database 11g Release 1 - Protect data in static tables using the read-only table feature in Oracle 11g Release 1.

    Resource Manager Enhancements in Oracle Database 11g Release 1 - An overview of the resource manager enhancements in Oracle Database 11g Release 1.

    RMAN Enhancements in Oracle Database 11g Release 1 - An overview of all the RMAN enhancements in Oracle Database 11g Release 1.

    Scheduler Enhancements in Oracle Database 11g Release 1 - Oracle 11g adds more functionality to the Oracle scheduler.

    SecureFiles in Oracle Database 11g Release 1 - Compress, deduplicate and encrypt LOBs using the new LOB functionality in Oracle 11g.

    SQL Access Advisor in Oracle Database 11g Release 1 - Use this advisor to get suggestions for indexes, materialized views and partitioning schemes to improve system performance.

    SQL Performance Analyzer in Oracle Database 11g Release 1 - Compare the performance of the statements in an SQL tuning set before and after database changes.

    SQL Plan Management in Oracle Database 11g Release 1 - Maintain consistent SQL performance regardless of changes in optimizer version, optimizer statistics, schema changes, system settings and SQL profile creation.

    Statistics Collection Enhancements in Oracle Database 11g Release 1 - Provide the optimizer with additional selectivity information and avoid new statistics adversely affecting performance.

    Table Compression Enhancements in Oracle Database 11g Release 1 - Save space and improve performance of OLTP systems using the enhanced table compression in 11g.

    Tablespace Encryption in Oracle Database 11g Release 1 - Simplify the administration of transparent data encryption (TDE) using tablespace encryption.

    Temporary Tablespace Enhancements in Oracle Database 11g Release 1 - Take advantage of better visibility and management of temporary tablespaces usage in 11g.

    Upgrading to Oracle Database 11g - This article provides a brief overview of the areas involved in upgrading existing databases to Oracle 11g.


    Oracle Database 11g Release 1: Miscellaneous Articles

The articles in this section are about new features that are NOT included in the 1Z0-050 OCP upgrade exam.

    Data Pump Enhancements in Oracle Database 11g Release 1 - This article provides an overview of the main Data Pump enhancements in Oracle Database 11g Release 1.

    DBMS_XA - Process a single transaction across multiple sessions using the DBMS_XA package.

Native Oracle XML DB Web Services in Oracle 11g Release 1 - Present your existing PL/SQL procedures and functions as web services using XML DB in Oracle 11g.

    PIVOT and UNPIVOT Operators in Oracle Database 11g Release 1 - Learn how to use the new PIVOT and UNPIVOT operators in Oracle Database 11g Release 1.

    PL/SQL Hierarchical Profiler in Oracle Database 11g Release 1 - Use the DBMS_HPROF package and the plshprof utility to generate and analyze hierarchical profiler data for PL/SQL programs.

    PL/SQL New Features and Enhancements in Oracle Database 11g Release 1 - Get to grips with the new features and enhancements to PL/SQL in Oracle 11g Release 1.

Real-Time SQL Monitoring using DBMS_SQLTUNE (REPORT_SQL_MONITOR, REPORT_SQL_MONITOR_LIST and REPORT_SQL_DETAIL) - Monitor the performance of SQL queries while they are running.

    Trigger Enhancements in Oracle Database 11g Release 1 - Learn about the enhancements to triggers in Oracle Database 11g Release 1.

    Virtual Columns in Oracle Database 11g Release 1 - Use virtual columns to make derived data appear as columns in your tables in Oracle 11g Release 1.


    Oracle Database 11g Release 2: Miscellaneous Articles

    APPEND_VALUES Hint in Oracle Database 11g Release 2 - Use the APPEND_VALUES hint to perform direct-path inserts when using the FORALL statement.

    Oracle ASM Cluster File Systems (ACFS) in Oracle Database 11g Release 2 - This article describes three ways to create an ASM Cluster File System in an Oracle 11G Release 2 RAC database.

    Auditing Enhancements (DBMS_AUDIT_MGMT) in Oracle Database 11g Release 2 - Oracle 11gR2 allows better management of the audit trail using the DBMS_AUDIT_MGMT package.

    DBMS_PARALLEL_EXECUTE - Use the DBMS_PARALLEL_EXECUTE package to break down large workloads into manageable chunks that can be run in parallel.

    Direct NFS (DNFS) Clonedb in Oracle Database 11g Release 2 (Patchset 11.2.0.2) - Use this new DNFS feature of patchset 11.2.0.2 to clone databases from backups quicker than you ever have before.

Edition Based Redefinition in Oracle Database 11g Release 2 - Upgrade database components online using this new feature of Oracle Database 11g Release 2.

    Oracle Database File System (DBFS) in Oracle Database 11g Release 2 - This article provides an overview of the Oracle Database File System (DBFS) introduced in Oracle Database 11g Release 2.

    Oracle Database File System (DBFS) PL/SQL APIs - This article provides an overview of the Oracle Database File System (DBFS) PL/SQL APIs introduced in Oracle Database 11g Release 2.

    Oracle Database 11g Release 2 Data Guard Setup using Oracle Grid Control - This article describes the creation of a standby database using Oracle Database 11g Release 2 and the Oracle Grid Control.

    Remote Scheduler Agent Installation for Oracle 11g Release 2 - An overview of the installation of the Remote Scheduler Agent for Oracle 11g Release 2.

    Segment Creation on Demand (Deferred Segment Creation) in Oracle Database 11g Release 2 - Save space when creating new tables containing no rows in Oracle Database 11g Release 2.

    Scheduler Enhancements in Oracle Database 11g Release 2 - Oracle 11gR2 adds more functionality to the Oracle scheduler.

    Oracle Database 11g Release 1 Installations

    Oracle Database 11g Release 1 (11.1) Installation On Oracle Linux 4 and 5 - A brief guide to installing Oracle Database 11g Release 1 (11.1) on Oracle Linux 4 and 5.

    Oracle Database 11g Release 1 (11.1) Installation On Fedora 7 - A brief guide to installing Oracle Database 11g Release 1 (11.1) on Fedora 7.

    Oracle Database 11g Release 1 (11.1) Installation On Fedora 8 - A brief guide to installing Oracle Database 11g Release 1 (11.1) on Fedora 8.

    Oracle Database 11g Release 1 (11.1) Installation On Fedora 9 - A brief guide to installing Oracle Database 11g Release 1 (11.1) on Fedora 9.

    Oracle Database 11g Release 1 (11.1) Installation On Fedora 10 - A brief guide to installing Oracle Database 11g Release 1 (11.1) on Fedora 10.

    Oracle Database 11g Release 1 (11.1) Installation On Fedora 11 - A brief guide to installing Oracle Database 11g Release 1 (11.1) on Fedora 11.

    Oracle Database 11g Release 1 RAC On Linux Using NFS - This article describes the installation of Oracle Database 11g release 1 (11.1) RAC on Linux (Oracle Linux 5) using NFS to provide the shared storage.

    Oracle Database 11g Release 1 RAC On RHEL5 Linux Using VMware ESX Server and NFS - This article describes the installation of Oracle Database 11g release 1 (11.1) RAC on Red Hat Enterprise Linux 5 using VMware ESX Server and NFS for shared storage.

    Oracle Database 11g Release 1 RAC On Linux Using VMware Server 1.x - This article describes the installation of Oracle Database 11g release 1 (11.1) RAC on Linux (Oracle Linux 5) using VMware Server 1.x with no additional shared disk devices.

    Oracle Database 11g Release 1 RAC On Linux Using VMware Server 2 - This article describes the installation of Oracle Database 11g release 1 (11.1) RAC on Linux (Oracle Linux 5) using VMware Server 2 with no additional shared disk devices.

    Oracle Database 11g RAC On Windows 2003 Using VMware Server - This article describes the installation of Oracle Database 11g release 1 (11.1) RAC on Windows 2003 Server Standard Edition using VMware Server with no additional shared disk devices.

    Oracle Database 11g Release 2 Installations

    DNS Configuration for the SCAN used with Oracle RAC Database 11g Release 2 - A basic description of the DNS configuration required for the SCAN associated with an Oracle RAC database in 11g Release 2.

    Oracle Database 11g Release 2 RAC On Linux Using NFS - This article describes the installation of Oracle Database 11g Release 2 (11.2) RAC on Linux (Oracle Linux 5) using NFS to provide the shared storage.

    Oracle Database 11g Release 2 (11.2) Installation On Oracle Linux 5 - A brief guide to installing Oracle Database 11g Release 2 (11.2) on Oracle Linux 5.

    Oracle Database 11g Release 2 (11.2) Installation On Oracle Linux 6 - A brief guide to installing Oracle Database 11g Release 2 (11.2) on Oracle Linux 6.

    Oracle Database 11g Release 2 (11.2) Installation On Fedora 12 (F12) - A brief guide to installing Oracle Database 11g Release 2 (11.2) on Fedora 12 (F12).

    Oracle Database 11g Release 2 (11.2) Installation On Fedora 13 (F13) - A brief guide to installing Oracle Database 11g Release 2 (11.2) on Fedora 13 (F13).

    Oracle Database 11g Release 2 (11.2) Installation On Fedora 14 (F14) - A brief guide to installing Oracle Database 11g Release 2 (11.2) on Fedora 14 (F14).

    Oracle Database 11g Release 2 (11.2) Installation On Fedora 15 (F15) - A brief guide to installing Oracle Database 11g Release 2 (11.2) on Fedora 15 (F15).

    Oracle Database 11g Release 2 (11.2.0.2) Installation On Solaris 10 (x86-64) - A brief guide to installing Oracle 11g Release 2 (11.2.0.2) on Solaris 10 (x86-64).

    Oracle Database 11g Release 2 RAC On Linux Using VirtualBox - This article describes the installation of Oracle Database 11g release 2 (11.2) RAC on Linux (Oracle Enterprise Linux 5.5) using VirtualBox with no additional shared disk devices.

    Oracle 11g Release 2 RAC On Linux Using VMware Server 2 - This article describes the installation of Oracle Database 11g release 2 (11.2) RAC on Linux (Oracle Enterprise Linux 5.4) using VMware Server 2 with no additional shared disk devices.

    Oracle 11g Grid Control

    OEM Grid Control 11gR1 Installation on Oracle Linux 5.5 (64-bit) - Learn how to install OEM Grid Control 11gR1 on Oracle Linux 5.5 (64-bit).

    External Links

    Oracle Database 11g: The Top New Features for DBAs and Developers

    Real-Time SQL Monitoring Using DBMS_SQLTUNE.REPORT_SQL_MONITOR

    posted Aug 2, 2011, 5:19 PM by Sachchida Ojha

Many times a DBA wants to know where a SQL statement is in its execution plan and where the time is being spent. There are a few ways to find out this information, but an 11g new feature makes gathering it extremely easy. Oracle 11g Real-Time SQL Monitoring allows you to monitor the performance of SQL statements while they are executing, as well as see the breakdown of time and resources used for recently completed statements. It is on by default when STATISTICS_LEVEL is set to ALL or TYPICAL (the default value) and monitors statements that consume more than 5 seconds of CPU or I/O time, as well as any parallel execution (parallel query, parallel DML, parallel DDL). You can override the default behavior by using the MONITOR or NO_MONITOR hint. The 11g documentation shows a text version of a SQL Monitor report, but the report output can be HTML, text, or XML.
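For example, a text report for a monitored statement can be pulled with DBMS_SQLTUNE.REPORT_SQL_MONITOR (the SQL_ID below is a placeholder):

SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 200 PAGESIZE 0
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => '5npzfdqz4cu3a',
         type         => 'TEXT',
         report_level => 'ALL') AS report
FROM   dual;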

    Automatic Memory Management (AMM) in 11G

    posted Jul 26, 2011, 8:55 AM by Sachchida Ojha

Automatic Memory Management is introduced in 11g. It enables automatic tuning of the PGA and SGA with the use of two new parameters named MEMORY_MAX_TARGET and MEMORY_TARGET.

    The most important SGA components are the following:

• Database Buffer Cache: The portion of the SGA that holds copies of data blocks read from datafiles. All users concurrently connected to the instance share access to the database buffer cache.
  Parameters: DB_CACHE_SIZE, DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE

• Redo Log Buffer: A circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary.
  Parameter: LOG_BUFFER

• Shared Pool: Contains the library cache, the dictionary cache, the result cache, buffers for parallel execution messages, and control structures.
  Parameters: SHARED_POOL_SIZE, SHARED_POOL_RESERVED_SIZE, RESULT_CACHE_MAX_SIZE *

• Large Pool: Used for allocating session memory for shared server, Oracle XA, parallel query buffers, or RMAN.
  Parameter: LARGE_POOL_SIZE

• Java Pool: Used in server memory for all session-specific Java code and data within the JVM.
  Parameter: JAVA_POOL_SIZE

• Streams Pool: Used exclusively by Oracle Streams. The Streams pool stores buffered queue messages, and it provides memory for Oracle Streams capture processes and apply processes.
  Parameter: STREAMS_POOL_SIZE

* RESULT_CACHE_MAX_SIZE is a new component introduced as part of the 11g memory architecture. The result cache is composed of the SQL query result cache and the PL/SQL function result cache, which share the same infrastructure. Results of queries and query fragments can be cached in memory in the SQL query result cache. The database can then use cached results to answer future executions of these queries and query fragments. Similarly, PL/SQL function results can also be cached.

You use the RESULT_CACHE_MODE initialization parameter, which determines the SQL query result cache behavior. The possible values are MANUAL and FORCE.
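For example, with the default MANUAL mode the cache is only used when a statement requests it via the RESULT_CACHE hint (sales is a placeholder table):

SQL> ALTER SYSTEM SET RESULT_CACHE_MODE = MANUAL;
SQL> SELECT /*+ RESULT_CACHE */ product_id, SUM(amount)
     FROM   sales
     GROUP  BY product_id;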


    Oracle Database 11g supports various memory management methods, which are chosen by initialization parameter settings. Oracle recommends that you enable the automatic memory management method.

    1. Automatic Memory Management – For Both the SGA and Instance PGA
    2. Automatic Shared Memory Management – For the SGA
    3. Manual Shared Memory Management – For the SGA
    4. Automatic PGA Memory Management – For the Instance PGA
    5. Manual PGA Memory Management – For the Instance PGA

    1.Automatic Memory Management – For Both the SGA and Instance PGA

    Beginning with Oracle Database 11g, Oracle Database can manage the SGA memory and instance PGA memory completely automatically. You designate only the total memory size to be used by the instance, and Oracle Database dynamically exchanges memory between the SGA and the instance PGA as needed to meet processing demands. This capability is referred to as automatic memory management. With this memory management method, the database also dynamically tunes the sizes of the individual SGA components and the sizes of the individual PGAs.

    To achieve this, two new parameters have been introduced named MEMORY_MAX_TARGET and MEMORY_TARGET. To do so (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET) and optionally a maximum memory size initialization parameter (MEMORY_MAX_TARGET).


    If you create your database with Database Configuration Assistant (DBCA) and choose the basic installation option, automatic memory management is enabled. If you choose advanced installation, Database Configuration Assistant (DBCA) enables you to select automatic memory management.

    Switching to Automatic Memory Management

    1)Check the current values configured for SGA_TARGET and PGA_AGGREGATE_TARGET.

SQL> SHOW PARAMETER TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 0
memory_target                        big integer 0
pga_aggregate_target                 big integer 200M
sga_target                           big integer 500M

Add the values of PGA_AGGREGATE_TARGET and SGA_TARGET. In our case they sum to 700M.

2) Decide on the maximum amount of memory you want to allocate to the database; this determines the maximum value for the sum of the SGA and instance PGA sizes. In our case we decide to set it to 808M.

    3)Change the parameter in initialization parameter file.

    Using Spfile
    ========

    SQL>ALTER SYSTEM SET MEMORY_MAX_TARGET = 808M SCOPE = SPFILE;
    SQL>ALTER SYSTEM SET MEMORY_TARGET = 808M SCOPE = SPFILE;
    SQL>ALTER SYSTEM SET SGA_TARGET =0 SCOPE = SPFILE;
    SQL>ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE = SPFILE;

    Using Pfile
    =======
    If you have started the instance with Pfile, then edit the pfile and set the parameters manually


    MEMORY_MAX_TARGET = 808M
    MEMORY_TARGET = 808M
    SGA_TARGET =0
    PGA_AGGREGATE_TARGET = 0


If you do not specify any value for MEMORY_MAX_TARGET and only use MEMORY_TARGET, then the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET.

    If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
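For example, assuming MEMORY_MAX_TARGET was set to 808M as above, MEMORY_TARGET could later be adjusted dynamically:

SQL> ALTER SYSTEM SET MEMORY_TARGET = 600M SCOPE = BOTH;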

MEMORY_MAX_TARGET is a static parameter, i.e., it cannot be changed dynamically; the instance must be restarted to modify its value. So ensure that you set it to an appropriate value.

    4)Shutdown and startup the database

    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.

    Total System Global Area  845348864 bytes
    Fixed Size                  1303188 bytes
    Variable Size             469765484 bytes
    Database Buffers          369098752 bytes
    Redo Buffers                5181440 bytes
    SQL> show parameter target

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    archive_lag_target                   integer     0
    db_flashback_retention_target        integer     1440
    fast_start_io_target                 integer     0
    fast_start_mttr_target               integer     0
    memory_max_target                    big integer 808M
    memory_target                        big integer 808M
    pga_aggregate_target                 big integer 0
    sga_target                           big integer 0


    The preceding steps instruct you to set SGA_TARGET and PGA_AGGREGATE_TARGET to zero so that the sizes of the SGA and instance PGA are tuned up and down as required, without restrictions. You can omit the statements that set these parameter values to zero and leave either or both of the values as positive numbers. In this case, the values act as minimum values for the sizes of the SGA or instance PGA.


Note: If you set any parameter to a value higher than MEMORY_TARGET, you get the following error. For example, setting SGA_MAX_SIZE to a value of 900M results in the following:

    SQL> startup
    ORA-00844: Parameter not taking MEMORY_TARGET into account, see alert log for more information

    00844, 00000, "Parameter not taking MEMORY_TARGET into account, see alert log for more information"
    // *Cause: The parameter was larger than MEMORY_TARGET.
    // *Action: Set the parameter to a lower value than MEMORY_TARGET.

    Monitoring and Tuning Automatic Memory Management

    The dynamic performance view V$MEMORY_DYNAMIC_COMPONENTS shows the current sizes of all dynamically tuned memory components, including the total sizes of the SGA and instance PGA.
    The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter.

    SQL> select * from v$memory_target_advice order by memory_size;

You can also use V$MEMORY_RESIZE_OPS, which has a circular history buffer of the last 800 memory resize requests.
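For example, the current sizes of the dynamically tuned components can be checked as follows:

SQL> SELECT component, current_size/1024/1024 AS current_mb
     FROM   v$memory_dynamic_components
     WHERE  current_size > 0
     ORDER  BY component;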

    2. Automatic Shared Memory Management – For the SGA

If you want to exercise more direct control over the size of the SGA, you can disable automatic memory management and enable automatic shared memory management. This feature was introduced in 10g with a parameter known as SGA_TARGET. When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of the current workload without requiring any additional configuration.

    Please refer to following document for setting SGA_TARGET

    Note 295626.1 - How To Use Automatic Shared Memory Management (ASMM) In Oracle10g

If you have enabled Automatic Memory Management and want to switch to Automatic Shared Memory Management, follow the procedure below:

    SQL>Alter system set MEMORY_TARGET=0 scope=both;
    SQL>Alter system set SGA_TARGET=500M scope=both;

    3. Manual Shared Memory Management – For the SGA


    If you want complete control of individual SGA component sizes, you can disable both automatic memory management and automatic shared memory management. In this mode, you need to set the sizes of several individual SGA components, thereby determining the overall SGA size. You then manually tune these individual SGA components on an ongoing basis.

In this case, you set SGA_TARGET and MEMORY_TARGET to 0 and set values for the other SGA components, up to the value of SGA_MAX_SIZE.
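A minimal sketch of this configuration (the sizes shown are placeholders only) might look like this:

SQL> ALTER SYSTEM SET MEMORY_TARGET = 0 SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SGA_TARGET = 0 SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SGA_MAX_SIZE = 600M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 200M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 300M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET LARGE_POOL_SIZE = 32M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET JAVA_POOL_SIZE = 32M SCOPE = SPFILE;
-- Restart the instance for SGA_MAX_SIZE to take effect.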

    Please refer to Note 148495.1 - Dynamic SGA

    4. Automatic PGA Memory Management – For the Instance PGA


When using Automatic Memory Management, PGA memory is allocated based upon the value of MEMORY_TARGET. If you enable automatic shared memory management or manual shared memory management, you also implicitly enable automatic PGA memory management.

Automatic or manual PGA memory management is decided by the initialization parameter WORKAREA_SIZE_POLICY, which is a session- and system-level parameter that can take only two values: MANUAL or AUTO. The default is AUTO.


With automatic PGA memory management, you set a target size for the instance PGA by defining a value for the PGA_AGGREGATE_TARGET parameter; sizing of SQL work areas is automatic, and all *_AREA_SIZE initialization parameters are ignored for these sessions. This feature has been available since 9i.

    At any given time, the total amount of PGA memory available to active work areas on the instance is automatically derived from the parameter PGA_AGGREGATE_TARGET. This amount is set to the value of PGA_AGGREGATE_TARGET minus the PGA memory allocated for other purposes (for example, session memory). The resulting PGA memory is then allotted to individual active work areas based on their specific memory requirements.
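For example (the target size is a placeholder), automatic PGA memory management can be configured and checked as follows:

SQL> ALTER SYSTEM SET WORKAREA_SIZE_POLICY = AUTO;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 200M;
SQL> SELECT name, value
     FROM   v$pgastat
     WHERE  name IN ('aggregate PGA target parameter', 'total PGA allocated');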

    Please refer to following document for more details on Automatic PGA Memory Management

    Note 223730.1 - Automatic PGA Memory Managment in 9i and 10g

    5.Manual PGA Memory Management – For the Instance PGA


If you wish to manually specify the maximum work area size for each type of SQL operator (such as sort or hash-join), you can enable manual PGA memory management.

Set WORKAREA_SIZE_POLICY to MANUAL and also specify values for the *_AREA_SIZE parameters, such as SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE.
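For example, at the session level (the sizes shown are placeholders only):

SQL> ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL;
SQL> ALTER SESSION SET SORT_AREA_SIZE = 1048576;
SQL> ALTER SESSION SET HASH_AREA_SIZE = 2097152;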


Although Oracle Database 11g supports this manual PGA memory management method, Oracle strongly recommends that you leave automatic PGA memory management enabled.

The table below summarizes the various memory management methods:

• Automatic memory management (AMM)
  For: SGA and PGA
  You set: total memory target size for the Oracle instance (MEMORY_TARGET) and, optionally, a maximum memory size for the Oracle instance (MEMORY_MAX_TARGET)
  Oracle Database automatically tunes: total SGA size, SGA component sizes, instance PGA size, individual PGA sizes

• Automatic shared memory management (ASMM), with AMM disabled
  For: SGA
  You set: SGA target size (SGA_TARGET) and, optionally, SGA maximum size (SGA_MAX_SIZE)
  Oracle Database automatically tunes: SGA component sizes

• Manual shared memory management, with AMM and ASMM disabled
  For: SGA
  You set: shared pool size (SHARED_POOL_SIZE), buffer cache size (DB_CACHE_SIZE or DB_BLOCK_BUFFERS), Java pool size (JAVA_POOL_SIZE), large pool size (LARGE_POOL_SIZE) and, optionally, SGA maximum size (SGA_MAX_SIZE)
  Oracle Database automatically tunes: none

• Automatic PGA memory management
  For: PGA
  You set: instance PGA target size (PGA_AGGREGATE_TARGET)
  Oracle Database automatically tunes: individual PGA sizes

• Manual PGA memory management (not recommended)
  For: PGA
  You set: maximum work area size for each type of SQL operator
  Oracle Database automatically tunes: none


The Automatic Memory Management (AMM) feature uses a background process named Memory Manager (MMAN). This process was introduced in 10g, where it assisted in Automatic Shared Memory Management (ASMM) using SGA_TARGET. MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components. The SGA Memory Broker keeps track of the sizes of the components and of pending resize operations.

    Oracle 11.2.0.2 patch set review

    posted Jul 7, 2011, 6:36 AM by Sachchida Ojha

    Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).

    Note the following changes with the new patch set packaging:

    1.     New installations consist of installing the most recent patch set, rather than installing a base release and then upgrading to a patch release.
    2.     Direct upgrades from previous releases to the most recent patch set are supported.
3.     Out-of-place patch set upgrades are recommended, in which you install the patch set into a new, separate Oracle home. In-place upgrades are supported, but not recommended.

    In-Place Upgrade

This feature enables you to upgrade an existing installation of Oracle Database to Oracle Database 11g Release 2 (11.2.0.2) in the same Oracle home directory by replacing the existing database files. This type of patch set application requires more downtime and is not recommended, but it requires less disk space.

    Note:

    If you select In-place upgrade to upgrade Oracle database to the latest version, then you must:

    • Review "Optionally Performing an In-Place Upgrade (Into the Same Oracle Home)" in Oracle Database Upgrade Guide.
    • Take a complete backup of your Oracle home directory before you perform the upgrade.

    Out-of-Place Upgrade

    This feature enables you to install Oracle Database 11g Release 2 (11.2.0.2) in a new, separate Oracle home directory. After the installation is complete, the data from the database in the old Oracle home directory is migrated to the database in the new Oracle home directory.

The out-of-place upgrade process is similar to an Oracle database upgrade from one version to another and is the recommended type. The benefits of this upgrade type are that the patch set application requires much less downtime and is safer if the patch set application must be aborted. This upgrade type requires more disk space to accommodate two Oracle home directories simultaneously.

    Automatic Memory Management (AMM) in Oracle Database 11g

    posted Oct 4, 2010, 1:53 PM by Sachchida Ojha

    Oracle 11g allows you to allocate one chunk of memory, which Oracle uses to dynamically manage both the SGA and PGA.

    Automatic memory management is configured using two new initialization parameters:

    MEMORY_TARGET: The amount of shared memory available for Oracle to use when dynamically controlling the SGA and PGA. This parameter is dynamic, so the total amount of memory available to Oracle can be increased or decreased, provided it does not exceed the MEMORY_MAX_TARGET limit. The default value is "0".

MEMORY_MAX_TARGET: This defines the maximum size the MEMORY_TARGET can be increased to without an instance restart. If MEMORY_MAX_TARGET is not specified, it defaults to the MEMORY_TARGET setting.

    Enabling automatic memory management:


Assuming you want to use a similar amount of memory to your current settings, use the following calculation.

    MEMORY_TARGET = SGA_TARGET + GREATEST(PGA_AGGREGATE_TARGET, "maximum PGA allocated")

    The following queries show you how to display the relevant information and how to combine it in a single statement to calculate the required value.

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('pga_aggregate_target', 'sga_target')
    UNION
    SELECT 'maximum PGA allocated' AS name, TO_CHAR(value) AS value
    FROM   v$pgastat
    WHERE  name = 'maximum PGA allocated';

    -- Calculate MEMORY_TARGET
    SELECT sga.value + GREATEST(pga.value, max_pga.value) AS memory_target
    FROM (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'sga_target') sga,
    (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'pga_aggregate_target') pga,
    (SELECT value FROM v$pgastat WHERE name = 'maximum PGA allocated') max_pga;

    Assuming our required setting was 15G, we might issue the following statements.

    CONN / AS SYSDBA
    -- Set the static parameter. Leave some room for possible future growth without restart.
    ALTER SYSTEM SET MEMORY_MAX_TARGET=16G SCOPE=SPFILE;

    -- Set the dynamic parameters. Assuming Oracle has full control.
    ALTER SYSTEM SET MEMORY_TARGET=15G SCOPE=SPFILE;
    ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;
    ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;

    -- Restart instance.
    SHUTDOWN IMMEDIATE;
    STARTUP;

Once the database is restarted, the MEMORY_TARGET parameter can be amended as required without an instance restart.

ALTER SYSTEM SET MEMORY_TARGET=14G SCOPE=BOTH;
