posted Aug 5, 2011, 6:32 AM by Sachchida Ojha
Data Pump Components:
- Direct Path API (DPAPI):
Oracle Database 10g supports a direct path API interface that minimizes
data conversion and parsing at both unload and load time.
- External Table Services: Data Pump uses the new ORACLE_DATAPUMP
access driver, which provides external tables with write and read access to
files containing binary streams.
- The DBMS_METADATA package is used by worker processes for all
metadata unloading and loading. Database object definitions are stored
using XML rather than SQL.
- The DBMS_DATAPUMP package embodies the API for high-speed export and import utilities for bulk data and metadata movement.
- The SQL*Loader client has been integrated with external tables,
thereby providing automatic migration of loader control files to
external table access parameters.
- The expdp and impdp clients are thin layers that make calls to
the DBMS_DATAPUMP package to initiate and monitor Data Pump operations.
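Because expdp and impdp are only thin clients, the same work can be driven directly from PL/SQL. A minimal sketch of a schema export through DBMS_DATAPUMP (the job name, dump file name, and directory object are illustrative and assume a directory such as the dpdata object created later in this post):

DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job (names are examples only)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_EXP_API');
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'scott_api.dmp',
                         directory => 'DPDATA');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);  -- the job keeps running in the server
END;
/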
Master Table
- During the operation, a master table is maintained in the schema of the
user who initiated the Data Pump export. The master table has the same name
as the Data Pump job and maintains one row per object with status
information. The master table is the heart of every Data Pump operation: it
holds all the information about the job, and Data Pump uses it to restart a
failed or suspended job. The master table is dropped (by default) when the
Data Pump job finishes successfully.
- The master table is written to the dump file set as the last
step of the export dump operation and is removed from the user’s schema.
For the import dump operation, the master table is loaded from the dump
file set to the user’s schema as the first step and is used to sequence
the objects being imported.
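The job, and therefore the master table, can be located from the data dictionary. A quick check (the job name SYS_EXPORT_SCHEMA_01 below is only an example of the default naming):

SQL> SELECT owner_name, job_name, state
       FROM dba_datapump_jobs;

SQL> -- the master table has the same name as the job, in the invoking user's schema
SQL> SELECT COUNT(*) FROM scott.SYS_EXPORT_SCHEMA_01;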
Data Pump Processes
All Data Pump work is done
through jobs. Data Pump jobs, unlike DBMS jobs, are merely server
processes that process the data on behalf of the main process. The main
process, known as the master control process, coordinates this effort via
Advanced Queuing; it does so through a special table created at runtime
known as the master table.
- Client process: This process
is initiated by the client utility (expdp, impdp, or another client) to
make calls to the Data Pump API. Because Data Pump is completely
integrated with the database, once the Data Pump job is initiated this
process is not required for the job to continue.
- Shadow process: When a client logs in to the Oracle database, a
foreground process is created (a standard feature of Oracle). This
shadow process services the client's Data Pump API requests. It
creates the master table and the Advanced Queuing (AQ) queues used
for communication. Once the client process ends, the shadow process
also goes away.
- Master control process (MCP): Master control process controls
the execution of the Data Pump job; there is one MCP per job. MCP
divides the Data Pump job into various metadata and data load or unload
jobs and hands them over to the worker processes. The MCP has a process
name of the format ORACLE_SID_DMnn_PROCESS_ID. It maintains the job
state, job description, restart, and file information in the master
table.
- Worker process: The MCP creates the worker processes based on
the value of the PARALLEL parameter. The worker processes perform the
tasks requested by the MCP, mainly loading or unloading data and metadata.
The worker processes have names of the format ORACLE_SID_DWnn_PROCESS_ID.
They maintain their current status in the master table, which
can be used to restart a failed job.
- Parallel Query (PQ) processes: The worker processes can
initiate parallel query processes if the external table access method is
used for loading or unloading. These are standard parallel query
slaves of the parallel execution architecture.
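While a job is running, the MCP and worker sessions can usually be spotted from V$SESSION; a hedged sketch (the program-name pattern may vary by platform and release):

SQL> SELECT s.sid, s.serial#, s.program
       FROM v$session s
      WHERE s.program LIKE '%(DM%'
         OR s.program LIKE '%(DW%';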
Data Pump Benefits
- Data access methods: direct path and external tables
- Detachment from and reattachment to long-running jobs
- Restarting of Data Pump jobs
- Fine-grained object and data selection
- Explicit database version specification
- Parallel execution
- Estimation of export job space consumption
- Network mode in a distributed environment: Data Pump operations can be
performed from one database to another over a database link, without
writing to a dump file.
- Remapping capabilities during import
- Data sampling and metadata compression
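For instance, the space-estimation and data-sampling benefits above map to the ESTIMATE_ONLY and SAMPLE parameters; a hedged sketch (credentials and values are illustrative):

expdp scott/tiger DIRECTORY=dpdata SCHEMAS=scott ESTIMATE_ONLY=y ESTIMATE=statistics
expdp scott/tiger DIRECTORY=dpdata DUMPFILE=scott_10pct.dmp SCHEMAS=scott SAMPLE=10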
Create Data Pump Directory
- Since the Data Pump is server based, directory objects must be created in the database where the
Data Pump files will be stored.
- The user executing Data Pump must have been granted permissions
on the directory. READ permission is required to perform import, and
WRITE permission is required to perform export and to create log files
or SQL files.
- The operating system user who owns the Oracle software installation and
database files must have READ and WRITE operating system privileges on the
directory. The database user, however, does NOT need any operating system
privileges on the directory for Data Pump to succeed.
SQL> SELECT * FROM dba_directories;
SQL> CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp';
-- Only a directory object named DATA_PUMP_DIR acts as the default; any other
-- directory object (such as dpdata below) must be passed via the DIRECTORY parameter
SQL> CREATE OR REPLACE DIRECTORY dpdata AS '/u01/app/oracle/dpdump';  -- path shown is only an example
SQL> GRANT READ, WRITE ON DIRECTORY dpdata TO scott;
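To confirm which database users can use a directory object, the grants can be checked in the dictionary (a quick sketch):

SQL> SELECT grantee, privilege
       FROM dba_tab_privs
      WHERE table_name = 'DPDATA';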
Export Modes
1. Full
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp%U.dmp FULL=y LOGFILE=exp.log JOB_NAME=expdp PARALLEL=30
2. User (Owner)
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp SCHEMAS=scott LOGFILE=exp.log JOB_NAME=expdp
3. Table
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp TABLES=scott.emp,blake.dept LOGFILE=exp.log JOB_NAME=expdp
4. Tablespace
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp TABLESPACES=users,tools TRANSPORT_FULL_CHECK=y LOGFILE=exp.log JOB_NAME=expdp
5. Transportable Tablespace Export
ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE example READ ONLY;
expdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp_tbs.dmp TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=exp_tbs.log
ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;
6. Export metadata
expdp \'/ as sysdba\' SCHEMAS=scott DIRECTORY=dpdata DUMPFILE=meta.dmp CONTENT=metadata_only
Import Modes
1. Full
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp.dmp FULL=y LOGFILE=imp.log JOB_NAME=impdp
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=exp%U.dmp FULL=y LOGFILE=imp.log JOB_NAME=DMFV_impdp PARALLEL=30
2. User (Owner)
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp SCHEMAS=scott LOGFILE=imp.log JOB_NAME=impdp
3. Table
impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp TABLES=scott.emp,blake.dept LOGFILE=imp.log JOB_NAME=impdp
4. Tablespace
5. Transportable Tablespace
Other Features
REMAP_DATAFILE
REMAP_SCHEMA
REMAP_TABLESPACE
TRANSFORM
NETWORK_LINK
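A hedged example combining several of these parameters on import (schema, tablespace, and database link names are illustrative):

impdp \'/ as sysdba\' DIRECTORY=dpdata DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_dev REMAP_TABLESPACE=users:users_dev LOGFILE=imp_remap.log

impdp \'/ as sysdba\' SCHEMAS=scott NETWORK_LINK=source_db REMAP_SCHEMA=scott:scott_copy LOGFILE=imp_netlink.log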
Data Pump Monitoring
1. Script:
SQL> @dpstatus.sql

-- dpstatus.sql
TTITLE 'Currently Active DataPump Operations'
COL owner_name        FORMAT A06  HEADING 'Owner'
COL job_name          FORMAT A20  HEADING 'JobName'
COL operation         FORMAT A12  HEADING 'Operation'
COL job_mode          FORMAT A12  HEADING 'JobMode'
COL state             FORMAT A12  HEADING 'State'
COL degree            FORMAT 9999 HEADING 'Degr'
COL attached_sessions FORMAT 9999 HEADING 'Sess'

SELECT owner_name, job_name, operation, job_mode, state, degree, attached_sessions
  FROM dba_datapump_jobs;

TTITLE 'Currently Active DataPump Sessions'
COL owner_name FORMAT A06 HEADING 'Owner'
COL job_name   FORMAT A20 HEADING 'Job'
COL osuser     FORMAT A12 HEADING 'UserID'

SELECT DPS.owner_name, DPS.job_name, S.osuser, S.sid, S.serial#, S.status
  FROM dba_datapump_sessions DPS, v$session S
 WHERE S.saddr = DPS.saddr;
2. DBA_DATAPUMP_JOBS
Shows how many worker processes (column DEGREE) are working on the job.
3. DBA_DATAPUMP_SESSIONS
When joined with the previous view and V$SESSION, this gives the SID of the session of the main foreground process.
SQL> select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;
4. Alert log
When the process starts up, the MCP and the worker processes are shown in the alert log as follows:

kupprdp: master process DM00 started with pid=23, OS id=20530
         to execute - SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW01 started with worker id=1, pid=24, OS id=20532
         to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW03 started with worker id=2, pid=25, OS id=20534
         to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
5. V$SESSION_LONGOPS
Can be used to predict the time it will take to complete the job.
select sid, serial#, sofar, totalwork from v$session_longops where sofar != totalwork;
Controlling Data Pump Jobs
1. Switching Between Logging and Interactive Client Mode:
expdp system/oracle parfile=c:\rmancmd\longexport.dpectl
1.1 Switch this job to interactive client mode by typing CTRL-C.
1.2 Switch from interactive client mode back to logging mode:
Export> CONTINUE_CLIENT
2. Viewing Job Status:
From interactive mode
Export> status
3. Closing and Reattaching To A Job
3.1 detach from a client session, but leave a running job still executing
Export> exit_client
3.2 To reattach to a running job after closing the client connection, reopen a session with:
$ expdp system/oracle attach=longexport
4. Halt the execution of the job:
Export> stop_job
restart the job:
Export> START_JOB
5. End the job
Export> kill_job
posted Aug 5, 2011, 5:02 AM by Sachchida Ojha
Data Pump Export accepts original Export parameters when they map to a corresponding Data Pump parameter. Table 4-1
describes how Data Pump Export interprets original Export parameters.
Parameters that have the same name and functionality in both original
Export and Data Pump Export are not included in this table.
Table 4-1 How Data Pump Export Handles Original Export Parameters

BUFFER: This parameter is ignored.

COMPRESS: This parameter is ignored. In original Export, the COMPRESS parameter affected how the initial extent was managed. Setting COMPRESS=n caused original Export to use current storage parameters for the initial and next extent. The Data Pump Export COMPRESSION parameter is used to specify how data is compressed in the dump file, and is not related to the original Export COMPRESS parameter.

CONSISTENT: Data Pump Export determines the current time and uses FLASHBACK_TIME.

CONSTRAINTS: If original Export used CONSTRAINTS=n, then Data Pump Export uses EXCLUDE=CONSTRAINTS. The default behavior is to include constraints as part of the export.

DIRECT: This parameter is ignored. Data Pump Export automatically chooses the best export method.

FEEDBACK: The Data Pump Export STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the export job, as well as the rows being processed. In original Export, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Export, the status is given every so many seconds, as specified by STATUS.

FILE: Data Pump Export attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access. See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Export FILE parameter.

GRANTS: If original Export used GRANTS=n, then Data Pump Export uses EXCLUDE=GRANT. If original Export used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

INDEXES: If original Export used INDEXES=n, then Data Pump Export uses the EXCLUDE=INDEX parameter. If original Export used INDEXES=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

LOG: Data Pump Export attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access. See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Export LOG parameter. The contents of the log file will be those of a Data Pump Export operation. See "Log Files" for information about log file location and content.

OBJECT_CONSISTENT: This parameter is ignored because Data Pump Export processing ensures that each object is in a consistent state when being exported.

OWNER: The Data Pump SCHEMAS parameter is used.

RECORDLENGTH: This parameter is ignored because Data Pump Export automatically takes care of buffer sizing.

RESUMABLE: This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

RESUMABLE_NAME: This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

RESUMABLE_TIMEOUT: This parameter is ignored because Data Pump Export automatically provides this functionality to users who have been granted the EXP_FULL_DATABASE role.

ROWS: If original Export used ROWS=y, then Data Pump Export uses the CONTENT=ALL parameter. If original Export used ROWS=n, then Data Pump Export uses the CONTENT=METADATA_ONLY parameter.

STATISTICS: This parameter is ignored because statistics are always saved for tables as part of a Data Pump export operation.

TABLESPACES: If original Export also specified TRANSPORT_TABLESPACE=n, then Data Pump Export ignores the TABLESPACES parameter. If original Export also specified TRANSPORT_TABLESPACE=y, then Data Pump Export takes the names listed for the TABLESPACES parameter and uses them on the Data Pump Export TRANSPORT_TABLESPACES parameter.

TRANSPORT_TABLESPACE: If original Export used TRANSPORT_TABLESPACE=n (the default), then Data Pump Export uses the TABLESPACES parameter. If original Export used TRANSPORT_TABLESPACE=y, then Data Pump Export uses the TRANSPORT_TABLESPACES parameter and only the metadata is exported.

TRIGGERS: If original Export used TRIGGERS=n, then Data Pump Export uses the EXCLUDE=TRIGGER parameter. If original Export used TRIGGERS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.
TTS_FULL_CHECK: If original Export used TTS_FULL_CHECK=y, then Data Pump Export uses the TRANSPORT_FULL_CHECK parameter. If original Export used TTS_FULL_CHECK=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Export default behavior.

VOLSIZE: When the original Export VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Export dump file format does not support tape devices. Therefore, this operation terminates with an error.
Using Original Import Parameters with Data Pump
Data Pump Import accepts original Import parameters when they map to a corresponding Data Pump parameter. Table 4-2
describes how Data Pump Import interprets original Import parameters.
Parameters that have the same name and functionality in both original
Import and Data Pump Import are not included in this table.
Table 4-2 How Data Pump Import Handles Original Import Parameters

BUFFER: This parameter is ignored.

CHARSET: This parameter was desupported several releases ago and should no longer be used. It will cause the Data Pump Import operation to abort.

COMMIT: This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed.

COMPILE: This parameter is ignored. Data Pump Import compiles procedures after they are created. A recompile can be executed if necessary for dependency reasons.

CONSTRAINTS: If original Import used CONSTRAINTS=n, then Data Pump Import uses the EXCLUDE=CONSTRAINT parameter. If original Import used CONSTRAINTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

DATAFILES: The Data Pump Import TRANSPORT_DATAFILES parameter is used.

DESTROY: If original Import used DESTROY=y, then Data Pump Import uses the REUSE_DATAFILES=y parameter. If original Import used DESTROY=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

FEEDBACK: The Data Pump Import STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the import job, as well as the rows being processed. In original Import, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Import, the status is given every so many seconds, as specified by STATUS.

FILE: Data Pump Import attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access. See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Import FILE parameter.

FILESIZE: This parameter is ignored because the information is already contained in the Data Pump dump file set.

FROMUSER: The Data Pump Import SCHEMAS parameter is used. If FROMUSER was used without TOUSER also being used, then import schemas that have the IMP_FULL_DATABASE role cause Data Pump Import to attempt to create the schema and then import that schema's objects. Import schemas that do not have the IMP_FULL_DATABASE role can only import their own schema from the dump file set.

GRANTS: If original Import used GRANTS=n, then Data Pump Import uses the EXCLUDE=OBJECT_GRANT parameter. If original Import used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

IGNORE: If original Import used IGNORE=y, then Data Pump Import uses the TABLE_EXISTS_ACTION=APPEND parameter. This causes the processing of table data to continue. If original Import used IGNORE=n, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

INDEXES: If original Import used INDEXES=n, then Data Pump Import uses the EXCLUDE=INDEX parameter. If original Import used INDEXES=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

INDEXFILE: The Data Pump Import SQLFILE={directory-object:}filename and INCLUDE=INDEX parameters are used. The same method and attempts made when looking for a directory object described for the FILE parameter also take place for the INDEXFILE parameter. If no directory object was specified on the original Import, then Data Pump Import uses the directory object specified with the DIRECTORY parameter.

LOG: Data Pump Import attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access. See "Management of File Locations in Data Pump Legacy Mode" for more information about how Data Pump handles the original Import LOG parameter. The contents of the log file will be those of a Data Pump Import operation. See "Log Files" for information about log file location and content.

RECORDLENGTH: This parameter is ignored because Data Pump handles issues about record length internally.

RESUMABLE: This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

RESUMABLE_NAME: This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

RESUMABLE_TIMEOUT: This parameter is ignored because this functionality is automatically provided for users who have been granted the IMP_FULL_DATABASE role.

ROWS: If original Import used ROWS=n, then Data Pump Import uses the CONTENT=METADATA_ONLY parameter. If original Import used ROWS=y, then Data Pump Import uses the CONTENT=ALL parameter.

SHOW: If SHOW=y is specified, then the Data Pump Import SQLFILE=[directory_object:]file_name parameter is used to write the DDL for the import operation to a file. Only the DDL (not the entire contents of the dump file) is written to the specified file. (Note that the output is not shown on the screen as it was in original Import.) The name of the file will be the file name specified on the DUMPFILE parameter (or on the original Import FILE parameter, which is remapped to DUMPFILE). If multiple dump file names are listed, then the first file name in the list is used. The file will be located in the directory object location specified on the DIRECTORY parameter or the directory object included on the DUMPFILE parameter. (Directory objects specified on the DUMPFILE parameter take precedence.)

STATISTICS: This parameter is ignored because statistics are always saved for tables as part of a Data Pump Import operation.

STREAMS_CONFIGURATION: This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

STREAMS_INSTANTIATION: This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

TABLESPACES: If original Import also specified TRANSPORT_TABLESPACE=n (the default), then Data Pump Import ignores the TABLESPACES parameter. If original Import also specified TRANSPORT_TABLESPACE=y, then Data Pump Import takes the names supplied for this TABLESPACES parameter and applies them to the Data Pump Import TRANSPORT_TABLESPACES parameter.

TOID_NOVALIDATE: This parameter is ignored. OIDs are no longer used for type validation.

TOUSER: The Data Pump Import REMAP_SCHEMA parameter is used. There may be more objects imported than with original Import. Also, Data Pump Import may create the target schema if it does not already exist. The FROMUSER parameter must also have been specified in original Import or the operation will fail.

TRANSPORT_TABLESPACE: The TRANSPORT_TABLESPACE parameter is ignored, but if you also specified the DATAFILES parameter, then the import job continues to load the metadata. If the DATAFILES parameter is not specified, then an ORA-39002: invalid operation error message is returned.

TTS_OWNERS: This parameter is ignored because this information is automatically stored in the Data Pump dump file set.

VOLSIZE: When the original Import VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Import dump file format does not support tape devices. Therefore, this operation terminates with an error.
Management of File Locations in Data Pump Legacy Mode
Original Export and Import and Data Pump Export and Import differ on
where dump files and log files can be written to and read from because
the original version is client-based and Data Pump is server-based.
Original Export and Import use the FILE and LOG
parameters to specify dump file and log file names, respectively. These
file names always refer to files local to the client system and they
may also contain a path specification.
Data Pump Export and Import use the DUMPFILE and LOGFILE
parameters to specify dump file and log file names, respectively. These
file names always refer to files local to the server system and cannot
contain any path information. Instead, a directory object is used to
indirectly specify path information. The path value defined by the
directory object must be accessible to the server. The directory object
is specified for a Data Pump job through the DIRECTORY parameter. It is also possible to prepend a directory object to the file names passed to the DUMPFILE and LOGFILE
parameters. For privileged users, Data Pump supports the use of a
default directory object if one is not specified on the command line.
This default directory object, DATA_PUMP_DIR , is set up at installation time.
If Data Pump legacy mode is enabled and the original Export FILE= filespec parameter and/or LOG= filespec parameter are present on the command line, then the following rules of precedence are used to determine a file's location:
Note:
If the FILE parameter and LOG parameter are both present on the command line, then the rules of precedence are applied separately to each parameter.
Also, when a mix of original Export/Import and Data Pump
Export/Import parameters are used, separate rules apply to them. For
example, suppose you have the following command:
expdp system FILE=/user/disk/foo.dmp LOGFILE=foo.log DIRECTORY=dpump_dir
The Data Pump legacy mode file management rules, as explained in this section, would apply to the FILE parameter. The normal (that is, non-legacy mode) Data Pump file management rules, as described in "Default Locations for Dump, Log, and SQL Files", would apply to the LOGFILE parameter.
1. If a path location is specified as part of the file specification,
then Data Pump attempts to look for a directory object accessible to the
schema executing the export job whose path location matches the path
location of the file specification. If such a directory object cannot be
found, then an error is returned. For example, assume that a
server-based directory object named USER_DUMP_FILES has been defined with a path value of '/disk1/user1/dumpfiles/' and that read and write access to this directory object has been granted to the hr schema. The following command causes Data Pump to look for a server-based directory object whose path value contains '/disk1/user1/dumpfiles/' and to which the hr schema has been granted read and write access:
expdp hr FILE=/disk1/user1/dumpfiles/hrdata.dmp
In this case, Data Pump uses the directory object USER_DUMP_FILES . The path value, in this example '/disk1/user1/dumpfiles/' , must refer to a path on the server system that is accessible to the Oracle Database.
If a path location is specified as part of the file specification, then any directory object provided using the DIRECTORY parameter is ignored. For example, if the following command is issued, then Data Pump does not use the DPUMP_DIR directory object for the file parameter, but instead looks for a server-based directory object whose path value contains '/disk1/user1/dumpfiles/' and to which the hr schema has been granted read and write access:
expdp hr FILE=/disk1/user1/dumpfiles/hrdata.dmp DIRECTORY=dpump_dir
2. If no path location is specified as part of the file specification, then the directory object named by the DIRECTORY parameter is used. For example, if the following command is issued, then Data Pump applies the path location defined for the DPUMP_DIR directory object to the hrdata.dmp file:
expdp hr FILE=hrdata.dmp DIRECTORY=dpump_dir
3. If no path location is specified as part of the file specification and no directory object is named by the DIRECTORY parameter, then Data Pump does the following, in the order shown:
a. Data Pump looks for the existence of a directory object of the form DATA_PUMP_DIR_schema_name, where schema_name
is the schema that is executing the Data Pump job. For example, the
following command would cause Data Pump to look for the existence of a
server-based directory object named DATA_PUMP_DIR_HR :
expdp hr FILE=hrdata.dmp
The hr schema also must have been granted read and write
access to this directory object. If such a directory object does not
exist, then the process moves to step b.
b. Data Pump looks for the existence of the client-based environment variable DATA_PUMP_DIR. For instance, assume that a server-based directory object named DUMP_FILES1 has been defined and the hr schema has been granted read and write access to it. Then on the client system, the environment variable DATA_PUMP_DIR can be set to point to DUMP_FILES1 as follows:
setenv DATA_PUMP_DIR DUMP_FILES1
expdp hr FILE=hrdata.dmp
Data Pump then uses the server-based directory object DUMP_FILES1 for the hrdata.dmp file.
If a client-based environment variable DATA_PUMP_DIR does not exist, then the process moves to step c.
c. If the schema that is executing the Data Pump job has DBA privileges, then the default Data Pump directory object, DATA_PUMP_DIR,
is used. This default directory object is established at installation
time. For example, the following command causes Data Pump to attempt to
use the default DATA_PUMP_DIR directory object, assuming that system has DBA privileges:
expdp system FILE=hrdata.dmp
Adjusting Existing Scripts for Data Pump Log Files and Errors
Data Pump legacy mode requires that you review and update your
existing scripts written for original Export and Import because of
differences in file format and error reporting.
Log Files
Data Pump Export and Import do not generate log files in the same
format as those created by original Export and Import. Any scripts you
have that parse the output of original Export and Import must be updated
to handle the log file format used by Data Pump Export and Import. For
example, the message Successfully Terminated does not appear in Data Pump log files.
Error Cases
Data Pump Export and Import may not produce the same errors as those
generated by original Export and Import. For example, if a parameter
that is ignored by Data Pump Export would have had an out-of-range value
in original Export, then an informational message is written to the log
file stating that the parameter is being ignored. No value checking is
performed, therefore no error message is generated.
Exit Status
Data Pump Export and Import have enhanced exit status values to allow
scripts to better determine the success or failure of export and import
jobs. Any scripts that look at the exit status should be reviewed and
updated, if necessary.
posted Aug 5, 2011, 4:59 AM by Sachchida Ojha
The following is a list of new features or enhancements provided with Oracle Database 11g Release 2 (11.2.0.1):
New Oracle Grid Infrastructure Installation Option
Oracle Database 11g Release 2 introduces
the Oracle Grid Infrastructure installation. For single instance
databases, Oracle Grid Infrastructure includes Oracle Automatic Storage
Management (Oracle ASM), the listener, and Oracle Restart. Oracle
Restart is a new feature that provides the ability to monitor, manage,
and automatically restart the components of the Oracle Database environment,
including the Oracle Database instance, the Oracle Automatic Storage Management
instance, and listeners, if they fail. In a clustered environment, Oracle Grid
Infrastructure includes Oracle Clusterware, Oracle ASM, and the
listener.
To use Oracle Grid Infrastructure for a standalone server, you must
install the Oracle software from the Oracle Grid Infrastructure media
before you install the database.
New Desktop and Server Class Options
Oracle Database 11g Release 2 introduces a
new option that enables you to specify the type of system on which the
database is installed. If you are installing on a laptop or a desktop,
then select the Desktop Class option; otherwise, select the Server Class
option to install on a server. These options are available on the
System Class screen.
There is no difference in the software that gets installed with either
option, but the Desktop Class option installs a single-instance database
without the advanced configuration options.
Daylight Saving Time Upgrade of Timestamp with Timezone Data Type
When time zone version files are updated due to daylight saving time
changes, TIMESTAMP WITH TIME ZONE (TSTZ) data could become stale. In
previous releases, database administrators ran the SQL script utltzuv2.sql
to detect TSTZ data affected by the time zone version changes and then
had to perform extensive manual procedures to update the TSTZ data.
With this release, TSTZ data is updated transparently with minimal
manual procedures using newly provided DBMS_DST PL/SQL packages. In
addition, there is no longer a need for clients to patch their time zone
data files.
SYSASM Privilege
Starting with Oracle Database 11g Release 2 (11.2), Oracle ASM administration must be done with the SYSASM privilege. The SYSASM privilege also can be granted using password authentication on the Oracle ASM instance.
You can designate OPERATOR privileges (a subset of the SYSASM privileges, including starting and stopping Oracle ASM) to members of the OSOPER for Oracle ASM operating system group.
Using the SYSASM privilege for ASM administration
creates a clear division of responsibility between ASM administration
and database administration. It also provides the optional capability to
prevent different databases using the same storage from accidentally
overwriting each other's files.
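A minimal sketch of using the privilege on the Oracle ASM instance (the user name is illustrative):

$ sqlplus / as sysasm

SQL> CREATE USER asm_admin IDENTIFIED BY asm_pwd;
SQL> GRANT SYSASM TO asm_admin;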
Fixup Scripts and Prerequisite Checks
Starting with Oracle Database 11g Release
2 (11.2), the Oracle Universal Installer (OUI) detects when minimum
requirements for an installation are not completed, and creates scripts,
called fixup scripts, to resolve many incomplete system configuration
requirements. If OUI detects an incomplete task, then click the Fix
& Check Again button to generate the fixup script.
For Oracle Clusterware, you also can have Cluster Verification Utility (CVU) generate fixup scripts before the installation.
The fixup script is generated during the installation. You are
prompted to run the script as root in a separate terminal session. When
you run the script, it sets some of the system parameters to
Oracle-recommended values, if necessary, and completes other operating
system configuration tasks.
Database Smart Flash Cache
Database Smart Flash Cache feature is a transparent extension of the
database buffer cache using solid state device (SSD) technology. The SSD
acts as a Level 2 cache to the (Level 1) Oracle system global area
(SGA).
SSD storage is faster than disk storage, and cheaper than RAM.
Database Smart Flash Cache with SSD storage gives you the ability to
greatly improve the performance of your Oracle databases by reducing the
amount of disk I/O at a lower cost than adding an equivalent amount of
RAM.
Database Smart Flash Cache is supported on Oracle Solaris and Oracle Linux only.
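A hedged sketch of how the feature is typically enabled through the DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE initialization parameters (the device path and size are examples only; an instance restart is required):

SQL> ALTER SYSTEM SET db_flash_cache_file = '/dev/flash_device_1' SCOPE = SPFILE;
SQL> ALTER SYSTEM SET db_flash_cache_size = 32G SCOPE = SPFILE;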
New Tool to Configure Custom Installation Options
Oracle Universal Installer no longer provides the custom installation option for individual components. Use the chopt tool, a command-line utility located in the ORACLE_HOME/bin directory, to configure the database options.
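For example, a hedged sketch (shut down the database instance first; the available option names vary by release):

$ chopt enable partitioning
$ chopt disable olap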
Deinstallation Tool
Use the new deinstallation tool (deinstall ) available as
an Oracle Technology Network download (before installation) and in the
Oracle home directory (after installation) to remove Oracle Database
software.
See "Removing Oracle Software Using the Deinstallation Tool"
Intelligent Data Placement
The Intelligent Data Placement feature enables you to specify disk
regions on Oracle ASM disks to ensure that frequently accessed data is
placed on the outermost (hot) tracks which provide higher performance.
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
is a new multiplatform, scalable file system, and storage management
design that extends Oracle ASM technology in both single instance and
cluster configurations. Additionally, Oracle ACFS provides snapshot
functionality for a point in time copy of an Oracle ACFS system.
The software required for Oracle Automatic Storage Management Cluster
File System is installed with the Oracle Grid Infrastructure
installation.
Oracle Data Pump Export and Oracle Data Pump Import
Data Pump provides a legacy mode in which you can use original export
and import parameters when performing Oracle Data Pump export and
import operations.
Use Oracle Restart to Automatically Restart Your Database
Oracle Restart is a new feature included in this release to enhance
the availability of Oracle databases in a single-instance environment.
If you install Oracle Restart, and there is a temporary failure of any
part of the Oracle Database software stack, including the database,
listener, and Oracle ASM instance, Oracle Restart automatically restarts
the failed component. In addition, Oracle Restart starts all these
components when the database host computer is restarted. The components
are started in the proper order, taking into consideration the
dependencies among components.
New Method of Installing Oracle Automatic Storage Management
In past releases, Oracle ASM was installed as part of the Oracle Database installation. With Oracle Database 11g
Release 2 (11.2), Oracle ASM is installed when you install the Oracle
Grid Infrastructure components and shares an Oracle home with Oracle
Clusterware when installed in a cluster such as with Oracle Real
Application Cluster (Oracle RAC) or with Oracle Restart on a single
instance database.
If you want to upgrade an existing Oracle ASM, then you must upgrade
Oracle ASM by running an Oracle Grid Infrastructure upgrade. If you do
not have Oracle ASM installed and you want to use Oracle ASM as your
storage option, then you must complete an Oracle Grid Infrastructure
installation before you start your Oracle Database installation.
SRVCTL Support for a Single-Instance Database in a Cluster
SRVCTL was enhanced to support single-instance databases with Oracle
Restart on standalone servers and on clusters with Oracle Clusterware.
SRVCTL is a command-line interface used to manage Oracle processes
(database instance, listener, Oracle ASM instance) when using Oracle
Restart. With SRVCTL, you can manage the Oracle Restart configuration,
see the status of processes managed by Oracle Restart, and start or stop
processes such as the Oracle Database.
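A few illustrative SRVCTL commands under Oracle Restart (the database name orcl is only an example):

$ srvctl status database -d orcl
$ srvctl stop database -d orcl
$ srvctl start database -d orcl
$ srvctl status listener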
Deprecated in Oracle Database 11g Release 2 (11.2)
The following are not supported or not available anymore with Oracle Database 11g Release 2:
posted Aug 5, 2011, 4:56 AM by Sachchida Ojha
Oracle Data Pump
was introduced in Oracle 10g. This article provides an overview of the
main Data Pump enhancements in Oracle Database 11g Release 1.
COMPRESSION
The COMPRESSION parameter allows you to decide what, if anything, you wish to compress in your export. The syntax is shown below.
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
The available options are:
- ALL: Both metadata and data are compressed.
- DATA_ONLY: Only data is compressed.
- METADATA_ONLY: Only metadata is compressed. This is the default setting.
- NONE: Nothing is compressed.
Here is an example of the COMPRESSION parameter being used.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
compression=all
The COMPATIBLE initialization parameter should be set to "11.0.0" or higher to use these options, except for the METADATA_ONLY option, which is available with a COMPATIBLE setting of "10.2".
Encryption Parameters
Data pump encryption is an Enterprise Edition feature, so the parameters
described below are only relevant for Enterprise Edition installations.
In addition, the COMPATIBLE initialization parameter must be set to "11.0.0" or higher to use these features.
ENCRYPTION and ENCRYPTION_PASSWORD
The use of encryption is controlled by a combination of the ENCRYPTION or ENCRYPTION_PASSWORD parameters. The syntax for the ENCRYPTION parameter is shown below.
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
The available options are:
- ALL: Both metadata and data are encrypted.
- DATA_ONLY: Only data is encrypted.
- ENCRYPTED_COLUMNS_ONLY: Only encrypted columns are written to the dump file in an encrypted format.
- METADATA_ONLY: Only metadata is encrypted.
- NONE: Nothing is encrypted.
If neither the ENCRYPTION nor the ENCRYPTION_PASSWORD parameter is set, it is assumed the required level of encryption is NONE. If only the ENCRYPTION_PASSWORD parameter is specified, it is assumed the required level of encryption is ALL. Here is an example of these parameters being used.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
encryption=all encryption_password=password
ENCRYPTION_ALGORITHM
The ENCRYPTION_ALGORITHM parameter specifies the encryption
algorithm to be used during the export, with the default being
"AES128". The syntax is shown below.
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }
The ENCRYPTION_ALGORITHM parameter must be used in conjunction with the ENCRYPTION or ENCRYPTION_PASSWORD parameters, as shown below.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
encryption=all encryption_password=password encryption_algorithm=AES256
ENCRYPTION_MODE
The ENCRYPTION_MODE parameter specifies the type of security used during export and import operations. The syntax is shown below.
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }
The allowable values and their default settings are explained below:
- DUAL: This mode creates a dump file that can be imported using an Oracle Encryption Wallet, or the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there is an open wallet.
- PASSWORD: This mode creates a dump file that can only be imported using the ENCRYPTION_PASSWORD specified during the export operation. This is the default setting if the ENCRYPTION_PASSWORD parameter is set and there isn't an open wallet.
- TRANSPARENT: This mode creates an encrypted dump file using an open Oracle Encryption Wallet. If the ENCRYPTION_PASSWORD is specified while using this mode, an error is produced. This is the default setting if only the ENCRYPTION parameter is set.
The ENCRYPTION_MODE requires either the ENCRYPTION or ENCRYPTION_PASSWORD parameter to be specified.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
encryption=all encryption_password=password encryption_mode=password
TRANSPORTABLE
The TRANSPORTABLE parameter is similar to the TRANSPORT_TABLESPACES
parameter available previously in that it only exports and imports
metadata about a table, relying on you to manually transfer the relevant
tablespace datafiles. The export operation lists the tablespaces that
must be transferred. The syntax is shown below.
TRANSPORTABLE = {ALWAYS | NEVER}
The value ALWAYS turns on the transportable mode, while the default value of NEVER indicates this is a regular export/import.
The following restrictions apply during exports using the TRANSPORTABLE parameter:
- This parameter is only applicable during table-level exports.
- The user performing the operation must have the EXP_FULL_DATABASE privilege.
- Tablespaces containing the source objects must be read-only.
- The COMPATIBLE initialization parameter must be set to 11.0.0 or higher.
- The default tablespace of the user performing the export must not be the same as any of the tablespaces being transported.
Some extra restrictions apply during import operations:
- The NETWORK_LINK parameter must be specified during the import operation. This parameter is set to a valid database link to the source schema.
- The schema performing the import must have both EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
- The TRANSPORT_DATAFILES parameter is used to identify the datafiles holding the table data.
Examples of the export and import operations are shown below.
expdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
transportable=ALWAYS
impdp system tables=TEST1.TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
transportable=ALWAYS network_link=DB11G transport_datafiles='/u01/oradata/DB11G/test01.dbf'
PARTITION_OPTIONS
The PARTITION_OPTIONS parameter determines how partitions will be handled during export and import operations. The syntax is shown below.
PARTITION_OPTIONS={none | departition | merge}
The allowable values are:
- NONE: The partitions are created exactly as they were on the system the export was taken from.
- DEPARTITION: Each partition and sub-partition is created as a separate table, named using a combination of the table and (sub-)partition name.
- MERGE: Combines all partitions into a single table.
The NONE and MERGE options are not possible if the export was done using the TRANSPORTABLE
parameter with a partition or subpartition filter. If there are any
grants on objects being departitioned, an error message is generated and
the objects are not loaded.
expdp test/test directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log tables=test.tab1
partition_options=merge
REUSE_DUMPFILES
The REUSE_DUMPFILES parameter can be used to prevent errors being issued if the export attempts to write to a dump file that already exists.
REUSE_DUMPFILES={Y | N}
When set to "Y", any existing dumpfiles will be overwritten. When the
default values of "N" is used, an error is issued if the dump file
already exists.
expdp test/test schemas=TEST directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
reuse_dumpfiles=y
REMAP_TABLE
This parameter allows a table to be renamed during the import operations performed using the TRANSPORTABLE method. It can also be used to alter the base table name used during PARTITION_OPTIONS imports. The syntax is shown below.
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
An example is shown below.
impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
remap_table=TEST.TAB1:TAB2
Existing tables are not renamed, only tables created by the import.
DATA_OPTIONS
SKIP_CONSTRAINT_ERRORS
During import operations using the external table access method, setting the DATA_OPTIONS parameter to SKIP_CONSTRAINT_ERRORS
allows load operations to continue through non-deferred constraint
violations, with any violations logged for future reference. Without
this, the default action would be to roll back the whole operation. The
syntax is shown below.
DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
An example is shown below.
impdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=impdpTEST.log
data_options=SKIP_CONSTRAINT_ERRORS
This parameter has no impact on deferred constraints, which still cause
the operation to be rolled back once a violation is detected. If the
object being loaded has existing unique indexes or constraints, the APPEND hint will not be used, which may adversely affect performance.
XML_CLOBS
During an export, if XMLTYPE columns are currently stored as CLOBs, they
will automatically be exported as uncompressed CLOBs. If on the other
hand they are currently stored as any combination of object-relational,
binary or CLOB formats, they will be exported in compressed format by
default. Setting the DATA_OPTIONS parameter to XML_CLOBS
specifies that all XMLTYPE columns should be exported as uncompressed
CLOBs, regardless of the default action. The syntax is shown below.
DATA_OPTIONS=XML_CLOBS
An example is shown below.
expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
version=11.1 data_options=XML_CLOBS
Both the export and import must use the same XML schema and the job version must be set to 11.0.0 or higher.
REMAP_DATA
During export and import operations, the REMAP_DATA
parameter allows you to associate a remap packaged function that will
accept the column value as a parameter and return a modified version of
the data. The syntax is shown below.
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
This can be used to mask sensitive data during export and import
operations by replacing the original data with random alternatives. The
mapping is done on a column-by-column basis, as shown below.
expdp test/test tables=TAB1 directory=TEST_DIR dumpfile=TEST.dmp logfile=expdpTEST.log
remap_data=tab1.col1:remap_pkg.remap_col1
remap_data=tab1.col2:remap_pkg.remap_col2
The remapping function must return the same datatype as the source column and it must not perform a commit or rollback.
posted Aug 2, 2011, 5:19 PM by Sachchida Ojha
Many times a DBA wants to know where a SQL statement is in its execution plan and where the time is being spent. There are a few ways to find out this information, but an 11g new feature makes gathering this information extremely easy. Oracle 11g Real-Time SQL Monitoring allows you to monitor the performance of SQL statements while they are executing as well as see the breakdown of time and resources used for recently completed statements. It is on by default when STATISTICS_LEVEL is set to ALL or TYPICAL (the default value) and monitors statements that consume more than 5 seconds of CPU or IO time, as well as any parallel execution (PQ, PDML, PDDL). One can override the default actions by using the MONITOR or NO_MONITOR hint. The 11g Documentation has a text version of a SQL Monitor Report, but the report output can be HTML, text, or XML.
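A quick way to pull such a report from SQL*Plus is DBMS_SQLTUNE.REPORT_SQL_MONITOR; a hedged sketch (the sql_id value is only an example):

SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id => 'an05rsj1up1k5',  -- example sql_id
         type   => 'TEXT') AS report
  FROM dual;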
posted Jul 26, 2011, 8:55 AM by Sachchida Ojha
Automatic Memory Management is introduced in 11g. It enables automatic tuning of the PGA and SGA with the use of two new parameters named MEMORY_MAX_TARGET and MEMORY_TARGET. The most important SGA components are the following:
Component: Database Buffer Cache
Description: The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All users concurrently connected to the instance share access to the database buffer cache.
Parameters: DB_CACHE_SIZE, DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE

Component: Redo Log Buffer
Description: The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary.
Parameter: LOG_BUFFER

Component: Shared Pool
Description: The shared pool portion of the SGA contains the library cache, the dictionary cache, the result cache, buffers for parallel execution messages, and control structures.
Parameters: SHARED_POOL_SIZE, SHARED_POOL_RESERVED_SIZE, RESULT_CACHE_SIZE *

Component: Large Pool
Description: Used for allocating session memory for shared server, Oracle XA, or parallel query buffers or for RMAN.
Parameter: LARGE_POOL_SIZE

Component: Java Pool
Description: Java pool memory is used in server memory for all session-specific Java code and data within the JVM.
Parameter: JAVA_POOL_SIZE

Component: Streams Pool
Description: The streams pool is used exclusively by Oracle Streams. The Streams pool stores buffered queue messages, and it provides memory for Oracle Streams capture processes and apply processes.
Parameter: STREAMS_POOL_SIZE
* RESULT_CACHE_MAX_SIZE is a new component introduced as part of the 11g
memory architecture. The result cache is composed of the SQL query result
cache and the PL/SQL function result cache, which share the same
infrastructure. Results of queries and query fragments can be cached in
memory in the SQL query result cache. The database can then use cached
results to answer future executions of these queries and query fragments.
Similarly, PL/SQL function results can also be cached.
The RESULT_CACHE_MODE initialization parameter determines the SQL query
result cache behavior; the possible values are MANUAL and FORCE.
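A hedged sketch of the two modes (the table and query are illustrative):

-- MANUAL: results are cached only when the RESULT_CACHE hint is used
SQL> ALTER SYSTEM SET result_cache_mode = MANUAL;
SQL> SELECT /*+ RESULT_CACHE */ deptno, COUNT(*) FROM scott.emp GROUP BY deptno;

-- FORCE: all eligible query results are cached
SQL> ALTER SYSTEM SET result_cache_mode = FORCE;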
Oracle Database 11g
supports various memory management methods, which are chosen by
initialization parameter settings. Oracle recommends that you enable the
automatic memory management method.
- Automatic Memory Management – For Both the SGA and Instance PGA
- Automatic Shared Memory Management – For the SGA
- Manual Shared Memory Management – For the SGA
- Automatic PGA Memory Management – For the Instance PGA
- Manual PGA Memory Management – For the Instance PGA
1. Automatic Memory Management – For Both the SGA and Instance PGA

Beginning
with Oracle Database 11g, Oracle Database can manage the SGA memory and
instance PGA memory completely automatically. You designate only the
total memory size to be used by the instance, and Oracle Database
dynamically exchanges memory between the SGA and the instance PGA as
needed to meet processing demands. This capability is referred to as
automatic memory management. With this memory management method, the
database also dynamically tunes the sizes of the individual SGA
components and the sizes of the individual PGAs. To achieve this, two new parameters have been introduced named MEMORY_MAX_TARGET and MEMORY_TARGET.
To do so (on most platforms), you set only a target memory size
initialization parameter (MEMORY_TARGET) and optionally a maximum memory
size initialization parameter (MEMORY_MAX_TARGET).
If you create your database with Database
Configuration Assistant (DBCA) and choose the basic installation option,
automatic memory management is enabled. If you choose
advanced installation, Database Configuration Assistant (DBCA) enables
you to select automatic memory management.
Switching to Automatic Memory Management

1) Check the current values configured for SGA_TARGET and PGA_AGGREGATE_TARGET:
SQL> SHOW PARAMETER TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 0
memory_target                        big integer 0
pga_aggregate_target                 big integer 200M
sga_target                           big integer 500M
Add the values of pga_aggregate_target and sga_target; in our case they sum to 700M.

2) Decide on the maximum amount of memory that you would want to allocate to the database, which will determine the maximum value for the sum of the SGA and instance PGA sizes. In our case we decide to set it to 808M.

3) Change the parameters in the initialization parameter file.

Using Spfile
============
SQL> ALTER SYSTEM SET MEMORY_MAX_TARGET = 808M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET MEMORY_TARGET = 808M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SGA_TARGET = 0 SCOPE = SPFILE;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE = SPFILE;

Using Pfile
===========
If you have started the instance with a Pfile, then edit the pfile and set the parameters manually:

MEMORY_MAX_TARGET = 808M
MEMORY_TARGET = 808M
SGA_TARGET = 0
PGA_AGGREGATE_TARGET = 0
In case you do not specify any value for MEMORY_MAX_TARGET and only use
MEMORY_TARGET, then the database automatically sets MEMORY_MAX_TARGET to the
value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET
and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter
defaults to zero. After startup, you can then dynamically change
MEMORY_TARGET to a non-zero value, provided that it does not exceed the
value of MEMORY_MAX_TARGET.
MEMORY_MAX_TARGET is a static parameter, i.e., it cannot be changed
dynamically; the instance has to be bounced to modify its value, so ensure
that you set it to an appropriate value.

4) Shut down and start up the database:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area  845348864 bytes
Fixed Size                  1303188 bytes
Variable Size             469765484 bytes
Database Buffers          369098752 bytes
Redo Buffers                5181440 bytes

SQL> show parameter target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
archive_lag_target                   integer     0
db_flashback_retention_target        integer     1440
fast_start_io_target                 integer     0
fast_start_mttr_target               integer     0
memory_max_target                    big integer 808M
memory_target                        big integer 808M
pga_aggregate_target                 big integer 0
sga_target                           big integer 0
The
preceding steps instruct you to set SGA_TARGET and PGA_AGGREGATE_TARGET
to zero so that the sizes of the SGA and instance PGA are tuned up and
down as required, without restrictions. You can omit the statements that
set these parameter values to zero and leave either or both of the
values as positive numbers. In this case, the values act as minimum
values for the sizes of the SGA or instance PGA.
Note: If you set any individual memory parameter to a value higher than MEMORY_TARGET, you get the following error. For example, setting SGA_MAX_SIZE to 900M results in the following:

SQL> startup
ORA-00844: Parameter not taking MEMORY_TARGET into account, see alert log for more information

00844, 00000, "Parameter not taking MEMORY_TARGET into account, see alert log for more information"
// *Cause: The parameter was larger than MEMORY_TARGET.
// *Action: Set the parameter to a lower value than MEMORY_TARGET.

Monitoring and Tuning Automatic Memory Management

The dynamic performance view V$MEMORY_DYNAMIC_COMPONENTS shows the current
dynamic performance view V$MEMORY_DYNAMIC_COMPONENTS shows the current
sizes of all dynamically tuned memory components, including the total
sizes of the SGA and instance PGA. The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter.
SQL> select * from v$memory_target_advice order by memory_size;
You can also use V$MEMORY_RESIZE_OPS, which keeps a circular history buffer of the last 800 completed memory resize requests.
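For example, a quick sketch of how memory is currently distributed across the dynamically tuned components (COMPONENT and CURRENT_SIZE are standard columns of this view):

SQL> SELECT component, current_size/1024/1024 AS current_mb
       FROM v$memory_dynamic_components
      WHERE current_size > 0
      ORDER BY current_size DESC;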
2. Automatic Shared Memory Management – For the SGA
If you want to exercise more direct control over the size of the SGA, you can disable automatic memory management and enable automatic shared memory management. This feature was introduced in 10g with a parameter known as SGA_TARGET. When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of the current workload without requiring any additional configuration. Please refer to the following document for setting SGA_TARGET: Note 295626.1 - How To Use Automatic Shared Memory Management (ASMM) In Oracle10g.
If you have enabled Automatic Memory Management, then to switch to Automatic Shared Memory Management, follow the procedure below:
SQL> Alter system set MEMORY_TARGET=0 scope=both;
SQL> Alter system set SGA_TARGET=500M scope=both;
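After the switch, a minimal sketch of verifying that ASMM is resizing individual SGA components (the view and columns shown are standard; the 500M target above is illustrative):

SQL> SELECT component, current_size/1024/1024 AS current_mb, oper_count
       FROM v$sga_dynamic_components
      ORDER BY current_size DESC;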
3. Manual Shared Memory Management – For the SGA
If you want complete control of individual SGA component sizes, you can disable both automatic memory management and automatic shared memory management. In this mode, you need to set the sizes of several individual SGA components, thereby determining the overall SGA size. You then manually tune these individual SGA components on an ongoing basis. In this case you set SGA_TARGET and MEMORY_TARGET to 0 and set values for the other SGA components up to the value of SGA_MAX_SIZE, as sketched below. Please refer to Note 148495.1 - Dynamic SGA.
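A minimal sketch of a manual SGA configuration (all sizes below are illustrative only, not recommendations):

SQL> ALTER SYSTEM SET MEMORY_TARGET = 0 SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SGA_TARGET = 0 SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SGA_MAX_SIZE = 800M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 200M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 400M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET LARGE_POOL_SIZE = 50M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET JAVA_POOL_SIZE = 50M SCOPE = SPFILE;
-- Restart the instance so the static parameter SGA_MAX_SIZE takes effect
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP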
4. Automatic PGA Memory Management – For the Instance PGA
When using automatic memory management, PGA memory is allocated based on the value of MEMORY_TARGET. If you enable automatic shared memory management or manual shared memory management, you also implicitly enable automatic PGA memory management.
Automatic or manual PGA memory management is controlled by the initialization parameter WORKAREA_SIZE_POLICY, which is a session- and system-level parameter that can take only two values: MANUAL or AUTO. The default is AUTO. With automatic PGA memory management, you set a target size for the instance PGA by defining a value for the parameter PGA_AGGREGATE_TARGET; the sizing of SQL work areas is then automatic and all *_AREA_SIZE initialization parameters are ignored for these sessions. This feature is available from 9i onward. At any given time, the total amount of PGA memory available to active work areas on the instance is automatically derived from the parameter PGA_AGGREGATE_TARGET. This amount is set to the value of PGA_AGGREGATE_TARGET minus the PGA memory allocated for other purposes (for example, session memory). The resulting PGA memory is then allotted to individual active work areas based on their specific memory requirements. Please refer to the following document for more details on Automatic PGA Memory Management: Note 223730.1 - Automatic PGA Memory Management in 9i and 10g.
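A minimal sketch of enabling automatic PGA memory management under ASMM or manual shared memory management (the 200M target is illustrative):

SQL> ALTER SYSTEM SET WORKAREA_SIZE_POLICY = AUTO SCOPE = BOTH;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 200M SCOPE = BOTH;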
5. Manual PGA Memory Management – For the Instance PGA
If you wish to manually specify the maximum work area size for each type of SQL operator (such as sort or hash-join), you can enable manual PGA memory management. Set WORKAREA_SIZE_POLICY to MANUAL and also specify values for the *_AREA_SIZE parameters such as SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE, as sketched below. Although Oracle Database 11g supports this manual PGA memory management method, Oracle strongly recommends that you leave automatic PGA memory management enabled.
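A minimal sketch of manual PGA memory management at the session level (the work-area sizes, given in bytes, are illustrative only):

-- Per session; WORKAREA_SIZE_POLICY can also be set system-wide
SQL> ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL;
SQL> ALTER SESSION SET SORT_AREA_SIZE = 1048576;
SQL> ALTER SESSION SET HASH_AREA_SIZE = 2097152;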
The table below summarizes the various memory management methods:

Memory Management Mode | For | You Set | Oracle Database Automatically Tunes
Automatic memory management (AMM) | SGA and PGA | Total memory target size for the Oracle instance (MEMORY_TARGET); (optional) maximum memory size for the Oracle instance (MEMORY_MAX_TARGET) | Total SGA size, SGA component sizes, instance PGA size, individual PGA sizes
Automatic shared memory management (ASMM) (AMM disabled) | SGA | SGA target size (SGA_TARGET); (optional) SGA maximum size (SGA_MAX_SIZE) | SGA component sizes
Manual shared memory management (AMM and ASMM disabled) | SGA | Shared pool size (SHARED_POOL_SIZE); buffer cache size (DB_CACHE_SIZE or DB_BLOCK_BUFFERS); Java pool size (JAVA_POOL_SIZE); large pool size (LARGE_POOL_SIZE); (optional) SGA maximum size (SGA_MAX_SIZE) | None
Automatic PGA memory management | PGA | Instance PGA target size (PGA_AGGREGATE_TARGET) | Individual PGA sizes
Manual PGA memory management (not recommended) | PGA | Maximum work area size for each type of SQL operator | None
The Automatic Memory Management (AMM) feature uses a background process named Memory Manager (MMAN). This process was introduced in 10g to assist Automatic Shared Memory Management (ASMM) using SGA_TARGET. MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components. The SGA Memory Broker keeps track of the sizes of the components and pending resize operations.
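A minimal sketch of confirming that the MMAN background process exists on the instance (V$BGPROCESS is a standard view):

SQL> SELECT name, description, paddr
       FROM v$bgprocess
      WHERE name = 'MMAN';
-- A non-zero PADDR indicates the process is currently running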
posted Jul 7, 2011, 6:36 AM by Sachchida Ojha
Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2). Note the following changes with the new patch set packaging:
- New installations consist of installing the most recent patch set, rather than installing a base release and then upgrading to a patch release.
- Direct upgrades from previous releases to the most recent patch set are supported.
- Out-of-place patch set upgrades are recommended, in which you install the patch set into a new, separate Oracle home. In-place upgrades are supported, but not recommended.
In-Place Upgrade
This feature enables you to upgrade an existing installation of Oracle Database to Oracle Database 11g Release 2 (11.2.0.2) in the same Oracle home directory by replacing the existing database files. This patch set application requires more downtime and is not recommended, although it requires less disk space. Note: If you select an in-place upgrade to upgrade Oracle Database to the latest version, then you must:
- Review "Optionally Performing an In-Place Upgrade (Into the Same Oracle Home)" in Oracle Database Upgrade Guide.
- Take a complete backup of your Oracle home directory before you perform the upgrade.
Out-of-Place Upgrade
This feature enables you to install Oracle Database 11g Release 2 (11.2.0.2) in a new, separate Oracle home directory. After the installation is complete, the data from the database in the old Oracle home directory is migrated to the database in the new Oracle home directory. The out-of-place upgrade process is similar to an Oracle database upgrade from one version to another and is the recommended type. The benefits of this upgrade type are that the patch set application requires much less downtime and is safer if the patch set application must be aborted. This upgrade type requires more disk space to accommodate two Oracle home directories simultaneously.
posted Oct 4, 2010, 1:53 PM by Sachchida Ojha
Oracle 11g allows you to allocate one
chunk of memory, which Oracle uses to dynamically manage both the SGA
and PGA.
Automatic memory management is configured using two new initialization parameters:
MEMORY_TARGET: The amount of shared memory available for Oracle to use when dynamically controlling the SGA and PGA. This parameter is dynamic, so the total amount of memory available to Oracle can be increased or decreased, provided it does not exceed the MEMORY_MAX_TARGET limit. The default value is "0".
MEMORY_MAX_TARGET: This defines the maximum size that MEMORY_TARGET can be increased to without an instance restart. If MEMORY_MAX_TARGET is not specified, it defaults to the MEMORY_TARGET setting.
Enabling automatic memory management:
Assuming you want to use a similar amount of memory to your current settings you will need to use the following calculation.
MEMORY_TARGET = SGA_TARGET + GREATEST(PGA_AGGREGATE_TARGET, "maximum PGA allocated")
The following queries show you how to display the relevant information and how to combine it in a single statement to calculate the required value.
SELECT name, value
  FROM v$parameter
 WHERE name IN ('pga_aggregate_target', 'sga_target')
UNION
SELECT 'maximum PGA allocated' AS name, TO_CHAR(value) AS value
  FROM v$pgastat
 WHERE name = 'maximum PGA allocated';
-- Calculate MEMORY_TARGET
SELECT sga.value + GREATEST(pga.value, max_pga.value) AS memory_target
  FROM (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'sga_target') sga,
       (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'pga_aggregate_target') pga,
       (SELECT value FROM v$pgastat WHERE name = 'maximum PGA allocated') max_pga;
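As a worked example, assuming for illustration that sga_target is 500M, pga_aggregate_target is 200M, and the maximum PGA allocated so far is 250M, the calculation returns 500M + GREATEST(200M, 250M) = 750M, so you would size MEMORY_TARGET at 750M or more.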
Assuming our required setting was 15G, we might issue the following statements.
CONN / AS SYSDBA

-- Set the static parameter. Leave some room for possible future growth without restart.
ALTER SYSTEM SET MEMORY_MAX_TARGET=16G SCOPE=SPFILE;

-- Set the dynamic parameters. Assuming Oracle has full control.
ALTER SYSTEM SET MEMORY_TARGET=15G SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=0 SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET=0 SCOPE=SPFILE;

-- Restart instance.
SHUTDOWN IMMEDIATE;
STARTUP;
Once the database is restarted, the MEMORY_TARGET parameter can be amended as required without an instance restart.

ALTER SYSTEM SET MEMORY_TARGET=14G SCOPE=BOTH;