Oracle DBA FAQ

How to check if an Oracle instance is leaking memory

posted Sep 23, 2010, 11:02 AM by Sachchida Ojha   [ updated Sep 23, 2010, 1:57 PM ]

1] Use session statistics to measure memory growth

The Oracle server maintains statistics that record PGA and UGA memory consumption on a per-session basis. If you suspect a memory leak, use SQL similar to the statement below to check the amount of memory in use; re-issue it several times and monitor whether the memory figures are in fact growing.
select sid, substr(name,1,30), value , value/(1024*1024) "MB"
from v$statname a, v$sesstat b
where a.statistic# = b.statistic#
and name like '%a memory'
order by sid, name;

Note that if none of these statistics increases, yet the process still appears to grow in memory consumption from the operating system's standpoint, then the problem may lie in your application code, where memory is allocated but never released.
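To cross-check from the operating system side, you can sample the server process's resident set size over time. A minimal sketch; note that $$ (the shell's own PID) is used here purely for illustration and should be replaced with the PID of the suspect Oracle server process:

```shell
# Sample a process's resident set size (RSS, in KB) twice and compare.
# Substitute the suspect server process PID for $$ in real use.
pid=$$
rss_before=$(ps -o rss= -p "$pid" | tr -d ' ')
sleep 1
rss_after=$(ps -o rss= -p "$pid" | tr -d ' ')
echo "RSS before: ${rss_before} KB, after: ${rss_after} KB"
```

Run in a loop over minutes or hours; a steadily climbing RSS with flat v$sesstat figures points at the client code rather than the Oracle session memory.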

2] You may also see ORA-4031 / ORA-4030 errors when running your SQL statements. Note that getting these errors does not automatically indicate a memory leak; they can also occur simply because the system is not configured appropriately. Remember the old logic truth: a bus is a motor vehicle, but not all motor vehicles are buses.

--additional Info

select b.sid, substr(name,1,30), value , value/(1024*1024) "MB", c.serial#,c.status,c.username,c.schemaname,c.osuser,c.machine,c.terminal,c.program,c.module,state,logon_time
from v$statname a, v$sesstat b,v$session c
where a.statistic# = b.statistic#
and name like '%a memory'
and b.sid=c.sid
and osuser!='oracle'
order by status,MB desc,sid, name;

If you want the SQL text as well, also join the v$sqlarea view. This shows how much memory is used per session along with the package/procedure or the simple/complex SQL text being run:

select b.sid, substr(name,1,30), value , value/(1024*1024) "MB", s.serial#,s.sid ssid, s.status,s.username,s.schemaname,s.osuser,s.machine,s.terminal,s.program,s.module,state,logon_time,substr(p.spid,1,8) spid, substr(sa.sql_text,1,2000) txt
from v$statname a, v$sesstat b,v$session s, v$process p ,v$sqlarea sa
where a.statistic# = b.statistic#
and name like '%a memory'
and b.sid=s.sid
and osuser!='oracle'
and p.addr = s.paddr
and s.sql_address = sa.address(+)
and s.sql_hash_value = sa.hash_value(+)
order by status,MB desc,sid, name;
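Beyond the per-session figures, the v$pgastat view summarizes PGA usage for the whole instance; a quick sketch:

```sql
select name, value, unit from v$pgastat;
```

If "total PGA allocated" keeps climbing while the workload stays steady, that also supports the leak hypothesis.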

Archive location filled - how to set the archive log location to a new location?

posted Sep 21, 2010, 9:15 AM by Sachchida Ojha   [ updated Mar 22, 2011, 6:51 AM ]

conn sys/pass as sysdba
show parameter log_archive_dest
-- either point an archive destination at the new location:
alter system set log_archive_dest_1='location=/backup/DUPRPTDB/archive';
-- or, if you archive to the flash recovery area, move that instead:
alter system set db_recovery_file_dest='/new/location';
alter system switch logfile;
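To confirm the change took effect, you can re-check the parameter and the archiver status from the same SQL*Plus session; a quick sketch:

```sql
show parameter log_archive_dest_1
archive log list
```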

How to tar and untar files in UNIX?

posted Sep 11, 2010, 6:34 AM by Sachchida Ojha

To pack and compress (one step at a time):
tar -cf packed_files.tar file_to_pack1 file_to_pack2 ...

gzip packed_files.tar

 To pack and compress all at once:
tar -cf - file_to_pack1 file_to_pack2 ... | gzip -c > packed_files.tar.gz

 To create a tar from a directory and its subdirectories:
tar -cvf packed_files.tar dir_to_pack

To unpack tar files, use the following commands:
for an uncompressed tar file:
tar -xvf file_to_unpack.tar

To decompress and unpack one step at a time:
gunzip packed_files.tar.gz
tar -xf packed_files.tar

 To decompress and unpack all at once:
gunzip -c packed_files.tar.gz | tar -xf -

 To list the contents of a tar file, use the following command:
tar -tvf file_to_list.tar
To use bzip2 instead of gzip, simply replace the commands above with bzip2 where gzip is used and bunzip2 where gunzip is used.
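The pack/compress and decompress/unpack steps above can be exercised end to end; a self-contained sketch that uses a scratch directory under /tmp:

```shell
# Round trip: pack two files, compress the archive, then decompress and
# unpack into a separate directory and inspect the result.
mkdir -p /tmp/tar_demo/unpacked
cd /tmp/tar_demo
echo "hello" > file1.txt
echo "world" > file2.txt
tar -cf packed_files.tar file1.txt file2.txt
gzip -f packed_files.tar            # leaves packed_files.tar.gz
cd unpacked
gunzip -c ../packed_files.tar.gz | tar -xf -
ls                                  # file1.txt  file2.txt
```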

Compression options
BSD and GNU tar have a compression flag feature making it easier to archive and compress gzipped, bzipped or compressed tarballs in one go. The following commands can be used to take advantage of this:

To pack and compress:
using gzip:
tar -czf packed_files.tgz file_to_pack1 file_to_pack2 ...

 using bzip2:
tar -cjf packed_files.tbz2 file_to_pack1 file_to_pack2 ...

 using compress:
tar -cZf packed_files.tar.Z file_to_pack1 file_to_pack2 ...

 using some other arbitrary compression utility that works as a filter:
tar --use-compress-program=name_of_program -cf packed_files.tar.XXX file_to_pack1 file_to_pack2 ...

 To uncompress and unpack:
a gzip compressed tar file:
tar -xzf file_to_unpack.tar.gz
a bzip2 compressed tar file:
tar -xjf file_to_unpack.tar.bz2
a compress compressed tar file:
tar -xZf file_to_unpack.tar.Z
an arbitrary-compression-utility-compressed tar file:
tar --use-compress-program=name_of_program -xf file_to_unpack.tar.XXX
Some versions of tar use the -y switch to invoke bzip2 rather than -j.

Historical tricks
The following syntax (not related to archiving) was used almost universally before the -d, -R, -p and -a options were added to the cp command.

To copy directories precisely:
tar -cf - one_directory | (cd another_directory && tar -xpf - )

How does Network Address Translation work?

posted Sep 10, 2010, 2:20 PM by Sachchida Ojha

There is a reserved range in each of the three classes of IP addresses that is set aside for internal networking:
    * Range 1: Class A - 10.0.0.0 through 10.255.255.255
    * Range 2: Class B - 172.16.0.0 through 172.31.255.255
    * Range 3: Class C - 192.168.0.0 through 192.168.255.255
Although each range is in a different class, you are not required to use any particular range for your internal network. It is good practice, though, because it greatly diminishes the chance of an IP address conflict.
As businesses rely more and more on the Internet, having multiple points of connection to the Internet is fast becoming an integral part of their network strategy. Multiple connections, known as multi-homing, reduces the chance of a potentially catastrophic shutdown if one of the connections should fail.
In addition to maintaining a reliable connection, multi-homing allows a company to perform load-balancing by lowering the number of computers connecting to the Internet through any single connection. Distributing the load through multiple connections optimizes the performance and can significantly decrease wait times.
Multi-homed networks are often connected to several different ISPs (Internet Service Providers). Each ISP assigns an IP address (or range of IP addresses) to the company. Routers use BGP (Border Gateway Protocol), a part of the TCP/IP protocol suite, to exchange routing information between networks. In a multi-homed network, the router uses IBGP (Internal Border Gateway Protocol) on the stub domain side, and EBGP (External Border Gateway Protocol) to communicate with other routers.

What are the advantages/disadvantages of having multiple databases on the same host (or different hosts) INSTEAD of multiple schemas in a SINGLE database?

posted Sep 10, 2010, 2:09 PM by Sachchida Ojha

a.) Multiple databases under a single binary and host require more resources and could lead to contention for those resources, which may cause performance problems if you don't have enough resources on the host to share or distribute evenly.

b.) Multiple databases under a single binary and host translate to a single point of failure: if you need to apply a patch, for example, or there is a problem with the binary, every database is affected.

c.) Multiple databases with multiple binaries, on a single host or multiple hosts, require more planning, administration, maintenance, and resources, and could lead to performance problems if you don't have enough resources to distribute evenly. However, this guards against a single point of failure in case you have a problem with one binary or host.

d.) Multiple schemas in a single database, obviously under a single binary and host, make management and administration relatively cheaper and easier, and leave the database and its schemas with more resources, which may translate to performance improvements.

e.) Multiple schemas in a single database, however, translate to a single point of failure: if one schema in the database has problems, the rest of the schemas are, by default, affected as well. Restoring and recovering a single schema will most likely affect the other schemas too, depending on the type of backup you have.

f.) Multiple schemas in a single database could save you on Oracle licences and also make it easier to run a RAC-enabled environment, which gives your application higher availability.

All in all, choosing the best method can't be done in a silo; it requires bringing the architecture, the application(s), project scope, budget and business needs into the full picture.

What are the various ways to manage a Data Guard environment?

posted Sep 10, 2010, 2:06 PM by Sachchida Ojha

Oracle provides three ways to manage a Data Guard environment:

1. SQL*Plus and SQL Statements

You can manage a Data Guard environment with SQL*Plus and SQL commands. For example, the following SQL statement initiates a switchover operation:

SQL> alter database commit to switchover to physical standby;

2. Data Guard Broker GUI Interface (Data Guard Manager)

Data Guard Manager is a GUI version of the Data Guard broker interface that allows you to automate many of the tasks involved in configuring and monitoring a Data Guard environment.

3. Data Guard Broker Command-Line Interface (CLI)

It is an alternative to the Data Guard Manager GUI, and is useful if you want to drive the broker from batch programs or scripts. You can perform most of the activities required to manage and monitor a Data Guard environment using the CLI.
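As a sketch, a typical CLI session connects with dgmgrl and inspects the configuration (the connect string and the database name 'standby_db' are placeholders):

```
$ dgmgrl sys/password@primary
DGMGRL> show configuration;
DGMGRL> show database 'standby_db';
DGMGRL> switchover to 'standby_db';
```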

What is the main difference between physical and logical standby databases?

posted Sep 10, 2010, 2:03 PM by Sachchida Ojha

The main difference between physical and logical standby databases is the manner in which log apply services apply the archived redo data.

For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database.

For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database.
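For reference, the two apply services are started with different statements on the standby database; a sketch:

```sql
-- physical standby: start Redo Apply
alter database recover managed standby database disconnect from session;

-- logical standby: start SQL Apply
alter database start logical standby apply;
```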

What are Data Guard Protection Modes?

posted Sep 10, 2010, 1:35 PM by Sachchida Ojha

In Oracle 10g we see an exciting new approach to Data Guard management whereby we write the current redo log to a "standby redo log", allowing complete recovery in cases of catastrophic instance failure.
Maximum Protection—This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database, and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode ensures no data loss.

Maximum Availability—This mode is similar to the maximum protection mode, including zero data loss. However, if a standby database becomes unavailable (for example, because of network connectivity problems), processing continues on the primary database. When the fault is corrected, the standby database is automatically resynchronized with the primary database.

Maximum Performance—This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.
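The protection mode is set on the primary database with an ALTER DATABASE statement; for example, to switch to maximum availability:

```sql
alter database set standby database to maximize availability;
```

(Switching to maximum protection requires the database to be mounted, not open.)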

How can a DBA generate static HTML pages from SQL*Plus?

posted Sep 10, 2010, 1:33 PM by Sachchida Ojha

One can generate static HTML pages from SQL*Plus (8.1.6 and above) by setting the MARKUP option to HTML ON. This can be done by specifying -MARKUP "HTML ON" on the command line, or with the "SET MARKUP HTML ON" command. Look at this example SQL script:
set markup HTML on
spool index.html
select * from tab;
spool off
set markup HTML off
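Equivalently, the whole thing can be driven from the command line; a sketch, assuming a script named report.sql containing the spooled query:

```
sqlplus -S -M "HTML ON" scott/tiger @report.sql > index.html
```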

How can a DBA pass operating system variables to SQL*Plus?

posted Sep 10, 2010, 1:31 PM by Sachchida Ojha

One can pass operating system variables to sqlplus using this syntax:
sqlplus username/password @cmdfile.sql var1 var2 var3
Parameter var1 will be mapped to SQL*Plus variable &1, var2 to &2, etc. Look at this example:
sqlplus scott/tiger @x.sql  '"test parameter"' dual
Where x.sql consists of:
select '&1' from &2;
exit 5;
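After SQL*Plus substitutes &1 and &2 with the arguments shown above, the statement that actually executes is:

```sql
select 'test parameter' from dual;
```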
