Set the ASM profile:
**********************************************************************
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/usr/bin:/etc:/usr/lbin:/usr/bin/X11:/usr/local/bin:/usr/sbin:/usr/ucb:/sbin:/usr/java14/jre/bin:/usr/java14/bin:/usr/java131/jre/bin:/usr/java131/bin:/usr/vacpp/bin:.
export LIBPATH=$ORACLE_HOME/lib
export ORACLE_SID=+ASM

dbaref:+ASM:/home/oracle $ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Thu Mar 10 15:01:53 2011
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/rhdisk*

ASM will only scan for disks that match the ASM_DISKSTRING search string. There are two forms of ASM disk discovery: shallow and deep. Shallow discovery is simply an ASM scan (essentially an 'ls -l' listing) of all the devices that are eligible to be opened, i.e., the disk devices that have the appropriate permissions. Deep discovery is a physical open of those eligible disk devices.

SQL> desc V$ASM_DISK
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NUMBER                                       NUMBER
 DISK_NUMBER                                        NUMBER
 COMPOUND_INDEX                                     NUMBER
 INCARNATION                                        NUMBER
 MOUNT_STATUS                                       VARCHAR2(7)
 HEADER_STATUS                                      VARCHAR2(12)
 MODE_STATUS                                        VARCHAR2(7)
 STATE                                              VARCHAR2(8)
 REDUNDANCY                                         VARCHAR2(7)
 LIBRARY                                            VARCHAR2(64)
 OS_MB                                              NUMBER
 TOTAL_MB                                           NUMBER
 FREE_MB                                            NUMBER
 HOT_USED_MB                                        NUMBER
 COLD_USED_MB                                       NUMBER
 NAME                                               VARCHAR2(30)
 FAILGROUP                                          VARCHAR2(30)
 LABEL                                              VARCHAR2(31)
 PATH                                               VARCHAR2(256)
 UDID                                               VARCHAR2(64)
 PRODUCT                                            VARCHAR2(32)
 CREATE_DATE                                        DATE
 MOUNT_DATE                                         DATE
 REPAIR_TIMER                                       NUMBER
 READS                                              NUMBER
 WRITES                                             NUMBER
 READ_ERRS                                          NUMBER
 WRITE_ERRS                                         NUMBER
 READ_TIME                                          NUMBER
 WRITE_TIME                                         NUMBER
 BYTES_READ                                         NUMBER
 BYTES_WRITTEN                                      NUMBER
 PREFERRED_READ                                     VARCHAR2(1)
 HASH_VALUE                                         NUMBER
 HOT_READS                                          NUMBER
 HOT_WRITES                                         NUMBER
 HOT_BYTES_READ                                     NUMBER
 HOT_BYTES_WRITTEN                                  NUMBER
 COLD_READS                                         NUMBER
 COLD_WRITES                                        NUMBER
 COLD_BYTES_READ                                    NUMBER
 COLD_BYTES_WRITTEN                                 NUMBER
 VOTING_FILE                                        VARCHAR2(1)
 SECTOR_SIZE                                        NUMBER
 FAILGROUP_TYPE                                     VARCHAR2(7)

SQL> col NAME format a32
SQL> col PATH format a32
SQL> set linesize 200
SQL> select NAME, MOUNT_STATUS, PATH, LABEL, FAILGROUP, REDUNDANCY
     from V$ASM_DISK
     order by name
SQL> /

NAME                     MOUNT_S PATH            LABEL   FAILGROUP       REDUNDA
------------------------ ------- --------------- ------- --------------- -------
_DROPPED_0001_QA_FLASH   MISSING                         QA_FLASH_0001   UNKNOWN
DEV_DATA_0000            CACHED  /dev/rhdisk10           DEV_DATA_0000   UNKNOWN
DEV_DATA_0001            CACHED  /dev/rhdisk11           DEV_DATA_0001   UNKNOWN
DEV_DATA_0002            CACHED  /dev/rhdisk12           DEV_DATA_0002   UNKNOWN
DEV_DATA_0003            CACHED  /dev/rhdisk13           DEV_DATA_0003   UNKNOWN
QA_FLASH_0000            CACHED  /dev/rhdisk14           QA_FLASH_0000   UNKNOWN
DEV_DATA_0012            CACHED  /dev/rhdisk15           DEV_DATA_0012   UNKNOWN
DEV_DATA_0009            CACHED  /dev/rhdisk16           DEV_DATA_0009   UNKNOWN
DEV_DATA_0010            CACHED  /dev/rhdisk17           DEV_DATA_0010   UNKNOWN
DEV_DATA_0011            CACHED  /dev/rhdisk18           DEV_DATA_0011   UNKNOWN
DEV_DATA_19              CACHED  /dev/rhdisk19           DEV_DATA_19     UNKNOWN
DEV_DATA_0014            CACHED  /dev/rhdisk20           DEV_DATA_0014   UNKNOWN
DEV_DATA_0015            CACHED  /dev/rhdisk21           DEV_DATA_0015   UNKNOWN
DEV_DATA_0016            CACHED  /dev/rhdisk22           DEV_DATA_0016   UNKNOWN
DEV_DATA_0017            CACHED  /dev/rhdisk23           DEV_DATA_0017   UNKNOWN
DEV_DATA_24              CACHED  /dev/rhdisk24           DEV_DATA_24     UNKNOWN
DEV_DATA_0004            CACHED  /dev/rhdisk5            DEV_DATA_0004   UNKNOWN
DEV_DATA_0005            CACHED  /dev/rhdisk6            DEV_DATA_0005   UNKNOWN
DEV_DATA_0006            CACHED  /dev/rhdisk7            DEV_DATA_0006   UNKNOWN
DEV_DATA_0007            CACHED  /dev/rhdisk8            DEV_DATA_0007   UNKNOWN
DEV_DATA_0008            CACHED  /dev/rhdisk9            DEV_DATA_0008   UNKNOWN

21 rows selected.
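In this listing every discovered device already belongs to a diskgroup. On a system with freshly provisioned LUNs, a quick way to spot devices that ASM has discovered but not yet assigned is to filter on GROUP_NUMBER = 0; a sketch (output depends entirely on your storage):

SQL> select path, header_status, os_mb
     from v$asm_disk
     where group_number = 0;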
SQL> select name, path, group_number from v$asm_disk;

NAME                     PATH            GROUP_NUMBER
------------------------ --------------- ------------
_DROPPED_0001_QA_FLASH                              2
DEV_DATA_0000            /dev/rhdisk10              1
DEV_DATA_0001            /dev/rhdisk11              1
DEV_DATA_0002            /dev/rhdisk12              1
DEV_DATA_0003            /dev/rhdisk13              1
QA_FLASH_0000            /dev/rhdisk14              2
DEV_DATA_0012            /dev/rhdisk15              1
DEV_DATA_0009            /dev/rhdisk16              1
DEV_DATA_0010            /dev/rhdisk17              1
DEV_DATA_0011            /dev/rhdisk18              1
DEV_DATA_19              /dev/rhdisk19              1
DEV_DATA_0014            /dev/rhdisk20              1
DEV_DATA_0015            /dev/rhdisk21              1
DEV_DATA_0016            /dev/rhdisk22              1
DEV_DATA_0017            /dev/rhdisk23              1
DEV_DATA_24              /dev/rhdisk24              1
DEV_DATA_0004            /dev/rhdisk5               1
DEV_DATA_0005            /dev/rhdisk6               1
DEV_DATA_0006            /dev/rhdisk7               1
DEV_DATA_0007            /dev/rhdisk8               1
DEV_DATA_0008            /dev/rhdisk9               1

21 rows selected.

Disks have various header statuses that reflect their membership state with a diskgroup. Disks can have the following header statuses:

o Former - This state declares that the disk was formerly part of a diskgroup.
o Candidate - When a disk is in this state, it indicates that it is available to be added to a diskgroup.
o Member - This state indicates that a disk is already part of a diskgroup.
o Provisioned - This state is similar to candidate, in that the disk is available to diskgroups. However, the provisioned state indicates that this disk has been configured or made available using ASMLIB.
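Since only disks with a candidate, former, or provisioned header status can be added to a diskgroup, it is worth checking the status before a create or alter diskgroup. A sketch:

SQL> select path, header_status
     from v$asm_disk
     order by header_status, path;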
ASM and Multipathing

An I/O path generally consists of an initiator port, fabric port, target port, and LUN. Each permutation of this I/O path is considered an independent path. Dynamic multipathing/failover tools aggregate these independent paths into a single logical path. This path abstraction provides I/O load balancing across the host bus adapters (HBAs), as well as non-disruptive failover on I/O path failures. Multipathing (MP) software requires that all the required disks be visible on each available and eligible HBA. An MP driver detects multipaths by performing a SCSI inquiry command. Most multipathing drivers support multipath services for fibre channel attached SCSI-3 devices; these drivers receive naming and transport services from one or more physical HBA devices. To support multipathing, a physical HBA driver must comply with the multipathing services provided by the MP driver.

Multipathing tools provide the following benefits:

o Provide a single block device interface for a multipathed LUN.
o Detect any component failures in the I/O path, e.g., fabric port, channel adapter, or HBA.
o When a loss of path occurs, ensure that I/Os are re-routed to the available paths, with no process disruption.
o Reconfigure the multipaths automatically when events occur.
o Ensure that failed paths get revalidated as soon as possible and provide auto-failback capabilities.
o Configure the multipaths to maximize performance using various load balancing methods, e.g., round robin, least I/Os queued, or least service time.

When a given disk has several paths defined, each one will be presented as a unique path name at the OS level; e.g., /dev/rdsk/c3t19d1s4 and /dev/rdsk/c7t22d1s4 could point to the same disk device. ASM, however, can only tolerate the discovery of one unique device path per disk. For example, if the asm_diskstring is '/dev/rdsk/*', then several paths to the same device will be discovered, and ASM will produce an error message stating this. A multipath driver, which sits above the SCSI block layer, will generally produce a pseudo device that virtualizes the subpaths. For example, in the case of EMC's PowerPath, you can use an asm_diskstring setting of '/dev/rdsk/emcpower*', as shown in the sketch below. When I/O is issued to this pseudo device, the multipath driver intercepts it and provides the necessary load balancing to the underlying subpaths.

Examples of multipathing software include EMC PowerPath, Veritas DMP, Sun Traffic Manager, Hitachi HDLM, and IBM SDDPCM. Linux 2.6 has a kernel-based multipathing driver called Device Mapper. Additionally, some HBA vendors also provide multipathing solutions. Oracle Corporation does not certify or qualify these multipathing tools. Although ASM does not provide multipathing capabilities, ASM does leverage multipathing tools, as long as the path or device produced by the multipathing tool returns a successful return code from an fstat system call. Metalink Note 294869.1 provides more details on ASM and multipathing.
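A minimal sketch of pointing ASM at the PowerPath pseudo devices rather than the raw subpaths; the device pattern is the one quoted above, and any other vendor's pseudo-device naming scheme works the same way:

SQL> -- requires an spfile; use scope=memory for a one-off change
SQL> alter system set asm_diskstring = '/dev/rdsk/emcpower*' scope=both;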
DiskGroups

SQL> desc v$asm_diskgroup
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NUMBER                                       NUMBER
 NAME                                               VARCHAR2(30)
 SECTOR_SIZE                                        NUMBER
 BLOCK_SIZE                                         NUMBER
 ALLOCATION_UNIT_SIZE                               NUMBER
 STATE                                              VARCHAR2(11)
 TYPE                                               VARCHAR2(6)
 TOTAL_MB                                           NUMBER
 FREE_MB                                            NUMBER
 HOT_USED_MB                                        NUMBER
 COLD_USED_MB                                       NUMBER
 REQUIRED_MIRROR_FREE_MB                            NUMBER
 USABLE_FILE_MB                                     NUMBER
 OFFLINE_DISKS                                      NUMBER
 COMPATIBILITY                                      VARCHAR2(60)
 DATABASE_COMPATIBILITY                             VARCHAR2(60)
 VOTING_FILES                                       VARCHAR2(1)

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME       STATE       TYPE     TOTAL_MB    FREE_MB
---------- ----------- ------ ---------- ----------
DEV_DATA   MOUNTED     EXTERN    2128019      88785
QA_FLASH   MOUNTED     NORMAL      51200      35813

SQL> select name, path, mode_status, state, disk_number from v$asm_disk;

NAME                     PATH            MODE_ST STATE    DISK_NUMBER
------------------------ --------------- ------- -------- -----------
_DROPPED_0001_QA_FLASH                   OFFLINE FORCING            1
DEV_DATA_0000            /dev/rhdisk10   ONLINE  NORMAL             0
DEV_DATA_0001            /dev/rhdisk11   ONLINE  NORMAL             1
DEV_DATA_0002            /dev/rhdisk12   ONLINE  NORMAL             2
DEV_DATA_0003            /dev/rhdisk13   ONLINE  NORMAL             3
QA_FLASH_0000            /dev/rhdisk14   ONLINE  NORMAL             0
DEV_DATA_0012            /dev/rhdisk15   ONLINE  NORMAL            12
DEV_DATA_0009            /dev/rhdisk16   ONLINE  NORMAL             9
DEV_DATA_0010            /dev/rhdisk17   ONLINE  NORMAL            10
DEV_DATA_0011            /dev/rhdisk18   ONLINE  NORMAL            11
DEV_DATA_19              /dev/rhdisk19   ONLINE  NORMAL            13
DEV_DATA_0014            /dev/rhdisk20   ONLINE  NORMAL            14
DEV_DATA_0015            /dev/rhdisk21   ONLINE  NORMAL            15
DEV_DATA_0016            /dev/rhdisk22   ONLINE  NORMAL            16
DEV_DATA_0017            /dev/rhdisk23   ONLINE  NORMAL            17
DEV_DATA_24              /dev/rhdisk24   ONLINE  NORMAL            18
DEV_DATA_0004            /dev/rhdisk5    ONLINE  NORMAL             4
DEV_DATA_0005            /dev/rhdisk6    ONLINE  NORMAL             5
DEV_DATA_0006            /dev/rhdisk7    ONLINE  NORMAL             6
DEV_DATA_0007            /dev/rhdisk8    ONLINE  NORMAL             7
DEV_DATA_0008            /dev/rhdisk9    ONLINE  NORMAL             8

21 rows selected.

SQL> create diskgroup DATA external redundancy disk
     '/dev/rdsk/c3t19d5s4',
     '/dev/rdsk/c3t19d16s4',
     '/dev/rdsk/c3t19d17s4',
     '/dev/rdsk/c3t19d18s4';

SQL> select name, path, mode_status, state, disk_number from v$asm_disk;

NAME         PATH                   MODE_ST STATE    DISK_NUMBER
------------ ---------------------- ------- -------- -----------
DATA_0000    /dev/rdsk/c3t19d5s4    ONLINE  NORMAL             0
DATA_0001    /dev/rdsk/c3t19d16s4   ONLINE  NORMAL             1
DATA_0002    /dev/rdsk/c3t19d17s4   ONLINE  NORMAL             2
DATA_0003    /dev/rdsk/c3t19d18s4   ONLINE  NORMAL             3

In the example above, a DATA diskgroup is created using four disks that reside in a storage array, with the redundancy handled externally by the storage array. Once the disks are discovered, a diskgroup can be created that encapsulates one or more of these disks. A diskgroup, which is the highest-level data structure in ASM, is comparable to an LVM volume group. However, there are several differentiators between typical LVM volume groups and ASM diskgroups:

o An ASM filesystem layer is implicitly created within a diskgroup. This filesystem is transparent to users and accessible only through ASM, interfacing databases, and the 10.2 ASM command line tool.
o There are inherent automatic file-level striping and mirroring capabilities. A database file created within an ASM diskgroup will have its file extents (not to be confused with its database extents) distributed equally across all online disks in the diskgroup, which provides an even I/O load.

The creation of a diskgroup involves the validation of the disks to be added. These disks must have the following attributes:

o Cannot already be in use by another diskgroup.
o Must not have a pre-existing ASM header.
o Cannot have an Oracle file header (from an Oracle raw device datafile).

This check and validation prevents ASM from destroying any in-use data device. Disks with a valid header status of candidate, former, or provisioned are the only ones allowed to be diskgroup members (a normal-redundancy counterpart to the external-redundancy example above is sketched below).
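For contrast with the external-redundancy example, a minimal sketch of creating a normal-redundancy diskgroup with two failure groups, so ASM itself mirrors extents across the failgroups. The diskgroup name and disk paths here are illustrative only:

SQL> create diskgroup FLASH normal redundancy
     failgroup fg1 disk '/dev/rdsk/c3t19d21s4'
     failgroup fg2 disk '/dev/rdsk/c3t19d22s4';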
Diskgroups and databases

A diskgroup can contain files from many different 10g Oracle databases, and these databases can either be on the same server or reside on different servers. However, a disk and a database file can only be part of one diskgroup. Alternatively, one Oracle database may also store its files in multiple diskgroups managed by the same ASM instance. Allowing multiple databases to share a diskgroup provides greater potential for improved disk utilization and greater overall throughput. To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally no more than two diskgroups be maintained and managed per RAC cluster or single ASM instance:

o Database area: where active database files such as datafiles, control files, online redo logs, and change tracking files used in incremental backups are stored.
o Flash recovery area: where recovery-related files are created, such as multiplexed copies of the current control file and online redo logs, archived redo logs, backup sets, and flashback log files.

To provide higher availability for the database, when a Flash Recovery Area is chosen at database creation time, an active copy of the controlfile and one member set of the redo log group are stored in the Flash Recovery Area. Note that additional copies of the controlfile or extra logfiles can be created and placed in either diskgroup as desired.

SQL> desc V$ASM_CLIENT
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NUMBER                                       NUMBER
 INSTANCE_NAME                                      VARCHAR2(64)
 DB_NAME                                            VARCHAR2(8)
 STATUS                                             VARCHAR2(12)
 SOFTWARE_VERSION                                   VARCHAR2(60)
 COMPATIBLE_VERSION                                 VARCHAR2(60)

SQL> select GROUP_NUMBER, INSTANCE_NAME, DB_NAME, STATUS, SOFTWARE_VERSION, COMPATIBLE_VERSION
     from V$ASM_CLIENT;

GROUP_NUMBER INSTANCE_NAME DB_NAME  STATUS    SOFTWARE_VERSION COMPATIBLE_VERSION
------------ ------------- -------- --------- ---------------- ------------------
           1 TXQADB2       TXQADB2  CONNECTED 11.2.0.1.0       10.2.0.4.0
           1 TXQADB3       TXQADB3  CONNECTED 11.2.0.1.0       10.2.0.4.0
           2 TXQADB3       TXQADB3  CONNECTED 11.2.0.1.0       10.2.0.4.0
           1 TXQADB4       TXQADB4  CONNECTED 11.2.0.1.0       10.2.0.4.0
           2 TXQADB4       TXQADB4  CONNECTED 11.2.0.1.0       10.2.0.4.0
           1 TXQADB5       TXQADB5  CONNECTED 11.2.0.1.0       10.2.0.3.0
           2 TXQADB5       TXQADB5  CONNECTED 11.2.0.1.0       10.2.0.3.0
           1 STGRPTDB      STGRPTDB CONNECTED 11.2.0.1.0       11.0.0.0.0

8 rows selected.

Database init.ora parameters to support ASM

The SGA parameters for a database instance need slight modification to support ASM extent maps and other ASM information. Note that if the 10g Automatic Memory Management feature is being used on the database instance, the following sizing data can be treated as informational only, or as supplemental data for gauging appropriate SGA values. Oracle highly recommends using the Automatic Memory Management feature. The following are guidelines for SGA sizing on the database instance:

o Processes: add 16.
o Large_pool: add an additional 600k.
o Shared_pool: additional memory is required to store extent maps. Aggregate the values from the following queries to obtain the current database storage size that is either already on ASM or will be stored in ASM. Then determine the redundancy type that is used (or will be used), and calculate the shared_pool increment using the aggregated value as input (a worked example follows below).

select sum(bytes)/(1024*1024*1024) from v$datafile;
select sum(bytes)/(1024*1024*1024) from v$logfile a, v$log b where a.group#=b.group#;
select sum(bytes)/(1024*1024*1024) from v$tempfile where status='ONLINE';

o For diskgroups using external redundancy: (every 100 GB of space needs 1 MB of extra shared pool) + 2 MB.
o For diskgroups using normal redundancy: (every 50 GB of space needs 1 MB of extra shared pool) + 4 MB.
o For diskgroups using high redundancy: (every 33 GB of space needs 1 MB of extra shared pool) + 6 MB.
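A worked example with hypothetical figures: suppose the three queries above return 950 GB of datafiles, 40 GB of redo, and 10 GB of tempfiles, for an aggregate of 1,000 GB on ASM:

-- external redundancy: (1000 GB / 100 GB) * 1 MB + 2 MB = 12 MB extra shared pool
-- normal redundancy:   (1000 GB /  50 GB) * 1 MB + 4 MB = 24 MB extra shared pool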
Rebalance and Redistribution

With traditional volume managers, expansion/growth or shrinkage of striped filesystems has typically been difficult. With ASM, these disk/volume changes become a seamless redistribution (rebalancing) of the striped data, and can be performed online. Any change in the storage configuration will trigger a rebalance. The main objective of the rebalance operation is to always provide an even distribution of file extents and space usage across all disks in the diskgroup. Rebalancing is performed on all database files on a per-file basis; however, some files may not require a rebalance. The Oracle background process RBAL, in the ASM instance, manages this rebalance. The rebalance process examines each file extent map, and the new extents are re-plotted onto the new storage configuration.

For example, consider an 8-disk diskgroup containing a datafile with 40 extents (each disk houses 5 extents). When 2 new drives of the same size are added, that datafile is rebalanced and spread across 10 drives, with each drive containing 4 extents. Only 8 extents need to move to complete the rebalance; i.e., a complete redistribution of extents is not necessary, as only the minimum number of extents are moved to reach an equal distribution. Note that a weighting factor, influenced by disk size and file size, affects rebalancing; a larger drive will consume more extents, so that distribution stays even relative to overall size.

The following is a typical process flow for ASM rebalancing:

1. On the ASM instance, a DBA adds (or drops) a disk to (from) a diskgroup.
2. This invokes the RBAL process to create the rebalance plan and then begin coordination of the redistribution.
3. RBAL calculates the estimated time and work required to perform the task, and then messages the ARBx processes to actually handle the request. The number of ARBx processes invoked is directly determined by the asm_power_limit parameter.
4. The Continuing Operations Directory (metadata) is updated to reflect a rebalance activity.
5. Each extent to be relocated is assigned to an ARBx process.
6. ARBx performs the rebalance on these extents. Each extent is locked, relocated, and unlocked. This is shown as operation REBAL in V$ASM_OPERATION.

Rebalancing involves physical movement of file extents. The impact is generally low because the rebalance is done one allocation unit (AU) at a time, so there is only one outstanding I/O at any given time per ARBx process. This should not adversely affect online database activity; however, it is generally advisable to schedule the rebalance operation during off-peak hours.

The init.ora parameter asm_power_limit is used to influence the throughput and speed of the rebalance operation; note that it is specific to each ASM instance. The range of values for asm_power_limit is 0 to 11, where 11 is full throttle and 1 is low speed. A value of 0, which turns off automatic rebalance, should be used with caution. The power value can also be set for a specific rebalance activity using the alter diskgroup command; that value is only effective for that specific rebalance task (see the session example below). A power value of 0 indicates that no rebalance should occur for this rebalance. This setting is particularly useful when adding or removing storage (that has external redundancy) and deferring the rebalance to a later scheduled time (a sketch of manually invoking a deferred rebalance appears below).

"Session1 SQL"> alter diskgroup DATA add disk '/dev/rdsk/c3t19d39s4' rebalance power 11;

From another session:

"Session2 SQL"> select * from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- -----------
           1 REBAL WAIT         11          0          0          0          0           0
           1 DSCV  WAIT         11          0          0          0          0           0

(time passes.....)

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- -----------
           1 REBAL REAP         11          2         25        219        485           0

When removing or adding several disks, it is best practice to add or remove the drives all at once; this reduces the number of rebalance operations needed for the storage changes.
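If a rebalance was deferred with power 0, or needs to run at a different speed than the instance-wide default, it can be invoked manually; a sketch:

SQL> alter diskgroup DATA rebalance power 5;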
An ASM diskgroup rebalance is an asynchronous operation: control is returned to the DBA immediately after the operation is dispatched to the background, and the status of the ongoing operation can be queried from V$ASM_OPERATION. However, there are situations when the diskgroup operation needs to be synchronous, i.e., to wait until the rebalance is completed. In Oracle Database 10g Release 2, ASM alter diskgroup commands that result in a rebalance have the option of specifying WAIT. This allows for accurate [sequential] scripting that may rely on the space change from a rebalance completing before any subsequent action is taken. For instance, if you add 100GB of storage to a completely full diskgroup, you won't be able to use all 100GB of storage until the rebalance completes. If a new rebalance command is entered while one is already in progress in wait mode, the prior command will not return until the diskgroup is in a balanced state or the rebalance operation encounters an error.

The example SQL script below illustrates how the WAIT option can be used in SQL scripting. The script adds a new disk, /dev/raw/raw6, and waits until the add and rebalance operations complete, returning control back to the script. The subsequent step adds a large tablespace.

#An example script to test the WAIT option
alter diskgroup data add disk '/dev/raw/raw6' rebalance power 2 wait;

#login into the database and create a tablespace for the next month's Order Entry data
sqlplus oe_dba/oe1@proddb << EOF
create BIGFILE tablespace May_OE datafile size 800G;
EOF

With external redundancy, dropping and adding disks in a diskgroup is very seamless and becomes more of an exercise in scheduling the rebalance. However, with failure groups, some planning and forethought may be required with respect to how disks are removed and added.

SQL> desc v$asm_file
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NUMBER                                       NUMBER
 FILE_NUMBER                                        NUMBER
 COMPOUND_INDEX                                     NUMBER
 INCARNATION                                        NUMBER
 BLOCK_SIZE                                         NUMBER
 BLOCKS                                             NUMBER
 BYTES                                              NUMBER
 SPACE                                              NUMBER
 TYPE                                               VARCHAR2(64)
 REDUNDANCY                                         VARCHAR2(6)
 STRIPED                                            VARCHAR2(6)
 CREATION_DATE                                      DATE
 MODIFICATION_DATE                                  DATE
 REDUNDANCY_LOWERED                                 VARCHAR2(1)
 PERMISSIONS                                        VARCHAR2(16)
 USER_NUMBER                                        NUMBER
 USER_INCARNATION                                   NUMBER
 USERGROUP_NUMBER                                   NUMBER
 USERGROUP_INCARNATION                              NUMBER
 PRIMARY_REGION                                     VARCHAR2(4)
 MIRROR_REGION                                      VARCHAR2(4)
 HOT_READS                                          NUMBER
 HOT_WRITES                                         NUMBER
 HOT_BYTES_READ                                     NUMBER
 HOT_BYTES_WRITTEN                                  NUMBER
 COLD_READS                                         NUMBER
 COLD_WRITES                                        NUMBER
 COLD_BYTES_READ                                    NUMBER
 COLD_BYTES_WRITTEN                                 NUMBER

+ASM (ASM instance):

SQL> select file_number, sum(bytes)/(1024*1024) from v$asm_file group by file_number;

FILE_NUMBER SUM(BYTES)/(1024*1024)
----------- ----------------------
        256             360.007813
        257             35.0078125
        258             450.007813
        261             .002441406
        262             150.007813
        263             23.0078125
        264             10.0004883
        265              5.0078125
        266             10.0004883
        267             10.0004883
        268              2.2109375

ORCL (database instance):

SQL> select name from v$datafile;

NAME
----------------------------------------
+DATA/orcl/datafile/sysaux.256.3
+DATA/orcl/datafile/system.258.3
+DATA/orcl/datafile/undotbs1.257.3
+DATA/orcl/datafile/users.265.3
+DATA/orcl/datafile/nitin.263.3
/u01/oradata/orcl/ishan01.dbf

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/orcl/onlinelog/group_3.264.3
+DATA/orcl/onlinelog/group_2.266.3

The file number in the ASM instance can be used to correlate and determine filenames in the database instance; the example below shows the relationship between a database instance's file identity and ASM's.
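A sketch of extracting the embedded ASM file number from the database file names so they can be matched against FILE_NUMBER in V$ASM_FILE. Run this on the database instance; it assumes OMF-style names ending in a file#.incarnation suffix, as shown above, and uses the subexpression argument of regexp_substr (available from 11g):

SQL> select name,
            to_number(regexp_substr(name, '\.(\d+)\.', 1, 1, null, 1)) asm_file_number
     from v$datafile
     where name like '+%';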
Note that the file number from V$ASM_FILE is embedded in the file name. The filename notation described thus far is called the Fully Qualified FileName (FQFN) notation. FQFNs are generally long and awkward; therefore, to make the file-naming convention easier to remember, the ASM alias name format was introduced. ASM aliases are essentially in hierarchical directory format, similar to a filesystem hierarchy:

/u01/oradata/dbname/datafile_name

Alias names specify a diskgroup name, but instead of a file and incarnation number, a user-friendly string name is used. Alias ASM filenames are distinguished from fully qualified or numeric names because they do not end in a dotted pair of numbers. The following is an example of creating an ASM alias:

#Create the ASM directory
SQL> alter diskgroup DATA add directory '+DATA/oradata/orcl';

#Now create the alias
SQL> alter diskgroup DATA add alias '+DATA/oradata/orcl/nitin01.dbf'
     for '+DATA/orcl/datafile/nitin.263.3';

As a best practice, every database should implement the Oracle Managed File (OMF) feature. OMF is enabled by setting the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters. OMF allows simpler file naming and also provides better file handling. For example, if a tablespace is dropped, ASM will delete the file automatically only if it is an OMF datafile; if OMF is not used, the DBA must drop the file manually by connecting to the ASM instance. OMF files are located in the DB_CREATE_FILE_DEST directory. Please review the OMF documentation on usage and implementation. Note that OMF is not used whenever file names are explicitly provided in create/alter tablespace add datafile commands.

Aliases are particularly useful when dealing with controlfiles and spfiles; e.g., an alias ASM filename is normally used in the CONTROL_FILES and SPFILE initialization parameters. In Figure 10 below, the SPFILE and CONTROL_FILES parameters are set to aliases, and db_create_file_dest and db_recovery_file_dest are set to the appropriate OMF destinations:

spfile                 = +DATA/orcl/spfileorcl.ora
control_files          = +DATA/orcl/controlfile/control_01.ctl
db_create_file_dest    = +DATA
db_recovery_file_dest  = +FLASH

The following table lists the ASM-related V$ views and what each contains for each instance type:

ASM related V$ view   ASM instance                                  Database instance
--------------------  --------------------------------------------  --------------------------------------------
V$ASM_DISKGROUP       Row per diskgroup discovered                  Row per diskgroup available to the local
                                                                    database instance
V$ASM_CLIENT          Row per client connected                      Row for the local ASM instance if the
                                                                    database contains open ASM files
V$ASM_DISK            Row per disk discovered, across all           Row per disk, across all diskgroups used
                      diskgroups as well as disks that are not      by the local database instance
                      in diskgroups
V$ASM_FILE            Row per file allocated, across all client     N/A
                      instances and diskgroups
V$ASM_TEMPLATE        Row per template                              Row per template with the associated
                                                                    attached diskgroup
V$ASM_ALIAS           Row per file alias                            N/A
V$ASM_OPERATION       Row per active ASM operation (could be        N/A
                      more than one operation per invocation)
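Directories and aliases created as above can be verified from the ASM instance via V$ASM_ALIAS; a sketch, assuming the DATA diskgroup is group number 1 (check V$ASM_DISKGROUP first):

SQL> select name, file_number, alias_directory
     from v$asm_alias
     where group_number = 1;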