Automatic Storage Management (ASM)

With each new release of Oracle it becomes more evident that it is a product designed to operate independently of the underlying platform; the Automatic Storage Management (ASM) feature is an example of this.

Oracle’s ASM feature makes it possible to manage disks directly, without an intermediary file system (such as FAT32, NTFS, EXT2, or EXT3).

Eliminating the intermediary file system yields better performance for block-level read/write operations and enables load balancing across disks.

There are three fault-tolerance options available to DBAs (a sketch of the corresponding CREATE DISKGROUP statements follows this list):

1. External: Fault tolerance is left to the storage layer; the DBA in effect tells the DBMS that a fault-tolerant RAID architecture is already in place, so ASM uses the disks purely for performance (striping, as in RAID 0).

2. Normal: A storage structure similar to RAID 1 (mirroring) is used, except that only the extents are copied rather than the entire disk.

3. High: The same data is mirrored (at the extent level, not as a RAID mirror) on three separate disks, so at least three disks are required. No data is lost even if two disks fail; in that situation it is strongly recommended to replace the faulty disks as soon as possible.
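As a sketch (the disk labels D1, D2, and D3 and the group names are illustrative assumptions), the option is chosen when the disk group is created:

    -- External: ASM only stripes; the storage layer provides the fault tolerance
    CREATE DISKGROUP dg_ext EXTERNAL REDUNDANCY DISK 'ORCL:D1';

    -- Normal: two-way extent mirroring
    CREATE DISKGROUP dg_norm NORMAL REDUNDANCY DISK 'ORCL:D1', 'ORCL:D2';

    -- High: three-way extent mirroring, at least three disks
    CREATE DISKGROUP dg_high HIGH REDUNDANCY DISK 'ORCL:D1', 'ORCL:D2', 'ORCL:D3';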

A raw partition is created and handed directly to Oracle; datafiles are written straight onto this partition, and as a result a datafile no longer has a name in the operating system’s file system.

The differences between ASM and a conventional file system are illustrated below:

[Diagram: a server using the file system vs. a server using ASM]

Setup
The system is prepared for setting up Oracle (please refer to the Oracle 10g setup (RHEL) article)
Use fdisk to create the partitions on the disk (please refer to the Oracle 10g setup (RHEL) article); ASM uses the partitions raw, without a file system being created on them.
Install the ASM packages (rpm)
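A minimal sketch, assuming Oracle’s ASMLib packages matching the running kernel have been downloaded (exact versions vary):

    rpm -Uvh oracleasm-support-*.rpm \
             oracleasm-`uname -r`-*.rpm \
             oracleasmlib-*.rpm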

Add the following lines to /etc/rc.local
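The exact lines depend on the environment; a sketch assuming the raw-device binding approach with the candidate partition /dev/sdb1 (both device names are assumptions):

    # bind the ASM candidate partition to a raw device and fix its ownership
    /usr/bin/raw /dev/raw/raw1 /dev/sdb1
    chown oracle:dba /dev/raw/raw1
    chmod 660 /dev/raw/raw1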

Configure oracleasm
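Configuration is interactive, via the init script installed by oracleasm-support:

    /etc/init.d/oracleasm configure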

(As root) the prompted questions should be answered in the following sequence:
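A sketch of a typical session, assuming the software owner is oracle and its primary group is dba:

    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y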

Naming (labeling) the ASM disks
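A sketch, assuming the partition prepared earlier is /dev/sdb1 and the label chosen is D1:

    /etc/init.d/oracleasm createdisk D1 /dev/sdb1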

To delete
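Deleting a label, again as a sketch with the assumed name D1:

    /etc/init.d/oracleasm deletedisk D1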

At this point the Oracle setup begins, and the disks can be viewed using the config asm command.
To add a disk after creating a database or a database instance
1. Physically install the disk in the server, then create partitions on it using fdisk.

2. Add the following lines to the /etc/rc.local file

3. The following command, run as the root user, makes the disk available to ASM:
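A sketch, assuming the new partition is /dev/sdc1 and the label is D3; on the remaining RAC nodes a scan makes the new disk visible:

    /etc/init.d/oracleasm createdisk D3 /dev/sdc1
    /etc/init.d/oracleasm scandisks     # on the other nodes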

4. Add the disk to a disk group; the statement can be written in several alternative forms, as sketched below.
In our example scenario there were two disk groups, DATA and RECOVERY, and the Oracle instance was named orcl.
A 3 GB disk was added and included in the DATA group using the syntax from step 4 above.
Note: ALTER DISKGROUP DATA DROP DISK D3 will remove or drop the disk.
Setting disks located on separate SCSI controllers as each other’s failgroups prevents problems that can arise from a controller failure.

Commands to be entered as the root user

Lines to be added to rc.local

create diskgroup Disk_Group_A normal redundancy
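A minimal sketch of the complete statement, assuming one raw device per SCSI controller (the device paths are assumptions):

    CREATE DISKGROUP Disk_Group_A NORMAL REDUNDANCY
      FAILGROUP controller_1 DISK '/dev/raw/raw1'
      FAILGROUP controller_2 DISK '/dev/raw/raw2';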

Creating a dedicated asm user for the ASM instance makes administration easier.
As a root user
Create the asm user, adding it to the oinstall, dba, and asmadmin groups.
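As a sketch:

    useradd -g oinstall -G dba,asmadmin asm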

Assign a password to the asm user.
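Using the standard OS command:

    passwd asm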

Add the following lines to the /home/asm/.bash_profile file
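A sketch, assuming a 10g home under /u01/app/oracle (the ORACLE_HOME path is an assumption):

    export ORACLE_SID=+ASM
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
    export PATH=$ORACLE_HOME/bin:$PATH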

Connecting to an ASM instance and starting up
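A sketch; on 10g the ASM instance is administered as SYSDBA (later releases use SYSASM):

    sqlplus / as sysdba
    SQL> startup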

Checking the CSSD status
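A sketch of two ways to check (the crsctl subcommand differs slightly between releases):

    crsctl check cssd      # older releases
    ps -ef | grep ocssd    # look for the daemon process directly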

Disk group information
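For example:

    SELECT group_number, name, state, type, total_mb, free_mb
      FROM v$asm_diskgroup;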

Disk usage
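For example, per disk:

    SELECT name, path, total_mb, free_mb
      FROM v$asm_disk;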

Creating disk groups

Adding a disk to a group

Dropping (deleting) a diskgroup
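A sketch, assuming the group to be dropped is named DATA:

    DROP DISKGROUP data INCLUDING CONTENTS;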

Checking the status of the rebalance operation
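The progress of a running rebalance can be followed in v$asm_operation:

    SELECT group_number, operation, state, power, sofar, est_work, est_minutes
      FROM v$asm_operation;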

Mounting/Dismounting disk groups
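A sketch, assuming a group named DATA:

    ALTER DISKGROUP data DISMOUNT;
    ALTER DISKGROUP data MOUNT;
    -- or all groups at once:
    ALTER DISKGROUP ALL MOUNT;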

Checking the consistency of the disk groups
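A sketch, again assuming the DATA group:

    ALTER DISKGROUP data CHECK ALL;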

Running tests (on databases that are not in archivelog mode)
The examples here show some of ASM’s capabilities:
Three disks, D1 (5 GB capacity), D2 (5 GB), and D3 (2 GB), are about to be included in the DATA disk group.
The disk group type is set to normal redundancy, i.e. the same data is written onto two separate disks.
Disk D3 (with its 2 GB capacity) is then deleted (physically removed from the server), and the database continues to be operational.

The resulting messages appear in the +ASM instance’s alert log:

Errors such as those in the following lines occur because the ASM instance has not started:

Creating a tablespace whose datafiles reside within ASM
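A sketch, assuming the disk group is named DATA (the tablespace name test_ts is illustrative); with Oracle-managed files the datafile name inside ASM is generated automatically:

    CREATE TABLESPACE test_ts DATAFILE '+DATA' SIZE 100M;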

To create the spfile:
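A sketch, assuming the instance is orcl (the pfile path is an assumption):

    CREATE SPFILE='+DATA/orcl/spfileorcl.ora'
      FROM PFILE='/u01/app/oracle/admin/orcl/pfile/init.ora';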

The ASM connection

NODE-02

Retrieving information

The remaining space on the ASM

The free-space percentage and file count
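Both of the last two items can be retrieved with a single query, sketched here:

    SELECT g.name,
           g.free_mb,
           ROUND(g.free_mb / g.total_mb * 100, 2) AS pct_free,
           COUNT(f.file_number)                   AS file_count
      FROM v$asm_diskgroup g
      LEFT JOIN v$asm_file f ON f.group_number = g.group_number
     GROUP BY g.name, g.free_mb, g.total_mb;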

Mount disk groups

Note: It may be necessary to mount all diskgroups individually in all of the nodes.
Note: The resources and ASM may not be visible on the first node, or on nodes started later, when the cluster is triggered using the following command:

    /u01/11.2.0/grid/bin/crsctl start cluster -all

As a grid user:
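A status listing that shows the resources on every node, as a sketch:

    /u01/11.2.0/grid/bin/crsctl stat res -t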

All nodes are visible.
Shutting down the RAC
All commands should be run as the oracle user.

Database stop
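A sketch, assuming the database is named orcl:

    srvctl stop database -d orcl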

ASM stop
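A sketch, assuming the node name is rac1 (repeat for each node):

    srvctl stop asm -n rac1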

All application stop
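Again per node, with the assumed node name rac1:

    srvctl stop nodeapps -n rac1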

CRS stop (on all nodes, as the root user)
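As a sketch, using the grid home from the earlier examples:

    /u01/11.2.0/grid/bin/crsctl stop crs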

Verify voting disks configuration
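For example:

    crsctl query css votedisk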

Testing Failover Capabilities

Cluster registry integrity check succeeded
In order to fix this, run ocrconfig; afterwards you can check /u01/app/crs/log/rac1/crsd/crsd.log for information about the new OCR mirror (the actions taken during replacement).
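A sketch of such a replacement (the mirror location is an assumption):

    ocrconfig -replace ocrmirror /u01/app/crs/cdata/ocrmirror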

Adding a group/disk: see Document ID 361468.1.
The hugepages_settings.sh script can be found in Document 401749.1.

The parameters will be set by default on:
Oracle Linux with the oracle-validated package installed (see Document 437743.1)
Oracle Exadata DB compute nodes

Adding a shared disk
Accessing the ESX server through SSH (secure shell)
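As a sketch only, assuming the host name, datastore path, and 10 GB size (all illustrative):

    ssh root@esx-host
    # create an eager-zeroed virtual disk to be shared between the nodes
    vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/shared/asm_shared.vmdk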

Mounting a Solaris ASM disk
1-

2-

3-

Defining a link as a grid user.

To change the group owner.
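A sketch of the last two steps, with an illustrative Solaris slice name:

    # as the grid user: give the raw slice a friendly, stable name
    ln -s /dev/rdsk/c1t2d0s6 /u01/asmdisks/disk1
    # as root: change the group owner of the device
    chown grid:asmadmin /dev/rdsk/c1t2d0s6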

The SPFILE within the ASM

RHEL 5

RHEL 6

Create it in /dev/ASM/*

To completely remove a currently active setup
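One way to deconfigure Grid Infrastructure, as a sketch (run on each node; the grid home path is an assumption):

    /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force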

The solution to the error "the exclusive mode cluster start failed / rootcrs.pl execution failed" (11.2.0.3) can be found in Document ID 1050908.1.

Delete

Notes: Create the required OS groups and users
1. Create the OS groups; enter these commands as the ‘root’ user (a combined sketch of steps 1 and 2 follows this list).

2. Create the users that will own the Oracle software.
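A combined sketch of both steps (the numeric group and user IDs are illustrative):

    # 1. OS groups
    groupadd -g 501 oinstall
    groupadd -g 502 dba
    # 2. software owner
    useradd -u 501 -g oinstall -G dba oracle
    passwd oracle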

Tips on Installing and Using ASMLib on Linux [ID 394953.1]