Partition and Mount a Disk on Linux for an Existing Oracle Database

Here is the draft that I followed for this post:

Partition EMC LUNs to allocate more disk space for Oracle



1. Partition the presented disks

"EMC was able to create the 3x20Gb storage "

Say for example the presented disks are:

# fdisk -l
/dev/hda
/dev/hdb
/dev/hdc

# fdisk /dev/hda
n        --new partition
p        --primary partition
1        --partition number 1
<Enter>  --accept the default first cylinder
+1024M   --size of the partition
n        --new partition
p        --primary partition
2        --partition number 2
<Enter>  --accept the default first cylinder
+1024M   --size of the partition
n        --new partition
p        --primary partition
<Enter>  --accept the default partition number (3)
<Enter>  --accept the default first cylinder
<Enter>  --accept the default last cylinder (use the remaining space)
p        --print the partitions I made
w        --write the new partition table and save

--see my drawing for the details.
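
If the kernel does not pick up the new partitions right away, something like this should help (a sketch, assuming the parted package is installed; a reboot would also do it):

# partprobe /dev/hda   --ask the kernel to re-read the partition table
# fdisk -l /dev/hda    --/dev/hda1, /dev/hda2, and /dev/hda3 should now be listed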


2. Format the partitioned devices

# mkfs -t ext3 /dev/hda1
# mkfs -t ext3 /dev/hda2
# mkfs -t ext3 /dev/hda3
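
Optionally, you can also label the new filesystems here so the fstab entries in step 14 can use labels instead of raw device names (just a sketch; the label names below are my own convention):

# e2label /dev/hda1 bdump
# e2label /dev/hda2 cdump
# e2label /dev/hda3 udump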


3. Create a TEMPORARY mount point for the new devices

# mkdir /mnt/bdump
# mkdir /mnt/cdump
# mkdir /mnt/udump

--set necessary permissions to the oracle user
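
Something like this should cover it (a sketch; I am assuming the usual oracle:oinstall owner and group, so adjust to whatever your installation uses). Keep in mind that ownership set on an empty mount point is hidden once a filesystem is mounted on top of it, so re-apply it on the mounted filesystems after step 7 if needed:

# chown oracle:oinstall /mnt/bdump /mnt/cdump /mnt/udump
# chmod 775 /mnt/bdump /mnt/cdump /mnt/udump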


4. Disable the cluster services on the node you are working on (run as root; this just keeps CRS from autostarting after the reboot in step 6)

$CRS_HOME/bin/crsctl disable crs

5. Shut down the instance, ASM, services, and nodeapps on the node you are working on
-- srvctl stop instance -d database_name -i instance_name
-- srvctl stop asm -n hostname
-- srvctl stop service -d database_name -s service_name -i instance_name
-- srvctl stop nodeapps -n hostname
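
To double-check that everything on this node really is down before the reboot, something like this should work (same placeholders as above):
-- srvctl status instance -d database_name -i instance_name
-- srvctl status asm -n hostname
-- srvctl status nodeapps -n hostname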

6. Reboot the node
-- after the reboot, wait around 5-10 minutes
-- issue ps -ef | grep crs
-- issue $CRS_HOME/bin/crs_stat -t
-- both should confirm that the cluster stack is down and no Oracle processes are running

7. Mount the new partitions into their temporary mount points:
-- mount /dev/hda1 /mnt/bdump
-- mount /dev/hda2 /mnt/cdump
-- mount /dev/hda3 /mnt/udump

8. Navigate to the $ORACLE_BASE directory, identify the files/directories to be copied, and copy them to the new devices
-- cp -rp $ORACLE_BASE/admin/SID/bdump/* /mnt/bdump
-- cp -rp $ORACLE_BASE/admin/SID/cdump/* /mnt/cdump
-- cp -rp $ORACLE_BASE/admin/SID/udump/* /mnt/udump
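
A quick sanity check before renaming anything in the next step (optional, but cheap insurance; no output means the copies match):
-- diff -r $ORACLE_BASE/admin/SID/bdump /mnt/bdump
-- diff -r $ORACLE_BASE/admin/SID/cdump /mnt/cdump
-- diff -r $ORACLE_BASE/admin/SID/udump /mnt/udump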

9. To keep a rollback path in case of errors, rename the old dump directories rather than deleting them
-- mv $ORACLE_BASE/admin/SID/bdump $ORACLE_BASE/admin/SID/bdump-old
-- mv $ORACLE_BASE/admin/SID/cdump $ORACLE_BASE/admin/SID/cdump-old
-- mv $ORACLE_BASE/admin/SID/udump $ORACLE_BASE/admin/SID/udump-old

10. Create the dump directories that will serve as the PERMANENT MOUNTPOINTS of the new drives
-- mkdir $ORACLE_BASE/admin/SID/bdump
-- mkdir $ORACLE_BASE/admin/SID/cdump
-- mkdir $ORACLE_BASE/admin/SID/udump

11. Test the mounting of the new devices
-- umount /mnt/bdump
-- umount /mnt/cdump
-- umount /mnt/udump
-- mount /dev/hda1 $ORACLE_BASE/admin/SID/bdump
-- mount /dev/hda2 $ORACLE_BASE/admin/SID/cdump
-- mount /dev/hda3 $ORACLE_BASE/admin/SID/udump

12. Verify mounting
-- df -h should output something similar to this:


Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1             1024M   16K  1023M   1% /u01/app/oracle/admin/SID/bdump
/dev/hda2             1024M   16K  1023M   1% /u01/app/oracle/admin/SID/cdump
/dev/hda3             1024M   16K  1023M   1% /u01/app/oracle/admin/SID/udump


13. Check the navigation. You should not notice any difference when accessing files such as the alert log; they should appear to still be in the same location.
-- cat $ORACLE_BASE/admin/SID/bdump/alert_SID.log

14. Once these look good, create the entries in /etc/fstab to permanently mount the new devices (see the sample entries below)
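
For reference, the fstab entries would look something like this (a sketch based on the mount points shown in step 12; double-check the device names and mount options against your own setup):

/dev/hda1  /u01/app/oracle/admin/SID/bdump  ext3  defaults  0 0
/dev/hda2  /u01/app/oracle/admin/SID/cdump  ext3  defaults  0 0
/dev/hda3  /u01/app/oracle/admin/SID/udump  ext3  defaults  0 0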

15. Reboot the machine, and check for persistence.
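
After the reboot, something like this should show all three filesystems back on their permanent mount points:
-- df -h | grep admin/SID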

16. If the mount points are now solid and you have no more issues, re-enable CRS
-- $CRS_HOME/bin/crsctl enable crs

17. Reboot the machine, move on to the next node, repeat.
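
Once that last reboot is done, a quick way to confirm the stack came back cleanly before moving to the next node (database_name is a placeholder, as above):
-- $CRS_HOME/bin/crs_stat -t
-- srvctl status database -d database_name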
