
SCAN info: http://www.databasejournal.com/features/oracle/article.php/3915026/Single-Client-Access-Name-SCAN-for-Oracle-Database-11g-RAC.htm
_________________________________________________________________________________
11gR2 RAC steps:
_________________________________________________________________________________
Perform the below steps as required on RAC and NON-RAC machines:
--------------------------------------------------------------------------------
STEP 1:- (Check Kernel Versions)
--------------------------------------------------------------------------------
vi /etc/profile
export PS1="\u@\H:\t:\w > \n$"
export PS1="\u@\H:\@:\$PWD > \n$"
PS1="[$LOGNAME@`uname -n`:\$PWD]> "

As root user, execute the commands:
rpm -qa | grep kernel
rpm -qa | grep glibc                (not required)
rpm -qi esc
strace -p <pid>                     (trace a running process)
pmap <pid>                          (memory map of a process)
netstat -nap | grep -i 5902 OR lsof -i -n -P   (to see what ports are opened)
iptables -L
iptables -F
uname -a                            (all OS info)
uname -r or uname -mr               (to check the kernel version)
arch                                (to check the architecture of the OS)
echo $HOSTTYPE                      (to check the architecture of the OS)
echo $LOGNAME

To check the amount of memory you have, type:
cat /proc/meminfo | grep MemTotal OR free -m

To check CPU info:-

cat /proc/cpuinfo

To check the amount of swap you have allocated, type:
cat /proc/meminfo | grep SwapTotal OR swapon -s

(Check the Red Hat release:-)
cat /etc/redhat-release OR cat /etc/issue

(Check the partitions in the system:-)
cat /proc/partitions

(For host information:)
cat /etc/sysconfig/network

(NIC configuration:-)
vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.159.255
HWADDR=00:0c:29:eb:a2:e1
IPADDR=192.168.159.21
NETMASK=255.255.255.0
NETWORK=192.168.159.0
ONBOOT=yes
GATEWAY=192.168.159.2
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes

vi /etc/sysconfig/network-scripts/ifcfg-eth1
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth1
BOOTPROTO=static
BROADCAST=172.168.159.255
HWADDR=00:0c:29:eb:a2:eb
IPADDR=172.168.159.21
NETMASK=255.255.255.0
NETWORK=172.168.159.0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes

vi /etc/sysconfig/network-scripts/ifcfg-eth2
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth2
BOOTPROTO=static
BROADCAST=61.168.159.255
HWADDR=00:0c:29:eb:a2:f5
IPADDR=61.168.159.21
NETMASK=255.255.255.0
NETWORK=61.168.159.0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes

dmesg | grep Attached   (check attached LUNs on a node)
netstat -i
netstat -a
--------------------------------------------------------------------------------
STEP 2:- (Set Kernel Parameters)
--------------------------------------------------------------------------------
Add the following lines to the /etc/sysctl.conf file:
#kernel.shmmax = 4294967295    (by default present in RHEL 5)
#kernel.shmall = 268435456     (by default present in RHEL 5)
kernel.shmmni = 4096
# semaphores: semmsl, semmni, semmns, semopm
kernel.sem = 250 32000 100 128
# 512 * PROCESSES (the parameter from init.ora)
fs.file-max = 6815744
# not required
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Run the following command to change the current kernel parameters (machine reboot not required):
/sbin/sysctl -p
/sbin/sysctl -A
ipcs -lms   (m -> shared memory, s -> semaphores)
--------------------------------------------------------------------------------
STEP 3:- Maximum Number of Open File Descriptors and Processes for Shell Session
--------------------------------------------------------------------------------
vi /etc/security/limits.conf
# depending on the size of the db, these may need to be larger
# for file descriptors - grid user

grid    soft    nofile    1024
grid    hard    nofile    65536

# for file descriptors - oracle user
oracle  soft    nofile    1024
oracle  hard    nofile    65536

# for processes - grid user
grid    soft    nproc     2047
grid    hard    nproc     16384

# for processes - oracle user
oracle  soft    nproc     2047
oracle  hard    nproc     16384

vi /etc/pam.d/login
session required /lib/security/pam_limits.so   # (RHEL 4)
session required pam_limits.so                 # (RHEL 5)
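The limits above only take effect at login, so it is worth comparing what a session actually gets against Oracle's required minimums. A minimal sketch, assuming POSIX sh; check_limit is a hypothetical helper, not an Oracle tool:

```shell
#!/bin/sh
# check_limit NAME CURRENT REQUIRED -> prints OK or TOO LOW.
check_limit() {
    name=$1; current=$2; required=$3
    if [ "$current" -ge "$required" ]; then
        echo "$name OK ($current >= $required)"
    else
        echo "$name TOO LOW ($current < $required)"
    fi
}

# On a real node, feed live values, e.g.:
#   check_limit nofile "$(ulimit -Hn)" 65536
#   check_limit nproc  "$(ulimit -Hu)" 16384
check_limit nofile 65536 65536
check_limit nproc  1024  16384
```

Run it once as grid and once as oracle after logging in fresh, since limits set while a session is open are not picked up.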

onsctl ping
--------------------------------------------------------------------------------
STEP 4:- SELinux (Security Enhanced Linux) disable
--------------------------------------------------------------------------------
Disable secure Linux by editing the /etc/selinux/config file, making sure the SELINUX
flag is set as follows:
SELINUX=disabled
--------------------------------------------------------------------------------
STEP 5:- Install the following Packages
--------------------------------------------------------------------------------
cat /proc/filesystems | grep iso
mkdir -p /mydata
mkdir -p /mydata/cdrom0
ls /dev/cd*
ls /dev/dvd*
mount -t iso9660 /dev/dvd /mydata/cdrom0
cd /mydata/cdrom0/Server
cd /
umount /mydata/cdrom0

(Packages required for Oracle 11gR2 installation on RHEL 5:-)

cd /media/RHEL_5.4\ x86_64\ DVD/Server/ OR cd /media/RHEL_5.4\ i386\ DVD/Server/

rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-common-2.*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-0.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
rpm -Uvh sysstat-7.*
rpm -Uvh libXp-*
rpm -Uvh libXp-devel*
rpm -Uvh pdksh*
rpm -Uvh openmotif*
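Rather than eyeballing the rpm output, the required packages can be diffed against the installed set. A minimal sketch, assuming POSIX sh; missing_packages is a hypothetical helper, and the sample list below covers only a subset of the packages above:

```shell
#!/bin/sh
# Compare a required-package list against installed package names read from stdin.
required="binutils glibc-common libaio libgcc libstdc++ make gcc gcc-c++ sysstat unixODBC"

missing_packages() {
    installed=$(cat)    # stdin: one installed package name per line
    for pkg in $required; do
        echo "$installed" | grep -qx "$pkg" || echo "MISSING: $pkg"
    done
}

# On a real node:  rpm -qa --qf '%{NAME}\n' | missing_packages
# Example with a stub list (only three packages "installed"):
printf 'binutils\nmake\ngcc\n' | missing_packages
```

Version checks (the -2.*, -4.* suffixes above) still have to be done with rpm itself; this only catches packages that are absent entirely.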

--------------------------------------------------------------------------------
STEP 5:- Create Groups, User and directories and assign permission and ownership
--------------------------------------------------------------------------------
As root user:
/usr/sbin/groupadd -g 701  oinstall
/usr/sbin/groupadd -g 702  dba
/usr/sbin/groupadd -g 703  oper
/usr/sbin/groupadd -g 1102 asmdba
/usr/sbin/groupadd -g 1103 asmadmin
/usr/sbin/groupadd -g 1104 asmoper

Confirm:
tail -8 /etc/group

Grid Infrastructure user:
/usr/sbin/useradd -u 903 -m -d /home/grid -g 701 -G 1102,1103,1104,702 -s /bin/bash -c "Grid Infrastructure 11gR2 Owner" grid
OR
/usr/sbin/useradd -u 903 -m -d /home/grid -g oinstall -G asmdba,asmadmin,asmoper,dba -s /bin/bash -c "Grid Infrastructure 11gR2 Owner" grid
passwd grid

Oracle Home user:
/usr/sbin/useradd -u 902 -m -d /home/oracle -g 701 -G 702,703,1102 -s /bin/bash -c "Oracle Home 11gR2 Owner" oracle
OR
/usr/sbin/useradd -u 902 -m -d /home/oracle -g oinstall -G dba,oper,asmdba -s /bin/bash -c "Oracle Home 11gR2 Owner" oracle
passwd oracle

Confirm:
tail -4 /etc/passwd; tail -4 /etc/shadow
id -a grid; id -a oracle
groups grid; groups oracle

As oracle and grid user (hard limit and soft limit for processes and file descriptors):
$ ulimit -Sa or ulimit -Sn
$ ulimit -Ha or ulimit -Hn

Creation of directories:
mkdir /u002 /u003 /u004 /u007
mkdir -p /u002/app/grid
mkdir -p /u002/app/oracle
mkdir -p /u003/app/oracle

chown -R grid:oinstall   /u002
chown -R oracle:oinstall /u003
chown -R oracle:oinstall /u004
chown -R oracle:oinstall /u007

ls -ld /u002 /u003 /u004 /u007
chmod 0755 /u002   (not required)
chmod 0755 /u003   (not required)

mkdir -p /grid/app
mkdir -p /oracle/product
mkdir -p /grid/product/11.2.0.2.0 /grid/product/11.2.0.3.0
mkdir -p /oracle/product/10.2.0.5.0
mkdir -p /oracle/product/11.2.0.2.0 /oracle/product/11.2.0.3.0
mkdir -p /oracle/FRA
mkdir -p /gridnfsHome/app
mkdir -p /oranfsHome/product
mkdir -p /gridnfsHome/product/11.2.0.2.0 /gridnfsHome/product/11.2.0.3.0
mkdir -p /oranfsHome/product/10.2.0.5.0
mkdir -p /oranfsHome/product/11.2.0.2.0 /oranfsHome/product/11.2.0.3.0

chown -R grid:oinstall /grid /gridnfsHome
chown -R oracle:oinstall /oracle /oranfsHome /oradata1 /oradata2 /oraredo1 /oraredo2 /oranfsdata01 /oranfsdata02
chown -R oracle:oinstall /oracle /oranfsHome /oradata3 /oradata4 /oraredo3 /oraredo4 /oranfsdata01 /oranfsdata02

The above permissions are already applied by the default umask value 0022.
--------------------------------------------------------------------------------
STEP 6:- Oracle profile Settings
--------------------------------------------------------------------------------
vi /home/grid/.bash_profile
#Grid 11gR2 Settings
umask 022
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u002/app/grid; export ORACLE_BASE
ORACLE_HOME=/u002/app/oracle/product/11.2.0/grid_1; export ORACLE_HOME
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_OWNER=oracle; export ORACLE_OWNER
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_HOSTNAME=rac1.sipu.com; export ORACLE_HOSTNAME
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
NLS_DATE_FORMAT="yyyy-mm-dd:hh24:mi:ss"; export NLS_DATE_FORMAT
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=/usr/X11R6/bin:/usr/sbin:/usr/bin:/usr/local/bin:/bin:$PATH; export PATH
PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/network/lib:$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib; export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export EDITOR=vi
echo "ORACLE_SID  = $ORACLE_SID"
echo "ORACLE_BASE = $ORACLE_BASE"
echo "ORACLE_HOME = $ORACLE_HOME"
echo "TNS_ADMIN   = $TNS_ADMIN"
echo "HOSTNAME    = $ORACLE_HOSTNAME"
echo "USERNAME    = $LOGNAME and GROUPS ARE"; groups

vi /home/oracle/.bash_profile
#Oracle 11gR2 Settings
umask 022
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u003/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=SIPU1; export ORACLE_SID
ORACLE_OWNER=oracle; export ORACLE_OWNER
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_HOSTNAME=rac1.sipu.com; export ORACLE_HOSTNAME
ORACLE_UNQNAME=PROD; export ORACLE_UNQNAME
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
NLS_DATE_FORMAT="yyyy-mm-dd:hh24:mi:ss"; export NLS_DATE_FORMAT
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=/usr/X11R6/bin:/usr/sbin:/usr/bin:/usr/local/bin:/bin:$PATH; export PATH
PATH=$ORACLE_HOME/OPatch:$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/network/lib:$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib:/usr/local/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib; export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export EDITOR=vi
echo "ORACLE_SID  = $ORACLE_SID"
echo "ORACLE_BASE = $ORACLE_BASE"
echo "ORACLE_HOME = $ORACLE_HOME"
echo "TNS_ADMIN   = $TNS_ADMIN"
echo "HOSTNAME    = $ORACLE_HOSTNAME"
echo "USERNAME    = $LOGNAME and GROUPS ARE"; groups

vi /etc/profile
if [ "$USER" = "grid" -o "$USER" = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
(run the below command to reflect the changes immediately:-)
. /etc/profile
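The group memberships planned for grid and oracle earlier can be verified mechanically before moving on. A minimal sketch, assuming POSIX sh; missing_groups is a hypothetical helper that compares a have-list against a want-list:

```shell
#!/bin/sh
# missing_groups "HAVE (space-separated)" WANT1,WANT2,... -> prints missing groups.
missing_groups() {
    have=$1; want=$2
    for g in $(echo "$want" | tr ',' ' '); do
        case " $have " in
            *" $g "*) ;;                      # present, nothing to report
            *) echo "missing: $g" ;;
        esac
    done
}

# On a real node, feed the live membership:
#   missing_groups "$(id -Gn grid)"   oinstall,asmdba,asmadmin,asmoper
#   missing_groups "$(id -Gn oracle)" oinstall,dba,oper,asmdba
# Example with a stub membership:
missing_groups "oinstall dba oper" oinstall,dba,oper,asmdba
```

In the stub example oracle lacks asmdba, which is exactly the case that later stops DBCA from seeing the ASM disk groups (see STEP 8 below, which already exists in these notes).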

--------------------------------------------------------------------------------
STEP 7:- (run these two scripts as root when asked during installation)
--------------------------------------------------------------------------------
ls -l /etc/ntp.conf

vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""

chkconfig --list ntpd   (start the ntpd daemon at runlevels 3 and 5)
chkconfig ntpd off

/etc/init.d/ntpd status
/etc/init.d/ntpd stop
/etc/init.d/ntpd start
/etc/init.d/ntpd restart
OR
service ntpd status
service ntpd stop
service ntpd start
service ntpd restart

To use CTSS instead of NTP:
mv /etc/ntp.conf /etc/ntp.conf.original
Also remove the following file (it maintains the pid for the NTP daemon):
rm /var/run/ntpd.pid

When the installer finds that the NTP protocol is not active, the Cluster Time
Synchronization Service is automatically installed in active mode and synchronizes the
time across the nodes. If NTP is found configured, then the Cluster Time Synchronization
Service is started in observer mode, and no active time synchronization is performed by
Oracle Clusterware within the cluster.

server 61.17.203.224   (ntp.conf entry)

To confirm that ctssd is active after installation, enter the following command as the
Grid installation owner (grid):
crsctl check ctss
(perform the above steps if you want to use CTSS of 11gR2 instead of NTP)
--------------------------------------------------------------------------------
After Grid Infrastructure installation:
/u002/app/oraInventory/orainstRoot.sh
/u002/app/oracle/product/11.2.0/grid_1/root.sh
--------------------------------------------------------------------------------
After Oracle installation:
/u003/app/oracle/product/11.2.0/db_1/root.sh
dbca/emca
--------------------------------------------------------------------------------
STEP 8:- (some important steps that should be followed, mentioned below)
--------------------------------------------------------------------------------
DBCA is able to see the ASM disk groups only if the oracle user is in the asmdba group.

User: grid    Groups: oinstall, asmdba, asmadmin, asmoper, dba
User: oracle  Groups: oinstall, dba, oper, asmdba

And the disks have to belong to grid:asmadmin.

Note:- In an Oracle Restart environment (Grid software for a standalone server), the
grid user has to be in the dba group as well, otherwise the database cannot be started.
The oracle user must be a member of the osdba (asmdba) group for ASM to create, access,
and manage files in an ASM disk group. Use:
# usermod -a -G asmdba oracle

Cause:  Grid Infrastructure for a cluster installations assign root ownership to all
parent directories of the Grid home location. As a result, ownership of all named
directories in the software location path is changed to root, creating permission errors
for all subsequent installations into the same Oracle base.
Action: Specify a software location outside of an Oracle base directory for Grid
Infrastructure for a cluster installations.

Note:- olr.loc keeps the Grid path and the OLR file (Oracle Local Registry) on a local node.
Linux:   /etc/oracle
Solaris: /var/opt/oracle
---------------------------------------------------------------------------------
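The olr.loc note above can be checked from a script. A minimal sketch, assuming olr.loc uses simple key=value lines (olrconfig_loc, crs_home); the stub file and paths below are illustrative, not taken from a real node:

```shell
#!/bin/sh
# Extract the OLR file location from an olr.loc file
# (/etc/oracle/olr.loc on Linux, /var/opt/oracle/olr.loc on Solaris).
olr_location() {
    awk -F= '$1 == "olrconfig_loc" { print $2 }' "$1"
}

# Example against a stub file:
cat > /tmp/olr.loc.sample <<'EOF'
olrconfig_loc=/u002/app/11.2.0/grid/cdata/rac1.olr
crs_home=/u002/app/11.2.0/grid
EOF
olr_location /tmp/olr.loc.sample
```

On a real node, pass /etc/oracle/olr.loc instead of the stub.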

--------------------------------------------------------------------------------
ASM configuration steps:
--------------------------------------------------------------------------------
1) Create file system on HDD Partitions
--------------------------------------------------------------------------------
fdisk -l | grep -i ^disk

Create file systems for storage of Oracle binaries. Examples:
mkfs.ext3 -j -b 4096 /dev/sdd12
mkfs.ext3 -j -b 4096 /dev/sdd13
mkfs.ext3 -j -b 4096 /dev/sdd14
mkfs.ext3 -j -b 4096 /dev/sdd15
OR
mke2fs -j -b 4096 /dev/sdd12
mke2fs -j -b 4096 /dev/sdd13
mke2fs -j -b 4096 /dev/sdd14
mke2fs -j -b 4096 /dev/sdd15
OR
mkfs -t ext3 -m 1 -v /dev/sdd12
mkfs -t ext3 -m 1 -v /dev/sdd13
mkfs -t ext3 -m 1 -v /dev/sdd14
mkfs -t ext3 -m 1 -v /dev/sdd15
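The four per-device commands above can be replayed as a loop. A minimal sketch; this form is a dry run that only prints the commands (mkfs is destructive on real devices), so drop the echo to actually execute:

```shell
#!/bin/sh
# Dry-run loop over the partitions formatted above.
for dev in /dev/sdd12 /dev/sdd13 /dev/sdd14 /dev/sdd15; do
    echo mkfs.ext3 -j -b 4096 "$dev"
done
```

Inspect the printed commands against /proc/partitions before removing the echo.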

mkdir -p /u002/ /u003/ /u008/ /orabackup/

mount /dev/sdd12 /u002
mount /dev/sdd13 /u003
mount /dev/sdd14 /u008
mount /dev/sdd15 /orabackup
df -h
OR
mount -t ext3 /dev/sdd12 /u002
mount -t ext3 /dev/sdd13 /u003
mount -t ext3 /dev/sdd14 /u008
mount -t ext3 /dev/sdd15 /orabackup
OR (quick re-mount in read-only)
mount -o remount,ro /dev/sdd12
mount -o remount,ro /dev/sdd13
mount -o remount,ro /dev/sdd14
mount -o remount,ro /dev/sdd15
df -h
mount
mount -v
cat /proc/partitions

vi /etc/fstab
/dev/sdd12   /u002        ext3   defaults   0 0
/dev/sdd13   /u003        ext3   defaults   0 0
/dev/sdd14   /u008        ext3   defaults   0 0
/dev/sdd15   /orabackup   ext3   defaults   0 0

To unmount the partitions:
umount /dev/sdd12   (mounted at /u002)
umount /dev/sdd13   (mounted at /u003)
umount /dev/sdd14   (mounted at /u008)
umount /dev/sdd15   (mounted at /orabackup)
df -h
mount -a   (reads /etc/fstab and mounts the filesystems at their mount points)

Partition label and auto-mounting (use either the above or the below procedure):
e2label - change the label on an ext2/ext3 filesystem
e2label /dev/sdd12 GRIDHOME
e2label /dev/sdd13 ORACLEHOME
e2label /dev/sdd14 ORACLEDBFILES
e2label /dev/sdd15 ORACLEBACKUP

Entry in /etc/fstab:
LABEL=GRIDHOME        /u002        ext3   defaults   0 0
LABEL=ORACLEHOME      /u003        ext3   defaults   0 0
LABEL=ORACLEDBFILES   /u008        ext3   defaults   0 0
LABEL=ORACLEBACKUP    /orabackup   ext3   defaults   0 0
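A quick way to review the label-based entries is to extract them from fstab itself. A minimal sketch using a stub file (point it at /etc/fstab on a real node; the stub content only mirrors two of the entries above):

```shell
#!/bin/sh
# List LABEL= fstab entries and the mount point each maps to.
cat > /tmp/fstab.sample <<'EOF'
LABEL=GRIDHOME      /u002      ext3 defaults 0 0
LABEL=ORACLEHOME    /u003      ext3 defaults 0 0
EOF
awk '$1 ~ /^LABEL=/ { sub("LABEL=", "", $1); print $1, "->", $2 }' /tmp/fstab.sample
```

Cross-check each printed label against the output of e2label on the corresponding device before rebooting.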

It is also possible to set the filesystem label using the -L option of tune2fs:
# tune2fs -L usbstorage /dev/sdb2

# /sbin/dumpe2fs /dev/sdd12 | grep -i 'Block size'
# /sbin/dumpe2fs /dev/sdd13 | grep -i 'Block size'
# /sbin/dumpe2fs /dev/sdd14 | grep -i 'Block size'
# /sbin/dumpe2fs /dev/sdd15 | grep -i 'Block size'
--------------------------------------------------------------------------------
2) ASMLib Configuration:
--------------------------------------------------------------------------------
rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
         oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm \
         oracleasmlib-2.0.4-1.el5.x86_64.rpm
OR
rpm -ivh oracleasm-support-2.1.3-1.el5.i386.rpm
rpm -ivh oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5.i686.rpm
rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm

Point:- Striping happens within disks inside failgroups, but mirroring happens across
failgroups.

Redundancy levels:
high     -> min 3 failgroups -> Oracle knows about mirroring.
normal   -> min 2 failgroups -> Oracle knows about mirroring.
external -> min 1 failgroup; mirroring may or may not be done at the H/W RAID level,
            and Oracle does not know about it.
--------------------------------------------------------------------------------
Now create the ASM disks to be used within disk groups:
--------------------------------------------------------------------------------

service oracleasm configure -i OR /etc/init.d/oracleasm configure -i

for DATA -> 50gb
service oracleasm createdisk DATA_FG1_DISK1 /dev/sdc1
service oracleasm createdisk DATA_FG1_DISK2 /dev/sdc2
service oracleasm createdisk DATA_FG1_DISK3 /dev/sdc3
service oracleasm createdisk DATA_FG2_DISK1 /dev/sdc5
service oracleasm createdisk DATA_FG2_DISK2 /dev/sdc6
service oracleasm createdisk DATA_FG2_DISK3 /dev/sdc7
service oracleasm createdisk DATA_FG3_DISK1 /dev/sdc8
service oracleasm createdisk DATA_FG3_DISK2 /dev/sdc9
service oracleasm createdisk DATA_FG3_DISK3 /dev/sdc10
--------------------------------------------------------------------------------
for ARCH1 -> 5gb
service oracleasm createdisk ARCH1_FG1_DISK1 /dev/sdd1
service oracleasm createdisk ARCH1_FG1_DISK2 /dev/sdd2
service oracleasm createdisk ARCH1_FG2_DISK1 /dev/sdd3
service oracleasm createdisk ARCH1_FG2_DISK2 /dev/sdd5
for ARCH2 -> 5gb
service oracleasm createdisk ARCH2_FG1_DISK1 /dev/sdd6
service oracleasm createdisk ARCH2_FG1_DISK2 /dev/sdd7
service oracleasm createdisk ARCH2_FG2_DISK1 /dev/sdd8
service oracleasm createdisk ARC2_FG2_DISK2 /dev/sdd9
--------------------------------------------------------------------------------
for FRA -> 10gb
service oracleasm createdisk FRA_FG1_DISK1 /dev/sdc11
service oracleasm createdisk FRA_FG1_DISK2 /dev/sdc12
service oracleasm createdisk FRA_FG2_DISK1 /dev/sdc13
service oracleasm createdisk FRA_FG2_DISK2 /dev/sdc14
--------------------------------------------------------------------------------
for ACFSDB -> 20gb
service oracleasm createdisk ACFSDB_FG1_DISK1 /dev/sdd10
service oracleasm createdisk ACFSDB_FG2_DISK1 /dev/sdd11
for SPFILE -> 10gb

service oracleasm createdisk SPFILE_FG1_DISK1 /dev/sdc15
--------------------------------------------------------------------------------
To remove the ASM disks again:
service oracleasm deletedisk DATA_FG1_DISK1
service oracleasm deletedisk DATA_FG1_DISK2
service oracleasm deletedisk DATA_FG1_DISK3
service oracleasm deletedisk DATA_FG2_DISK1
service oracleasm deletedisk DATA_FG2_DISK2
service oracleasm deletedisk DATA_FG2_DISK3
service oracleasm deletedisk DATA_FG3_DISK1
service oracleasm deletedisk DATA_FG3_DISK2
service oracleasm deletedisk DATA_FG3_DISK3
service oracleasm deletedisk ARCH1_FG1_DISK1
service oracleasm deletedisk ARCH1_FG1_DISK2
service oracleasm deletedisk ARCH1_FG2_DISK1
service oracleasm deletedisk ARCH1_FG2_DISK2
service oracleasm deletedisk ARCH2_FG1_DISK1
service oracleasm deletedisk ARCH2_FG1_DISK2
service oracleasm deletedisk ARCH2_FG2_DISK1
service oracleasm deletedisk ARCH2_FG2_DISK2
service oracleasm deletedisk FRA_FG1_DISK1
service oracleasm deletedisk FRA_FG1_DISK2
service oracleasm deletedisk FRA_FG2_DISK1
service oracleasm deletedisk FRA_FG2_DISK2
service oracleasm deletedisk ACFSDB_FG1_DISK1
service oracleasm deletedisk ACFSDB_FG2_DISK1
service oracleasm deletedisk SPFILE_FG1_DISK1
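The createdisk commands above can be driven from a single name-to-device table, which makes a node rebuild repeatable. A minimal sketch; this form is a dry run that only prints the commands (remove the echo and run as root to execute), and the three-row table is a shortened example of the full mapping above:

```shell
#!/bin/sh
# Replay ASM disk creation from a "NAME DEVICE" table on stdin.
while read name dev; do
    [ -n "$name" ] && echo service oracleasm createdisk "$name" "$dev"
done <<'EOF'
DATA_FG1_DISK1 /dev/sdc1
DATA_FG1_DISK2 /dev/sdc2
ARCH1_FG1_DISK1 /dev/sdd1
EOF
```

Keeping the table in a file under version control also documents which failgroup each raw device belongs to.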

chkconfig --list oracleasm

service oracleasm listdisks
service oracleasm scandisks
service oracleasm status
service oracleasm start
service oracleasm restart
service oracleasm stop
service oracleasm querydisk ASMDISKNAME
service oracleasm deletedisk ASMDISKNAME
service oracleasm enable
service oracleasm disable
OR
/etc/init.d/oracleasm listdisks
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm status
/etc/init.d/oracleasm start
/etc/init.d/oracleasm restart
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm querydisk ASMDISKNAME
/etc/init.d/oracleasm deletedisk ASMDISKNAME
/etc/init.d/oracleasm enable
/etc/init.d/oracleasm disable

ls -l /dev/oracleasm
ls -l /dev/oracleasm/disks   (disk permissions should be 0660)
ps -ef | grep -i oracle
ps -ef | grep -i grid
--------------------------------------------------------------------------------
STEP 2:-
export ORACLE_SID=+ASM

directory creation and pfile creation:
su - grid
mkdir -p /u01/app/grid/admin/BUS/adump
mkdir -p /u01/app/grid/admin/BUS/pfile

pfile creation:
export ORACLE_SID=+ASM
echo $ORACLE_SID
ls -ltr $ORACLE_HOME/dbs/
vi $ORACLE_HOME/dbs/init+ASM.ora
--------------------------------------------------------------------------------
sample init.ora file for an ASM instance

#instance_type -> Set to ASM or RDBMS depending on the instance type. The default is
RDBMS.

#db_unique_name -> Specifies a globally unique name for the database. This defaults to
+ASM but must be altered if you intend to run multiple ASM instances.

#asm_diskgroups -> The list of disk groups that should be mounted by an ASM instance
during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM
configuration changes are automatically reflected in this parameter.

#asm_diskstring -> Specifies a value that can be used to limit the disks considered for
discovery. Altering the default value may improve the speed of disk group mount time and
the speed of adding a disk to a disk group. Changing the parameter to a value which
prevents the discovery of already mounted disks results in an error. The default value
is NULL, allowing all suitable disks to be considered.

#asm_power_limit -> The maximum power for a rebalancing operation on an ASM instance.
The valid values range from 1 to 11, with 1 being the default. The higher the limit, the
more resources are allocated, resulting in faster rebalancing operations. This value is
also used as the default when the POWER clause is omitted from a rebalance operation.

instance_type=asm
instance_name='+ASM'
db_unique_name='+MYASM'
instance_number=1
asm_diskgroups='DATA1','DATA2','ARCH','FRA','SPFILE','ACFSDB'
asm_diskstring='/dev/oracleasm/disks/*'
asm_power_limit=1
audit_file_dest=/u002/app/grid/admin/ASM/adump
diagnostic_dest=/u002/app/grid
large_pool_size=12M
local_listener='ASM'
processes=40
sessions=100
remote_login_passwordfile='SHARED'

Note:- Regarding the PROCESSES parameter, Oracle follows the following rule when
increasing PROCESSES in the ASM parameter file:
processes = 25 + 15n, where n is the number of instances on the box using ASM for their
storage.
--------------------------------------------------------------------------------
export ORACLE_SID=SIPU
cd $ORACLE_HOME/dbs
vi initSIPU.ora
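The PROCESSES rule above (processes = 25 + 15n) is easy to get wrong when the node count changes, so a one-line helper is handy. A minimal sketch, assuming POSIX sh arithmetic; asm_processes is a hypothetical name:

```shell
#!/bin/sh
# processes = 25 + 15n, n = number of database instances using this ASM
# instance for storage.
asm_processes() { echo $((25 + 15 * $1)); }

asm_processes 1    # one database instance
asm_processes 3    # three database instances
```

With n = 1 this yields 40, which is how a value like processes=40 in an ASM pfile lines up with a single database instance on the node.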

--------------------------------------------------------------------------------
sample init.ora file for an RDBMS NON-ASM instance

# Cache and I/O
db_block_size=8192
db_cache_size=83886080
db_file_multiblock_read_count=16
log_buffer=3129344
# Database Identification
db_name='TRUCK'
db_unique_name='TRUCK'
db_domain='myoracle.com'
# Instance Identification
instance_name=SIPU
instance_type=rdbms
# Default location for the flash recovery area
db_recovery_file_dest='/u02/app/oracle/flash_recovery_area'
# Hard limit on the total space to be used by recovery files created in the flash recovery area
db_recovery_file_dest_size=2147483648
# Cursors and Library Cache
open_cursors=300
nls_territory='INDIA'
# SGA Memory
#sga_target=1218445312
memory_target=1024m
memory_max_target=1200m
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=199229440
# Job Queues
job_queue_processes=10
# Processes and Sessions
processes=150
# Pools
java_pool_size=31457280
large_pool_size=52428800
shared_pool_size=157286400
streams_pool_size=35m
result_cache_max_size=2048K
# System Managed Undo and Rollback Segments
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
undo_retention=1800
# File Configuration
control_files='/u02/app/oracle/oradata/TRUCK/control01.ctl',
              '/u02/app/oracle/oradata/TRUCK/control02.ctl'
# Diagnostics and Statistics
diagnostic_dest='/u02/app/oracle'
audit_file_dest='/u02/app/oracle/admin/TRUCK/adump'
audit_trail='db'
# Optimizer
query_rewrite_enabled=TRUE
star_transformation_enabled=TRUE
timed_statistics=TRUE
optimizer_dynamic_sampling=2
optimizer_mode='ALL_ROWS'
skip_unusable_indexes=TRUE
plsql_warnings='DISABLE:ALL'
# new in 11gR1
ddl_lock_timeout=30
# Security and Auditing
remote_login_passwordfile=EXCLUSIVE
# MTS / Shared Server
dispatchers='(PROTOCOL=TCP) (SERVICE=SIPUXDB)'
shared_servers=2
max_shared_servers=4
local_listener='MYDB'
# Redo Log and Recovery
log_archive_dest_1='LOCATION=/u02/app/oracle/oradata/TRUCK/arc1 reopen=600 max_failure=10'
log_archive_dest_2='LOCATION=/u02/app/oracle/oradata/TRUCK/arc2 reopen=600 max_failure=10'
log_archive_dest_state_1=ENABLE
log_archive_dest_state_2=ENABLE
log_archive_format='arc_%t_%s_%r_%d.arc'
log_archive_max_processes=4
log_archive_min_succeed_dest=2
log_checkpoints_to_alert=true
fast_start_mttr_target=300
# Miscellaneous
compatible='11.2.0.0.0'
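Byte-valued parameters such as db_recovery_file_dest_size are awkward to edit by hand. A minimal sketch, assuming POSIX sh with 64-bit arithmetic; gb_to_bytes is a hypothetical helper for use when editing the pfile:

```shell
#!/bin/sh
# Convert gigabytes to bytes for init.ora size parameters.
gb_to_bytes() { echo $(($1 * 1024 * 1024 * 1024)); }

gb_to_bytes 2
```

A value of 2 GB comes out as 2147483648, matching the db_recovery_file_dest_size used in the sample pfile above.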

--------------------------------------------------------------------------------

sample init.ora file for an RDBMS ASM instance:

# Cache and I/O
db_block_size=8192
db_cache_size=339738624
db_file_multiblock_read_count=16
log_buffer=3129344
# Database Identification
db_name='BUS'
db_unique_name='BUS'
db_domain='myoracle.com'
# Instance Identification
instance_name=SAHU
instance_type=rdbms
# Default location for the flash recovery area
db_recovery_file_dest='+FRA'
# Hard limit on the total space to be used by recovery files created in the flash recovery area
db_recovery_file_dest_size=4070572032
# Cursors and Library Cache
open_cursors=300
nls_territory='INDIA'
memory_target=1024m
memory_max_target=1200m
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=339738624
# Job Queues
job_queue_processes=10
# Processes and Sessions
processes=150
# Pools
java_pool_size=31457280
large_pool_size=52428800
shared_pool_size=157286400
shared_pool_io_size=0
streams_pool_size=35m
result_cache_max_size=2048K
# System Managed Undo and Rollback Segments
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
undo_retention=1800
# File Configuration
control_files='+DATA/bus/controlfile/current.260.729807095','+FRA/bus/controlfile/current.256.729807097'
# Diagnostics and Statistics
diagnostic_dest='/u02/app/oracle'
audit_file_dest='/u02/app/oracle/admin/BUS/adump'
audit_trail='db'
db_create_file_dest='+DATA'
# Optimizer
query_rewrite_enabled=TRUE
star_transformation_enabled=TRUE
timed_statistics=TRUE
optimizer_dynamic_sampling=2
optimizer_mode='ALL_ROWS'
skip_unusable_indexes=TRUE
plsql_warnings='DISABLE:ALL'
# new in 11gR1
ddl_lock_timeout=30
# Security and Auditing
remote_login_passwordfile=EXCLUSIVE
# MTS / Shared Server
dispatchers='(PROTOCOL=TCP) (SERVICE=SAHUXDB)'
local_listener='BUS'
# Redo Log and Recovery
log_archive_dest_1='LOCATION=+ARCH'
log_archive_format='%t_%s_%r_SAHU.arch'
log_archive_dest_state_1=ENABLE
log_archive_max_processes=4
log_archive_min_succeed_dest=2
log_checkpoints_to_alert=true
fast_start_mttr_target=300
# Miscellaneous
compatible='11.2.0.0.0'
--------------------------------------------------------------------------------
password file creation:
orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=oracle entries=5 force=y
orapwd file=$ORACLE_HOME/dbs/orapwSIPU password=oracle entries=5 force=y
--------------------------------------------------------------------------------


--------------------------------------------------------------------------------
STEP 4:- network file configuration
cd $ORACLE_HOME/network/admin
vi listener.ora
--------------------------------------------------------------------------------
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.84)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.84)(PORT = 1521))
      )
    )
  )

LISTENER1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.84)(PORT = 1522))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.84)(PORT = 1522))
      )
    )
  )

DIRECT_HANDOFF_TTC_LISTENER = OFF

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ASM)
      (ORACLE_HOME = /u002/app/oracle/product/11.2.0/grid_1)
      (SID_NAME = +ASM)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = MYDEV.oracle.com)
      (ORACLE_HOME = /u003/app/oracle/product/11.2.0/db_1)
      (SID_NAME = SIPU)
    )
  )

SID_LIST_LISTENER1 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ASM)
      (ORACLE_HOME = /u002/app/oracle/product/11.2.0/grid_1)
      (SID_NAME = +ASM)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = MYDEV.oracle.com)
      (ORACLE_HOME = /u003/app/oracle/product/11.2.0/db_1)
      (SID_NAME = SIPU)
    )
  )

ADR_BASE_LISTENER = /u002/app/grid
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER1 = ON
--------------------------------------------------------------------------------
vi tnsnames.ora
--------------------------------------------------------------------------------
ASM1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.64)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.64)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.64)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.64)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ASM)
    )
  )

ASM2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.94)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.94)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.94)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.94)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ASM)
    )
  )

PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.64)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.64)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.64)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.64)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYPROD.oracle.com)
    )
  )

DEV =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.94)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.94)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.184.94)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.174.94)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDEV.oracle.com)
    )
  )
--------------------------------------------------------------------------------
vi sqlnet.ora
--------------------------------------------------------------------------------
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT, HOSTNAME)
---------------------------------------------------------------------------------

There are two files, listener.ora and endpoints_listener.ora. For each listener in the listener.ora file there is an entry ENABLE_GLOBAL_DYNAMIC_ENDPOINT_<LISTENER_NAME>=ON, and for each parameter with the "ON" value there is a corresponding entry in endpoints_listener.ora with the following format:
<LISTENER_NAME>_<NODE_NAME>=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCPS)(HOST=gridcluster01-vip)(PORT=60801))))
I even tried to comment out the line in endpoints_listener.ora, but strangely, when I tried to start the listener, Oracle Clusterware automatically backed up the endpoints_listener.ora file and added the entry back to the file. Oracle Clusterware is managing the file internally. Waiting for documentation to understand this; if anyone finds anything please share. Bug 8595653

The endpoints_listener.ora file is used to get the endpoints of the default listener when data files of Oracle Database 11g Release 1 or Oracle Database 10g Release 2 are created on a release 11.2 ASM disk group. However, when the listener is modified (such as changing a port number using Network Configuration Assistant), the endpoints_listener.ora file is not updated.
Workaround: if a default listener is modified, edit the endpoints_listener.ora file manually. The file is stored in the $GRID_HOME/network/admin/ directory.
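As a side note, the pairing described above (one ENABLE_GLOBAL_DYNAMIC_ENDPOINT_<NAME>=ON line per listener) can be extracted mechanically. This is a hypothetical sketch, not an Oracle tool: the file name and contents below are illustrative only.

```shell
# Hypothetical sketch: list the listener names flagged with dynamic endpoints
# in a listener.ora-style file. Demo file and contents are made up.
list_dynamic_endpoint_listeners() {
  # Print <NAME> from each ENABLE_GLOBAL_DYNAMIC_ENDPOINT_<NAME>=ON line
  sed -n 's/^ENABLE_GLOBAL_DYNAMIC_ENDPOINT_\([A-Za-z0-9_]*\) *= *ON$/\1/p' "$1"
}

# Demo against a throwaway file
cat > /tmp/listener_demo.ora <<'EOF'
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER1=ON
EOF
list_dynamic_endpoint_listeners /tmp/listener_demo.ora
```

Each name printed should have a matching <LISTENER_NAME>_<NODE_NAME> entry in endpoints_listener.ora.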

--------------------------------------------------------------------------------
STEP 5:- DISKGROUP CREATION (RAW DEVICES)
--------------------------------------------------------------------------------
DISK SIZE->50GB
DATA DISKGROUP:- (MIRRORING+STRIPING)

CREATE DISKGROUP DATA NORMAL REDUNDANCY
FAILGROUP DATA_FG1 DISK
'ORCL:DATA_FG1_DISK1' NAME DATA_FG1_DISK1 SIZE 30G,
'ORCL:DATA_FG1_DISK2' NAME DATA_FG1_DISK2 SIZE 30G
FAILGROUP DATA_FG2 DISK
'ORCL:DATA_FG2_DISK1' NAME DATA_FG2_DISK1 SIZE 30G,
'ORCL:DATA_FG2_DISK2' NAME DATA_FG2_DISK2 SIZE 30G
FAILGROUP DATA_FG3 DISK
'ORCL:DATA_FG3_DISK1' NAME DATA_FG3_DISK1 SIZE 30G,
'ORCL:DATA_FG3_DISK2' NAME DATA_FG3_DISK2 SIZE 30G

disks to be added later:-
'ORCL:DATA_FG1_DISK3' NAME DATA_FG1_DISK3 SIZE 30G
'ORCL:DATA_FG2_DISK3' NAME DATA_FG2_DISK3 SIZE 30G
'ORCL:DATA_FG3_DISK3' NAME DATA_FG3_DISK3 SIZE 30G

now convert from NORMAL to HIGH redundancy
--------------------------------------------------------------------------------
DISK SIZE->20GB
ARCH DISKGROUP:- (MIRRORING+STRIPING)

CREATE DISKGROUP ARCH NORMAL REDUNDANCY
FAILGROUP ARCH1_FG1 DISK
'ORCL:ARCH1_FG1_DISK1' NAME ARCH1_FG1_DISK1 SIZE 10G,
'ORCL:ARCH1_FG1_DISK2' NAME ARCH1_FG1_DISK2 SIZE 10G
FAILGROUP ARCH_FG2 DISK
'ORCL:ARCH1_FG2_DISK1' NAME ARCH1_FG2_DISK1 SIZE 10G,
'ORCL:ARCH1_FG2_DISK2' NAME ARCH1_FG2_DISK2 SIZE 10G
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'

CREATE DISKGROUP ARCH NORMAL REDUNDANCY
FAILGROUP ARCH2_FG1 DISK
'ORCL:ARCH2_FG1_DISK1' NAME ARCH2_FG1_DISK1 SIZE 10G,
'ORCL:ARCH2_FG1_DISK2' NAME ARCH2_FG1_DISK2 SIZE 10G
FAILGROUP ARCH_FG2 DISK
'ORCL:ARCH2_FG2_DISK1' NAME ARCH2_FG2_DISK1 SIZE 10G,
'ORCL:ARCH2_FG2_DISK2' NAME ARCH2_FG2_DISK2 SIZE 10G
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'
--------------------------------------------------------------------------------
DISK SIZE->20GB
FRA DISKGROUP:- (MIRRORING+STRIPING)

CREATE DISKGROUP FRA NORMAL REDUNDANCY
FAILGROUP FRA_FG1 DISK
'ORCL:FRA_FG1_DISK1' NAME FRA_FG1_DISK1 SIZE 10G,
'ORCL:FRA_FG1_DISK2' NAME FRA_FG1_DISK2 SIZE 10G
FAILGROUP FRA_FG2 DISK
'ORCL:FRA_FG2_DISK1' NAME FRA_FG2_DISK1 SIZE 10G,
'ORCL:FRA_FG2_DISK2' NAME FRA_FG2_DISK2 SIZE 10G
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'
--------------------------------------------------------------------------------
DISK SIZE->40GB
ACFSDB DISKGROUP:- (MIRRORING+STRIPING)

CREATE DISKGROUP ACFSDB NORMAL REDUNDANCY
FAILGROUP ACFSDB_FG1 DISK
'ORCL:ACFSDB_FG1_DISK1' NAME ACFSDB_FG1_DISK1 SIZE 20G
FAILGROUP ACFSDB_FG2 DISK
'ORCL:ACFSDB_FG2_DISK1' NAME ACFSDB_FG2_DISK1 SIZE 20G
ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'
--------------------------------------------------------------------------------
as root user:-

acfs mount steps:-
# confirm that the proper ACFS (ASM cluster file system) and ADVM (ASM dynamic volume manager) modules are now loaded
# lsmod | grep oracle
# /u01/app/oracle/product/11.2.0.1.0/grid_1/bin/acfsload start -s   (to start the ACFS volume ADVM driver)

(To start acfsload at boot-up time):-
# vi /etc/init.d/acfsload
# view /etc/init.d/acfsload
#!/bin/sh
#chkconfig: 2345 30 31
#description: Automatically load Oracle 11gR2 ACFS drivers during system reboot
/u01/app/oracle/product/11.2.0.1.0/grid_1/bin/acfsload start -s

# chmod u+x /etc/init.d/acfsload
# chmod 775 /etc/init.d/acfsload
# chkconfig --add acfsload
# chkconfig --list acfsload
acfsload   0:off 1:off 2:on 3:on 4:on 5:on 6:off

# ls -ltr /dev/asm/
total 0
brwxrwx--- 1 root asmadmin 252, 122881 Sep 22 11:22 acfs11gr2-240

as grid user:-
asmca -silent -createVolume -volumeName ACFS11gR2 -volumeDiskGroup ACFSDB -volumeSizeGB 100

Note:- after setting the compatibility to a higher value you can't reduce it to a lower version
SQL>ALTER DISKGROUP ACFSDB SET ATTRIBUTE 'compatible.asm'='11.2.0.0.0';
SQL>ALTER DISKGROUP ACFSDB SET ATTRIBUTE 'compatible.rdbms'='11.2.0.0.0';
SQL>ALTER DISKGROUP ACFSDB SET ATTRIBUTE 'compatible.advm'='11.2.0.0.0';

as root user:-
# mkdir /myacfs   (mount point for acfs)
# ls -ld /myacfs/
# mkfs.acfs /dev/asm/acfs11gr2-240   (format the volume with the ACFS filesystem)
# mount -t acfs /dev/asm/acfs11gr2-240 /myacfs && df -h
# umount /myacfs
# umount /dev/asm/acfs11gr2-240
# /sbin/mount.acfs -o all
# /bin/umount -t acfs -a
# chown -R oracle:oinstall /myacfs
# ls -ld /myacfs/

Entry in /etc/fstab:-
/dev/asm/acfs11gr2-240  /myacfs  acfs  defaults  0 0

# acfsutil registry -l
# acfsutil info fs
# mount
# mount -v
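The fstab entry above follows the standard six-field layout (device, mount point, fstype, options, dump, fsck pass). A quick sanity check on the field count, as a sketch with an illustrative line:

```shell
# Sanity check on an /etc/fstab line for the ACFS mount (sketch): a valid
# entry has six whitespace-separated fields.
line="/dev/asm/acfs11gr2-240 /myacfs acfs defaults 0 0"
set -- $line                      # split into positional parameters
echo "$# fields: fstype=$3"
```

If the count is not 6 or the fstype is not acfs, the entry will not mount correctly at boot.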

--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
STEP 7:- ASM database creation
--------------------------------------------------------------------------------
CREATE DATABASE TRUCK
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE
GROUP 1 ('+DATA/') SIZE 50m,
GROUP 2 ('+DATA/') SIZE 50m,
GROUP 3 ('+DATA/') SIZE 50m
ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 5
MAXLOGHISTORY 10
MAXDATAFILES 100
MAXINSTANCES 8
DATAFILE
'+DATA/' SIZE 150m REUSE,
'+DATA/' SIZE 150m REUSE,
'+DATA/' SIZE 150m REUSE
SYSAUX DATAFILE
'+DATA/' SIZE 150m REUSE,
'+DATA/' SIZE 150m REUSE,
'+DATA/' SIZE 150m REUSE
DEFAULT TABLESPACE userdata DATAFILE
'+DATA/' SIZE 80m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G,
'+DATA/' SIZE 80m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G
LOGGING
BLOCKSIZE 8k
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512k
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT ONLINE
DEFAULT TEMPORARY TABLESPACE temptbs TEMPFILE
'+DATA/' SIZE 50m REUSE,
'+DATA/' SIZE 50m REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
UNDO TABLESPACE undotbs DATAFILE
'+DATA/' SIZE 50m REUSE,
'+DATA/' SIZE 50m REUSE
EXTENT MANAGEMENT LOCAL
SET TIME_ZONE ='ASIA/CALCUTTA'
CHARACTER SET WE8ISO8859P1
--------------------------------------------------------------------------------
STEP 7:- NON-ASM database creation
--------------------------------------------------------------------------------
CREATE DATABASE BUS
CONTROLFILE REUSE
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle

LOGFILE
GROUP 1 ('/u008/oradata/BUS/redo1_a.log','/u008/oradata/BUS/redo1_b.log') SIZE 50m,
GROUP 2 ('/u008/oradata/BUS/redo2_a.log','/u008/oradata/BUS/redo2_b.log') SIZE 50m
ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 5
MAXLOGHISTORY 10
MAXDATAFILES 100
MAXINSTANCES 8
DATAFILE
'/u008/oradata/BUS/system01.dbf' SIZE 250m REUSE,
'/u008/oradata/BUS/system02.dbf' SIZE 250m REUSE
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE
'/u008/oradata/BUS/sysaux01.dbf' SIZE 150m REUSE,
'/u008/oradata/BUS/sysaux02.dbf' SIZE 150m REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TABLESPACE userdata DATAFILE
'/u008/oradata/BUS/userdata01.dbf' SIZE 100m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G,
'/u008/oradata/BUS/userdata02.dbf' SIZE 100m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G
LOGGING
BLOCKSIZE 8k
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT ONLINE
DEFAULT TEMPORARY TABLESPACE temptbs TEMPFILE
'/u008/oradata/BUS/temptbs01.tmp' SIZE 50m REUSE,
'/u008/oradata/BUS/temptbs02.tmp' SIZE 50m REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
UNDO TABLESPACE undotbs01 DATAFILE
'/u008/oradata/BUS/undotbs01.dbf' SIZE 100m REUSE,
'/u008/oradata/BUS/undotbs02.dbf' SIZE 100m REUSE
EXTENT MANAGEMENT LOCAL
SET TIME_ZONE ='ASIA/CALCUTTA'
CHARACTER SET UTF8
--------------------------------------------------------------------------------
STEP 7:- RECOVERY CATALOG-ASM database creation
--------------------------------------------------------------------------------
CREATE DATABASE RVCAT
USER SYS IDENTIFIED BY oracle
USER SYSTEM IDENTIFIED BY oracle
LOGFILE
GROUP 1 ('/u01/app/oracle/oradata/RVCAT/redo1_1.rdo') SIZE 50m,
GROUP 2 ('/u01/app/oracle/oradata/RVCAT/redo2_1.rdo') SIZE 50m,
GROUP 3 ('/u01/app/oracle/oradata/RVCAT/redo3_1.rdo') SIZE 50m,
GROUP 4 ('/u01/app/oracle/oradata/RVCAT/redo4_1.rdo') SIZE 50m
ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 5
MAXLOGHISTORY 10
MAXDATAFILES 100
MAXINSTANCES 8
DATAFILE
'/u01/app/oracle/oradata/RVCAT/system01.dbf' SIZE 100m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G,
'/u01/app/oracle/oradata/RVCAT/system02.dbf' SIZE 100m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G
SYSAUX DATAFILE
'/u01/app/oracle/oradata/RVCAT/sysaux01.dbf' SIZE 80m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G,
'/u01/app/oracle/oradata/RVCAT/sysaux02.dbf' SIZE 80m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G
DEFAULT TABLESPACE userdata DATAFILE
'/u01/app/oracle/oradata/RVCAT/userdata01.dbf' SIZE 50m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G,
'/u01/app/oracle/oradata/RVCAT/userdata02.dbf' SIZE 50m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G
LOGGING
BLOCKSIZE 8k
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512k
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT ONLINE

DEFAULT TEMPORARY TABLESPACE temptbs TEMPFILE
'/u01/app/oracle/oradata/RVCAT/temptbs01.tmp' SIZE 50m REUSE,
'/u01/app/oracle/oradata/RVCAT/temptbs02.tmp' SIZE 50m REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
UNDO TABLESPACE undotbs DATAFILE
'/u01/app/oracle/oradata/RVCAT/undotbs01.dbf' SIZE 50m REUSE,
'/u01/app/oracle/oradata/RVCAT/undotbs02.dbf' SIZE 50m REUSE
EXTENT MANAGEMENT LOCAL
SET TIME_ZONE ='ASIA/CALCUTTA'
CHARACTER SET UTF8
--------------------------------------------------------------------------------
RECOVERY CATALOG-ASM RMAN tablespace and user creation
--------------------------------------------------------------------------------
CREATE TABLESPACE RMAN DATAFILE
'/u01/app/oracle/oradata/RVCAT1/rman01.dbf' SIZE 50m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G,
'/u01/app/oracle/oradata/RVCAT1/rman02.dbf' SIZE 50m REUSE AUTOEXTEND ON NEXT 20m MAXSIZE 1G
LOGGING
BLOCKSIZE 8k
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT ONLINE

create user rman identified by rman
default tablespace rman
temporary tablespace temptbs
quota unlimited on rman
account unlock;
grant connect,resource,recovery_catalog_owner to rman;

rman target sys/oracle@bus catalog rman/rman@rvcat
create catalog tablespace "RMAN";
register database;
show all;

backup as backupset current controlfile
format '/orabackup/rman1_backup/BUS/BUSDBCTRL_%U' TAG='CURRENT CONTROLFILE BACKUP';
backup as compressed backupset current controlfile
format '/orabackup/rman1_backup/BUS/BUSDBCTRL_%U' TAG='CURRENT CONTROLFILE BACKUP';
backup as copy current controlfile

format '/orabackup/rman1_backup/BUS/BUSDBCTRL_%U' TAG='CURRENT CONTROLFILE BACKUP';
(the above command doesn't show information in the recovery catalog)

backup as backupset full database format '/orabackup/rman1_backup/BUS/BUSDB_%U'
plus archivelog format '/orabackup/rman1_backup/BUS/BUSARCH_%U' TAG='FULL DATABASE BACKUP';
backup as compressed backupset full database format '/orabackup/rman1_backup/BUS/BUSDB_%U'
plus archivelog format '/orabackup/rman1_backup/BUS/BUSARCH_%U' TAG='FULL DATABASE BACKUP';
backup as copy full database format '/orabackup/rman1_backup/BUS/BUSDB_%U'
plus archivelog format '/orabackup/rman1_backup/BUS/BUSARCH_%U' TAG='FULL DATABASE BACKUP';

RECOVERY CATALOG UPGRADE:-
If you use a version of the recovery catalog schema that is older than that required by the RMAN client, then you must upgrade it. The compatibility matrix in Oracle Database Backup and Recovery Reference explains which schema versions are compatible with which versions of RMAN. For example, you must upgrade the catalog if you use an Oracle Database 11g RMAN client with a release 10.2 version of the recovery catalog schema.
Note that the Oracle Database 10gR1 version of the recovery catalog schema requires the CREATE TYPE privilege. If you created the recovery catalog owner in a release before 10gR1, and if you granted the RECOVERY_CATALOG_OWNER role when it did not include the CREATE TYPE privilege, then you must grant CREATE TYPE to this user explicitly before upgrading the catalog.
You receive an error when issuing UPGRADE CATALOG if the recovery catalog is already at a version greater than that required by the RMAN client. RMAN permits the UPGRADE CATALOG command to be run if the recovery catalog is current and does not require upgrading, however, so that you can re-create packages at any time if necessary. Check the message log for error messages generated during the upgrade.
SELECT * FROM rcver;

If the table displays multiple rows, then the highest version in the RCVER table is the current catalog schema version. The table stores only the major version numbers and not the patch numbers. For example, assume that the rcver table displays the following rows:

VERSION
------------
08.01.07
09.00.01
10.02.00

These rows indicate that the catalog was created with a release 8.1.7 executable, then upgraded to release 9.0.1, and finally upgraded to release 10.2.0. The current version of the catalog schema is 10.2.0.

ORA-19571: archived log RECID 88 STAMP 730939665 not found in control file
--------------------------------------------------------------------------------
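The "highest version in RCVER wins" rule above can be automated with a version-aware sort. A small sketch (the version strings mirror the example rows; `sort -V` is the GNU coreutils version sort):

```shell
# Pick the current catalog schema version from a set of rcver rows (sketch).
versions="08.01.07
09.00.01
10.02.00"
current=$(printf '%s\n' "$versions" | sort -V | tail -1)
echo "$current"
```

Plain lexical sort would also work for these zero-padded rows, but -V stays correct if the padding ever differs.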

-Note:-
ORA-03214: File Size specified is smaller than minimum required
The size of a datafile must always be above the size specified in the UNIFORM SIZE clause. In 9i, when you create a tablespace, it is by default created as a Locally Managed Tablespace with a minimum extent size of 64K if the extent management is system managed. Therefore, you must allocate datafiles of at least 64 Kbytes + 3 blocks for the bitmap blocks. In solution 1, the datafile size could not be set less than 64K + (3*8K) = 88K. For Oracle9i, if you don't specify an extent size, the minimum size for a datafile is 88K.
--------------------------------------------------------------------------------
alter system set db_recovery_file_dest_size=4G scope=both;
alter system set db_recovery_file_dest='+FRA/' scope=both;

catalog.sql - creates data dictionary views
@?/rdbms/admin/catalog.sql
catproc.sql - creates built-in PL/SQL procedures, packages etc.
@?/rdbms/admin/catproc.sql
utlrp.sql - to compile invalid objects
@?/rdbms/admin/utlrp.sql
utlirp.sql - to recompile invalid PL/SQL modules
@?/rdbms/admin/utlirp.sql

check whether any invalid objects remain or not:
SQL>select COUNT(*) "ERRORS DURING RECOMPILATION" from utl_recomp_errors;
select * from dba_registry;

conn system/oracle
@?/sqlplus/admin/pupbld.sql
show parameter service
--------------------------------------------------------------------------------
ALTER DISKGROUP ALL MOUNT;
alter system reset asm_diskgroups sid='1';

Run some queries on the ASM instance:-

SQL>show parameter diskgroups
SQL>show parameter diskstring

set linesize 200
column NAME format a10
column COMPATIBILITY format a25
column DATABASE_COMPATIBILITY format a10
SQL>select name,total_mb,COMPATIBILITY,DATABASE_COMPATIBILITY from v$asm_diskgroup;

set linesize 200
column NAME format a20
column PATH format a22
column FAILGROUP format a15
column REDUNDANCY format a8
SQL>select failgroup,name,path,mount_status,redundancy,total_mb,free_mb from v$asm_disk order by failgroup;
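When reading total_mb/free_mb from queries like the one above, remember that usable capacity in a mirrored disk group is roughly the raw capacity divided by the mirror count. A rough sketch of that arithmetic (it ignores the metadata and rebalance headroom real ASM reserves, so treat it as an estimate only):

```shell
# Rough usable-capacity estimate per ASM redundancy level (sketch).
asm_usable_gb() {  # args: raw_gb redundancy(external|normal|high)
  case "$2" in
    external) echo "$1" ;;            # no mirroring
    normal)   echo $(( $1 / 2 )) ;;   # two-way mirror
    high)     echo $(( $1 / 3 )) ;;   # three-way mirror
  esac
}

# The DATA group from step 5 has six 30G disks = 180G raw
asm_usable_gb 180 normal
asm_usable_gb 180 high
```

So converting the DATA group from NORMAL to HIGH redundancy drops the estimated usable space from about 90G to about 60G.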

ASM disk query:-
set pages 9999
set lines 200
column path format a22
column name format a15
column redundancy format a10
column state format a10

SQL> select dg.name, d.path, d.total_mb, d.free_mb,
       round((d.total_mb-d.free_mb)*100/d.total_mb,0) "% Used",
       d.disk_number, dg.type redundancy, dg.state
     from v$asm_disk d, v$asm_diskgroup dg
     where d.group_number=dg.group_number
     order by dg.name, d.disk_number;
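The "% Used" expression in the query above is plain integer arithmetic on total_mb and free_mb. A quick sketch of the same calculation outside SQL (the sizes are illustrative, not from a real disk group):

```shell
# The "% Used" arithmetic from the v$asm_disk query, as shell integer math
# (sketch; truncates like round-toward-zero rather than SQL's round()).
pct_used() {  # args: total_mb free_mb
  echo $(( ( ($1 - $2) * 100 ) / $1 ))
}
pct_used 30720 10240    # 30G disk with 10G free
```

Handy for eyeballing df-style output against what the ASM views report.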

Query the V$ASM_CLIENT fixed view in an ASM instance, or use asmcmd lsof to list active clients.
set lines 200 pages 200
column GROUP_NUMBER format a20
column INSTANCE_NAME format a14
column SOFTWARE_VERSION format a20
column COMPATIBLE_VERSION format a10

column GROUP_NUMBER clear
SQL>select * from V$ASM_CLIENT;
--------------------------------------------------------------------------------
OEM repository creation:-
--------------------------------------------------------------------------------
emca -config dbcontrol db -repos create

if errors happen then execute the SQL commands below as sys:
drop user sysman cascade;
drop role MGMT_USER;
drop user MGMT_VIEW cascade;
drop public synonym MGMT_TARGET_BLACKOUTS;
drop public synonym SETEMVIEWUSERCONTEXT;

emca -deconfig dbcontrol db -repos drop
emca -config dbcontrol db -repos recreate
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
ORACLE_HOME PATCHING:-
--------------------------------------------------------------------------------
SQL>select * from v$version;
10.2.0.1.0
10 -> Major Database Release number
2  -> Database Maintenance Release number
0  -> Application Server Release number
1  -> Component Specific Release number
0  -> Platform Specific Release number

echo $ORACLE_HOME
which opatch
$ORACLE_HOME/OPatch/opatch
In 10g opatch is available by default.
opatch version
opatch lsinventory (to check pre-installed patches for a particular ORACLE_HOME)
opatch lsinventory -oh $ORACLE_HOME
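The five-position version breakdown above is just a dotted string split. A small sketch:

```shell
# Split an Oracle version string into its five positions (sketch).
ver="10.2.0.1.0"
IFS=. read -r major maint appsrv comp platform <<EOF
$ver
EOF
echo "major=$major maint=$maint component=$comp"
```

The same split works for any 5-part version string reported by v$version.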

unzip -qn p6810189_10204_Linux-x86
p6810189_10204_Linux-x86 -----> p<7-digit patch number>_<ORACLE-VERSION>_<OS-PLATFORM>

shut down the database instance gracefully before applying the patch
fuser $ORACLE_HOME (to check whether any process is using the Oracle Home binaries)
opatch apply
opatch apply -oh $ORACLE_HOME (re-linking of oracle binaries is done)
opatch lsinventory (check for confirmation)
opatch creates a log file to track changes
opatch rollback -id 6810189 (to remove a patch)

check for inventory location:-
-keeps track of all the ORACLE_HOMEs installed on a particular node.
-location -> /etc/oraInst.loc or /u01/app/oraInventory/oraInst.loc (this is the inventory location which was asked for during Oracle 10gR2 installation)
-contains information about the inventory owner group and the inventory location.
-any OS user who is a member of the inventory group can apply the patch.
-the ORACLE_HOME must be registered in the inventory; only then can patches be applied.
-vi /u01/app/oraInventory/ContentsXML/
 the above file keeps track of the ORACLE_HOME locations, and your ORACLE_HOME should be there to apply the patch.
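The p<number>_<version>_<platform> naming convention above can be picked apart with plain parameter expansion. A sketch (the function name is mine, not an Oracle utility):

```shell
# Parse the patch file naming convention p<number>_<version>_<platform> (sketch).
parse_patch_name() {  # arg: e.g. p6810189_10204_Linux-x86
  name=${1#p}               # strip the leading "p"
  patch_num=${name%%_*}     # up to the first underscore
  rest=${name#*_}
  ora_ver=${rest%%_*}       # between first and second underscore
  platform=${rest#*_}       # everything after the second underscore
  echo "$patch_num $ora_ver $platform"
}
parse_patch_name p6810189_10204_Linux-x86
```

The first field is the same id you would pass to opatch rollback -id.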

--------------------------------------------------------------------------------
Remote display capture:-
--------------------------------------------------------------------------------
echo $HOME
echo $HOSTNAME
echo $HOSTTYPE

X-Server :- VNC (Virtual Network Computing) server
X-Client :- VNC viewer

A VNC client is used to connect to a remote computer running the VNC server. Keyboard and mouse commands are sent from your client machine to the server. There are a few important things to note: you must open port 5900 on the server for this to work, since by default the Gnome Remote Desktop (called vino) listens on this port; also, the person connecting will see the same session that you are currently logged in as. This means that any programs you have open will also be visible to the client, which of course is very useful if you are helping someone remotely.

rpm -qa|grep vnc
chkconfig --list vncserver
chkconfig --level 35 vncserver on

service vncserver status
service vncserver start
service vncserver stop
service vncserver restart

/etc/init.d/vncserver status
/etc/init.d/vncserver start
/etc/init.d/vncserver stop
/etc/init.d/vncserver restart

vi $HOME/.vnc/xstartup OR vi ~/.vnc/xstartup

(for fixed fonts for xterm)
-fn *-fixed-*-*-*-20-*

uncomment for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc

we need to add at least 1 VNC user; open the file /etc/sysconfig/vncservers as root
vi /etc/sysconfig/vncservers
VNCSERVERS="2:root 3:grid 4:oracle"
VNCSERVERARGS[2]="-geometry 1152x864 -depth 16"
VNCSERVERARGS[3]="-geometry 1152x864 -depth 16"
VNCSERVERARGS[4]="-geometry 1152x864 -depth 16"

ssh -X root@192.168.184.31
vncviewer :2

In this case we need to use port 5900+2000 so port 7900.

vncserver :1
this will start up a VNC server that binds and listens on ports 5801, 5901, and 6001 respectively
vncserver :2
this will start up a VNC server that binds and listens on ports 5802, 5902, and 6002 respectively

vncpasswd
This creates a hidden folder called .vnc in your home folder containing the password file.

vi /etc/sysconfig/iptables
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5901 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5902 -j ACCEPT

netstat -tulpn | less
iptables -L -n
/sbin/service iptables restart

Now flush the iptables:-
iptables -F

You will require a password to access your desktops.
Password:
Verify:

ps -ef | grep -i vnc
vncserver -geometry 1152x864 -depth 16 :4
vncserver -kill :1
vncserver -kill :2
--------------------------------------------------------------------------------
for root network administration:-
ifconfig eth0 down
ifconfig eth0 up
--------------------------------------------------------------------------------
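The display-to-port mapping used above is a fixed offset scheme: display :N listens on 5800+N (Java viewer), 5900+N (RFB protocol), and 6000+N (X11). A one-liner sketch of that arithmetic:

```shell
# Map a VNC display number to its three listening ports (sketch):
# 5800+N (Java/HTTP viewer), 5900+N (RFB), 6000+N (X11).
vnc_ports() { echo "$((5800 + $1)) $((5900 + $1)) $((6000 + $1))"; }
vnc_ports 2
```

This is why the iptables rules above open 5901 and 5902: they correspond to displays :1 and :2.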

--------------------------------------------------------------------------------
some grid (ASM instance) + oracle database (RDBMS instance) related commands:-
--------------------------------------------------------------------------------
ps -ef | grep reboot
ps -ef | grep -i grid
pgrep -lf SIPU (to see the SIPU instance processes)
pgrep -lf ASM (to see the +ASM instance processes)
pgrep -lf tns (to see the listener processes)

As grid user:-
crsctl start resource -all
crsctl stop resource -all
crsctl status resource
crsctl status resource -t
crsctl status resource -p | more
crsctl start crs -all
crsctl stop crs -all
crsctl stat res -t
crsctl start resource [resource-name]
crsctl stop resource [resource-name]
crsctl stop resource [resource-name] -f (to forcefully shut down)
crsctl status resource [resource-name]
crsctl getperm resource [resource-name]

crs_stat -ls
crs_stat -ls [resource-name]
crs_stop

crsctl modify resource "ora.cssd" -attr "AUTO_START=1"
crsctl modify resource "ora.diskmon" -attr "AUTO_START=1"
crsctl modify resource "ora.evmd" -attr "AUTO_START=1"
crsctl modify resource "ora.ons" -attr "AUTO_START=1"
crsctl modify resource ora.LISTENER.lsnr -attr AUTO_START=1
crsctl modify resource ora.LISTENER1.lsnr -attr AUTO_START=1
crsctl modify resource ora.asm -attr AUTO_START=1
crsctl modify resource ora.SPFILE.dg -attr AUTO_START=1
crsctl modify resource ora.DATA.dg -attr AUTO_START=1
crsctl modify resource ora.ARCH.dg -attr AUTO_START=1

crsctl modify resource ora.FRA.dg -attr AUTO_START=1
crsctl modify resource ora.prod.db -attr AUTO_START=always
srvctl enable nodeapps -g
/u002/app/oracle/product/10.2.0/crs/bin/oifcfg iflist -p -n
crsctl check cluster -all (only in Oracle 11gR2 RAC)
srvctl start nodeapps
cemutlo -n -w (command to check the name of the cluster)
you cannot change the name of a cluster; the only way to do that is to reinstall the clusterware, and you can then name it whatever you want.

srvctl status scan_listener
srvctl config scan_listener
srvctl status scan
srvctl config scan

pgrep -lf inherit
pgrep -lf d.bin

To upgrade Oracle Clusterware release 10.2 to Oracle Clusterware release 11g, you must first apply the 10.2.0.3 or later patch set.

oifcfg iflist -p -n
select name,ip_address from v$cluster_interconnects;
select * from gv$cluster_interconnects;

lists all the background processes initially started by Grid Infrastructure after a fresh installation:-
crsctl status resource -init -t

(for example, auto start of a resource will only happen for the grid user, and the grid user cannot start the RDBMS instance)
check these flags below:-
AUTO_START=restore
AUTO_START=never
AUTO_START=1

crsctl help
crsctl check css (Cluster Synchronization Services)
crsctl check has (Oracle High Availability Services)

crsctl config has (Displays the Oracle Restart configuration)
crsctl disable has (Disables automatic restart of Oracle Restart)
crsctl enable has (Enables automatic restart of Oracle Restart)
crsctl start has (Starts Oracle Restart)
crsctl stop has (Stops Oracle Restart -> stops resources managed by Oracle Restart gracefully)
crsctl stop has -f (Stops Oracle Restart -> stops resources managed by Oracle Restart forcefully)

(As oracle user)
srvctl start database -d TRUCK
srvctl stop database -d TRUCK
srvctl remove database -d TRUCK
srvctl status database -d TRUCK
srvctl enable database -d TRUCK
srvctl disable database -d TRUCK
srvctl config database -d TRUCK
srvctl config database -d TRUCK -a
srvctl modify database -d BUS -a SPFILE,DATA,ARCH,FRA

srvctl add service -d BUS -s ora.diskmon -l PRIMARY srvctl remove service -d BUS -s ora.diskmon

(to add spfile to ASM instance) srvctl add asm -p /tmp/init+ASM.ora

(to add spfile to database)
create spfile='+SPFILE' from pfile;
srvctl modify database -d TRUCK -p $ORACLE_HOME/dbs/spfileSIPU.ora
srvctl modify database -d TRUCK -p +SPFILE/ASM/ASMPARAMETERFILE/REGISTRY.253.729805017

(to start a database using parameter file from non-default location) srvctl modify database -d TRUCK -p $ORACLE_HOME/dbs/initSIPU.ora -s open srvctl modify database -d TRUCK -p $ORACLE_HOME/dbs/spfileSIPU.ora -s open (how to register rdbms instance in oracle restart)

srvctl add database -d TRUCK -o $ORACLE_HOME -p $ORACLE_HOME/dbs/spfileSIPU.ora
srvctl add database -d BUS -o $ORACLE_HOME -p +SPFILE/ASM/ASMPARAMETERFILE/REGISTRY.253.729805017

(listener registration with oracle restart)
srvctl add listener -l LISTENER -o $ORACLE_HOME -p "1521"
srvctl add listener -l LISTENER1 -o $ORACLE_HOME -p "1522"

(to skip the port checking)
srvctl add listener -l LISTENER -s -o $ORACLE_HOME
srvctl add listener -l LISTENER1 -s -o $ORACLE_HOME

(adding asm instance to oracle restart)
manually adding the listener as well as diskgroups to an asm instance:
srvctl add asm -l LISTENER -p '+SPFILE' -d +DATA,+ARCH,+FRA,+ACFSDB,+SPFILE
srvctl modify asm -l LISTENER1
srvctl add asm -l LISTENER
use the above command to use a spfile from within a diskgroup

srvctl remove listener -l LISTENER
srvctl remove listener -l LISTENER1
srvctl remove listener -l LISTENER2

srvctl status listener -l LISTENER
srvctl status listener -l LISTENER1
srvctl status listener -l LISTENER2

srvctl config listener -l LISTENER
srvctl config listener -l LISTENER1
srvctl config listener -l LISTENER2

(while adding the above listeners, the listener should be stopped first; then execute the command)

(to check whether the ports are in use or not)
netstat -an | grep 1521
netstat -an | grep 1522
netstat -an | grep 2484
netstat -an | grep 2485

(Important log file locations in Oracle Restart Environment are) cat /u01/app/oracle/product/11.2.0.1.0/grid_1/log/<hostname>/ohasd/ohasd.log --------------------------------------------------------------------------------

-Hi, it seems that you are covering Oracle Bug 3458327. Search in Metalink for further descriptions. The solution is to modify /etc/inittab.
remove the last line from /etc/inittab added by Oracle:
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
and insert this line so the order becomes:
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
--------------------------------------------------------------------------------
ORA-00845: MEMORY_TARGET not supported on this system
-Starting from Oracle 11g, the automatic memory management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
-On Linux, the shared memory filesystem needs to be mounted on the /dev/shm directory of the operating system.
-And the size of /dev/shm needs to be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.
-AMM (Automatic Memory Management) in 11g manages both SGA and PGA together via the MMAN process.
-MMAN (Memory Manager) is a background process that manages the dynamic resizing of the SGA+PGA memory areas as the workload increases or decreases. This process was introduced in Oracle 10g.
-The MEMORY_MAX_TARGET parameter in 11g replaces the SGA_MAX_SIZE parameter from 10g.
-SGA_TARGET -> MEMORY_TARGET (dynamic initialization parameter)
-SGA_MAX_SIZE -> MEMORY_MAX_TARGET (static initialization parameter)
-The ORA-00845 can arise for the following two reasons on a Linux system:
1) If the shared memory which is mapped to the /dev/shm directory is less than the size of MEMORY_TARGET or MEMORY_MAX_TARGET.
OR
2) If the shared memory is not mapped to the /dev/shm directory.
-Make sure /dev/shm is properly mounted. You can see it with df -h

-As a root user, mount -t tmpfs shmfs -o size=4g /dev/shm
-In order to make the setting persistent so that it survives a machine restart, add an entry in /etc/fstab similar to the following:
shmfs  /dev/shm  tmpfs  size=4g  0 0
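The sizing rule behind ORA-00845 (the tmpfs must be at least as large as MEMORY_TARGET) can be checked with a little unit conversion. A sketch with illustrative sizes, not a check against a live system:

```shell
# Sketch of the /dev/shm vs MEMORY_TARGET comparison behind ORA-00845.
# Converts size strings like 4g / 1536m to MB; sizes below are made up.
to_mb() {
  n=${1%?}                         # strip the trailing unit letter
  case "$1" in
    *g) echo $(( n * 1024 )) ;;
    *m) echo "$n" ;;
  esac
}
shm_mb=$(to_mb 4g)
target_mb=$(to_mb 1536m)
if [ "$shm_mb" -ge "$target_mb" ]; then
  echo "MEMORY_TARGET fits in /dev/shm"
else
  echo "ORA-00845 risk: enlarge /dev/shm"
fi
```

On a real box you would take shm_mb from df -m /dev/shm and target_mb from the spfile.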

How to increase /dev/shm?
umount /dev/shm
mount -t tmpfs shmfs -o size=<xx>m /dev/shm
--------------------------------------------------------------------------------
DATABASE CHARACTER SET
NATIONAL CHARACTER SET

GSD and ONS are daemons. ONS (Oracle Notification Service) was introduced in version 10.1 of Oracle.

GSD:- This daemon interacts with SRVCTL, DBCA and Enterprise Manager and performs various manageability operations on the nodes. You must start GSD on all the nodes before running commands through SRVCTL. You need only one GSD on each node.
On Unix you can find the GSD log at $ORACLE_HOME/srvm/log/gsdaemon.log
You can start GSD with the following command:
GSDCTL -start / -install / -remove

ONS:- Oracle Notification Service was introduced in 10.1. This component handles the interaction of applications; for example, when one node fails, this daemon notifies the application to fail over to another node, etc.
Command for ONS:
onsctl start/stop/reconfigure/help/detailed/ping

How to open a port in Linux:-
vi /etc/sysconfig/iptables
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 1521 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 1522 -j ACCEPT
To make the changes take effect:-

service iptables restart

to check if the ports are open or not:-
iptables -L -n
/etc/init.d/iptables status
service iptables status
chkconfig --list iptables
iptables   0:off 1:off 2:on 3:on 4:on 5:on 6:off
chkconfig iptables off

general listener:-
SQL>show parameter service
SQL>alter system set db_unique_name='SAHU_BUS' scope=spfile;

Upgrading Grid Infrastructure home to 11.2.0.2

Ensure the following environment variables are not set: ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, TNS_ADMIN
# echo $ORA_CRS_HOME; echo $ORACLE_HOME; echo $ORA_NLS10; echo $TNS_ADMIN
Starting with 11.2.0.2, the Grid Infrastructure (Clusterware and ASM home) upgrade is an out-of-place upgrade, i.e. we install into a new ORACLE_HOME. Unlike the database home, we cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM into the existing homes.
Unset the following variables too:
$ unset ORACLE_BASE
$ unset ORACLE_HOME
$ unset ORACLE_SID
--------------------------------------------------------------------------------
LVM steps:
--------------------------------------------------------------------------------
step 1:- create primary raw partitions of full disk size: physical volumes (PV)
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd

step 2:-
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
pvdisplay
pvscan

step 3:-
vgcreate racvg /dev/sdb1 /dev/sdc1 /dev/sdd1
vgdisplay
vgscan

step 4:-
--------------------------------------------------------------------------------
all the LUNs have been attached to a single target:

(oracle11gr2->OCFS OCR, VOTING DISK, DATABASE FILES)
lvcreate -L 1g -n rawocrdata1 racvg
lvcreate -L 1g -n rawocrdata2 racvg
lvcreate -L 1g -n rawocrdata3 racvg
lvcreate -L 2g -n rawvotedata1 racvg
lvcreate -L 2g -n rawvotedata2 racvg
lvcreate -L 2g -n rawvotedata3 racvg
lvcreate -L 1g -n ocfsocrdata1 racvg
lvcreate -L 1g -n ocfsocrdata2 racvg
lvcreate -L 1g -n ocfsocrdata3 racvg
lvcreate -L 2g -n ocfsvotedata1 racvg
lvcreate -L 2g -n ocfsvotedata2 racvg
lvcreate -L 2g -n ocfsvotedata3 racvg
lvcreate -L 100g -n ocfsoradata racvg

(oracle11gr2->SPFILE DISK ASM)->NORMAL REDUNDANCY(RAID 1 ONLY)
lvcreate -L 1024mb -n spfile_fg1_disk1 racvg
lvcreate -L 1024mb -n spfile_fg2_disk1 racvg

(oracle11gr2->OCR ASM)->NORMAL REDUNDANCY(RAID 01)
lvcreate -L 1024mb -n ocr_fg1_disk1 racvg
lvcreate -L 1024mb -n ocr_fg1_disk2 racvg
lvcreate -L 1024mb -n ocr_fg2_disk1 racvg
lvcreate -L 1024mb -n ocr_fg2_disk2 racvg
lvcreate -L 1024mb -n ocr_fg3_disk1 racvg
lvcreate -L 1024mb -n ocr_fg3_disk2 racvg

(oracle11gr2->VOTING DISK ASM)->HIGH REDUNDANCY(RAID 1)
HIGH REDUNDANCY->5 disks
NORMAL REDUNDANCY->3 disks
VOTING DISKS->don't support mirroring in ASM

lvcreate -L 2048mb -n vote_disk1 racvg
lvcreate -L 2048mb -n vote_disk2 racvg
lvcreate -L 2048mb -n vote_disk3 racvg
lvcreate -L 2048mb -n vote_disk4 racvg
lvcreate -L 2048mb -n vote_disk5 racvg
(oracle11gr2->DATABASE FILES (controlfiles, datafiles, redologfiles) diskgroup ASM)->HIGH REDUNDANCY(RAID 01)
lvcreate -L 110g -n data_fg1_disk1 racvg
lvcreate -L 110g -n data_fg1_disk2 racvg
lvcreate -L 110g -n data_fg2_disk1 racvg
lvcreate -L 110g -n data_fg2_disk2 racvg
lvcreate -L 110g -n data_fg3_disk1 racvg
lvcreate -L 110g -n data_fg3_disk2 racvg

(oracle11gr2->FRA diskgroup ASM)->NORMAL REDUNDANCY(RAID 01)
lvcreate -L 8g -n fra_fg1_disk1 racvg
lvcreate -L 8g -n fra_fg1_disk2 racvg
lvcreate -L 8g -n fra_fg2_disk1 racvg
lvcreate -L 8g -n fra_fg2_disk2 racvg

(oracle11gr2->ARCHIVE LOG FILES diskgroup ASM)->NORMAL REDUNDANCY(RAID 01)
lvcreate -L 10g -n arch_fg1_disk1 racvg
lvcreate -L 10g -n arch_fg1_disk2 racvg
lvcreate -L 10g -n arch_fg2_disk1 racvg
lvcreate -L 10g -n arch_fg2_disk2 racvg

(oracle11gr2->ACFS DISK ASM)->NORMAL REDUNDANCY(RAID 01)
lvcreate -L 50g -n acfs_fg1_disk1 racvg
lvcreate -L 50g -n acfs_fg1_disk2 racvg
lvcreate -L 50g -n acfs_fg2_disk1 racvg
lvcreate -L 50g -n acfs_fg2_disk2 racvg

lvdisplay
lvscan
--------------------------------------------------------------------------------
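Since every ASM disk group above follows the same <prefix>_fg<N>_disk<M> naming pattern, the repetitive lvcreate commands can be generated instead of typed; a sketch (the helper name gen_lvs is mine, not a standard tool):

```shell
# gen_lvs: print one lvcreate command per logical volume for a disk group.
# usage: gen_lvs <size> <prefix> <failgroup_count> <disks_per_failgroup>
gen_lvs() {
    size=$1; prefix=$2; fgs=$3; per=$4
    for fg in $(seq 1 "$fgs"); do
        for d in $(seq 1 "$per"); do
            echo "lvcreate -L $size -n ${prefix}_fg${fg}_disk${d} racvg"
        done
    done
}

# The six DATA disk-group commands (3 failure groups x 2 disks each):
gen_lvs 110g data 3 2
```

Printing first lets you review the list; piping the output through `sh` would actually create the volumes.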

--------------------------------------------------------------------------------(RHEL 5 ISCSI TARGET/INITIATOR ISCSI SAN) --------------------------------------------------------------------------------ALL THE BELOW SHOULD BE DONE AT THE STORAGE SERVER SIDE:--------------------------------------------------------------------------------chkconfig --list tgtd chkconfig --level 35 tgtd on

/sbin/service tgtd start
/sbin/service tgtd status
/sbin/service tgtd stop
--------------------------------------------------------------------------------
STEPS to create the iSCSI target setup:
--------------------------------------------------------------------------------
STEP1:-
--------------------------------------------------------------------------------
to delete a target:
tgtadm --lld iscsi --mode target --op delete --tid=1
--------------------------------------------------------------------------------
STEP2:-
--------------------------------------------------------------------------------
to create a target:
tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2
OR
tgtadm --lld iscsi --op new --mode target --tid=1 -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2
--------------------------------------------------------------------------------
STEP3:-
--------------------------------------------------------------------------------
to view the current target configuration:
tgtadm --lld iscsi --op show --mode target

iscsi-target initiator address example:
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address=192.168.159.71/24
tgtadm --lld iscsi --op unbind --mode target --tid 1 --initiator-address=192.168.159.71/24
--------------------------------------------------------------------------------
STEP6:-
--------------------------------------------------------------------------------

Enable the target to accept any initiators (anybody on the network can log in):
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
example:
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.159.22
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.159.23
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.159.24
--------------------------------------------------------------------------------
Ensure TCP port 3260 can be accessed from a remote host:
netstat -tulpn | grep 3260
tcp    0    0 0.0.0.0:3260    0.0.0.0:*    LISTEN    4198/tgtd
tcp    0    0 :::3260         :::*         LISTEN    4198/tgtd
--------------------------------------------------------------------------------
STEP7:-
--------------------------------------------------------------------------------
/etc/rc.local is a soft link to /etc/rc.d/rc.local
To make the setup persistent over reboots, put the "tgtadm" commands into /etc/rc.local and enable the tgtd service:

cat >> /etc/rc.local <<EOF
tgtadm --lld iscsi --op new --mode target --tid=1 -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/racvg/rawocrdata1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 -b /dev/racvg/rawocrdata2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 3 -b /dev/racvg/rawocrdata3
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 4 -b /dev/racvg/rawvotedata1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 5 -b /dev/racvg/rawvotedata2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 6 -b /dev/racvg/rawvotedata3
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 7 -b /dev/racvg/ocfsocrdata1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 8 -b /dev/racvg/ocfsocrdata2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 9 -b /dev/racvg/ocfsocrdata3
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 10 -b /dev/racvg/ocfsvotedata1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 11 -b /dev/racvg/ocfsvotedata2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 12 -b /dev/racvg/ocfsvotedata3
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 13 -b /dev/racvg/ocfsoradata
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 14 -b /dev/racvg/spfile_fg1_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 15 -b /dev/racvg/spfile_fg2_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 16 -b /dev/racvg/ocr_fg1_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 17 -b /dev/racvg/ocr_fg1_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 18 -b /dev/racvg/ocr_fg2_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 19 -b /dev/racvg/ocr_fg2_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 20 -b /dev/racvg/ocr_fg3_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 21 -b /dev/racvg/ocr_fg3_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 22 -b /dev/racvg/vote_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 23 -b /dev/racvg/vote_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 24 -b /dev/racvg/vote_disk3
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 25 -b /dev/racvg/vote_disk4
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 26 -b /dev/racvg/vote_disk5
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 27 -b /dev/racvg/data_fg1_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 28 -b /dev/racvg/data_fg1_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 29 -b /dev/racvg/data_fg2_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 30 -b /dev/racvg/data_fg2_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 31 -b /dev/racvg/data_fg3_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 32 -b /dev/racvg/data_fg3_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 33 -b /dev/racvg/fra_fg1_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 34 -b /dev/racvg/fra_fg1_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 35 -b /dev/racvg/fra_fg2_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 36 -b /dev/racvg/fra_fg2_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 37 -b /dev/racvg/arch_fg1_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 38 -b /dev/racvg/arch_fg1_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 39 -b /dev/racvg/arch_fg2_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 40 -b /dev/racvg/arch_fg2_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 41 -b /dev/racvg/acfs_fg1_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 42 -b /dev/racvg/acfs_fg1_disk2
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 43 -b /dev/racvg/acfs_fg2_disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 44 -b /dev/racvg/acfs_fg2_disk2
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
OR
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.159.22
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.159.23
tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.159.24
EOF
--------------------------------------------------------------------------------
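Rather than hand-numbering dozens of logicalunit commands, the LUN list can be generated from the LV names; a sketch (gen_luns is a hypothetical helper; the emitted text matches the tgtadm syntax used above):

```shell
# gen_luns: print one 'tgtadm ... logicalunit' command per backing device,
# assigning LUN numbers sequentially starting at 1.
gen_luns() {
    lun=1
    for dev in "$@"; do
        echo "tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun $lun -b /dev/racvg/$dev"
        lun=$((lun + 1))
    done
}

# First few devices from the list above; extend with the rest as needed.
gen_luns rawocrdata1 rawocrdata2 rawocrdata3 rawvotedata1
```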

--------------------------------------------------------------------------------
ALL THE BELOW SHOULD BE DONE AT THE CLIENT SIDE (AT ALL THE CLUSTERED NODES)
--------------------------------------------------------------------------------
chkconfig --list iscsi
chkconfig --level 345 iscsi on
/sbin/service iscsi start
/sbin/service iscsi status
/sbin/service iscsi stop

chkconfig --list iscsid
chkconfig --level 345 iscsid on
/sbin/service iscsid start
/sbin/service iscsid status
/sbin/service iscsid stop

After verifying that the iscsi-initiator-utils package is installed on both Oracle RAC nodes, start the iscsid service and enable it to start automatically when the system boots. We will also configure the iscsi service to start automatically, which logs into the iSCSI targets needed at system startup.

STEP1:-

--------------------------------------------------------------------------------
STEPS to create the iSCSI initiator setup:
--------------------------------------------------------------------------------
Type the following command to discover targets at a given IP address:
iscsiadm --mode discovery --type sendtargets --portal 192.168.159.71:3260
OR
iscsiadm -m discovery -t sendtargets -p 192.168.159.71:3260
--------------------------------------------------------------------------------
STEP2:-
--------------------------------------------------------------------------------
Log in to the iscsi target session:
--------------------------------------------------------------------------------
iscsiadm -m node -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 -p 192.168.159.71:3260 -l
OR
iscsiadm --mode node --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 --portal 192.168.159.71:3260 --login
--------------------------------------------------------------------------------
STEP3:-
--------------------------------------------------------------------------------
Log out from the iscsi target session:
--------------------------------------------------------------------------------
iscsiadm -m node -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 -p 192.168.159.71:3260 -u
OR
iscsiadm --mode node --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 --portal 192.168.159.71:3260 --logout
--------------------------------------------------------------------------------
STEP4:-
--------------------------------------------------------------------------------

Automatic login to the iscsi target session during startup:
--------------------------------------------------------------------------------
iscsiadm -m node -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 -p 192.168.159.71 -o update -n node.startup -v automatic
OR
iscsiadm --mode node --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 --portal 192.168.159.71 --op update -n node.startup -v automatic
--------------------------------------------------------------------------------
STEP5:-
--------------------------------------------------------------------------------
Disable automatic login to the iscsi target session during startup:
--------------------------------------------------------------------------------
iscsiadm --mode node --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 --portal 192.168.159.71:3260 --logout
iscsiadm -m node -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 -p 192.168.159.71 -o update -n node.startup -v manual
OR
iscsiadm --mode node --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2 --portal 192.168.159.71 --op update -n node.startup -v manual
--------------------------------------------------------------------------------

STEP6:-
--------------------------------------------------------------------------------
Delete the iscsi target from the initiator:
--------------------------------------------------------------------------------
iscsiadm -m node -o delete -T iqn.1984-05.com.sipu:storage-racvg-oracle11gr2
OR
iscsiadm --mode node --op delete --targetname iqn.1984-05.com.sipu:storage-racvg-oracle11gr2
--------------------------------------------------------------------------------

STEP7:--------------------------------------------------------------------------------Verify that login was successful:-

--------------------------------------------------------------------------------
tail -f /var/log/messages

You may need to restart iSCSI to probe partitions and check disks:
# service iscsi restart
# partprobe
# fdisk -l
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
iSCSI initiator setup in RHEL 5.4 x86_64:
--------------------------------------------------------------------------------
on three nodes:
Installing the iSCSI (initiator) service:
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep iscsi-initiator-utils
or install:
rpm -Uvh iscsi-initiator-utils-*

Configure the iSCSI (initiator) service on the 3 nodes:
chkconfig --list iscsi
chkconfig --list iscsid
[root@rac1 ~]# service iscsid start
[root@rac2 ~]# service iscsid start
[root@rac3 ~]# service iscsid start
[root@rac1 ~]# chkconfig iscsi on
[root@rac2 ~]# chkconfig iscsi on
[root@rac3 ~]# chkconfig iscsi on
[root@rac1 ~]# service iscsi status
[root@rac2 ~]# service iscsi status
[root@rac3 ~]# service iscsi status
[root@rac1 ~]# chkconfig iscsid on
[root@rac2 ~]# chkconfig iscsid on
[root@rac3 ~]# chkconfig iscsid on
[root@rac1 ~]# service iscsid status
[root@rac2 ~]# service iscsid status
[root@rac3 ~]# service iscsid status
chkconfig --list iscsi
chkconfig --list iscsid
iscsi-ls -cl
--------------------------------------------------------------------------------
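The per-node discovery and login sequence above can be emitted for review before running it over ssh; a sketch (the target IQN and portal are the ones used in these notes; the helper itself is illustrative):

```shell
# gen_initiator_cmds: print the discovery + login commands for each node,
# so the list can be reviewed before being executed over ssh.
gen_initiator_cmds() {
    target=iqn.1984-05.com.sipu:storage-racvg-oracle11gr2
    portal=192.168.159.71:3260
    for node in "$@"; do
        echo "ssh $node iscsiadm -m discovery -t sendtargets -p $portal"
        echo "ssh $node iscsiadm -m node -T $target -p $portal -l"
    done
}

gen_initiator_cmds rac1 rac2 rac3
```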

--------------------------------------------------------------------------------
DEVLABEL PERSISTENT NODE SETUP:
--------------------------------------------------------------------------------
at all 3 nodes:
rpm -ivh devlabel-0.48.01-1.i386.rpm
service iscsi restart
chkconfig --level 345 iscsi on
fdisk -l | grep -i ^disk
fdisk -l and create the partitions; thereafter run partprobe on all the nodes in the cluster

devlabel printid -d /dev/sdc1
devlabel printid -d /dev/sdd1
devlabel printid -d /dev/sde1
devlabel printid -d /dev/sdf1
devlabel printid -d /dev/sdg1
devlabel printid -d /dev/sdh1
devlabel printid -d /dev/sdi1
devlabel printid -d /dev/sdj1
devlabel printid -d /dev/sdk1
devlabel printid -d /dev/sdl1
devlabel printid -d /dev/sdm1
devlabel printid -d /dev/sdn1
devlabel printid -d /dev/sdo1
devlabel printid -d /dev/sdp1
devlabel printid -d /dev/sdq1
devlabel printid -d /dev/sdr1
devlabel printid -d /dev/sds1
devlabel printid -d /dev/sdt1
devlabel printid -d /dev/sdu1
devlabel printid -d /dev/sdv1
devlabel printid -d /dev/sdw1
devlabel printid -d /dev/sdx1
devlabel printid -d /dev/sdy1
devlabel printid -d /dev/sdz1
devlabel printid -d /dev/sdaa1
devlabel printid -d /dev/sdab1
devlabel printid -d /dev/sdac1
devlabel printid -d /dev/sdad1
devlabel printid -d /dev/sdae1
devlabel printid -d /dev/sdaf1
devlabel printid -d /dev/sdag1
devlabel printid -d /dev/sdah1
devlabel printid -d /dev/sdai1
devlabel printid -d /dev/sdaj1
devlabel printid -d /dev/sdak1
devlabel printid -d /dev/sdal1
devlabel printid -d /dev/sdam1
devlabel printid -d /dev/sdan1
devlabel printid -d /dev/sdao1
devlabel printid -d /dev/sdap1
devlabel printid -d /dev/sdaq1
devlabel printid -d /dev/sdar1
devlabel printid -d /dev/sdas1
devlabel printid -d /dev/sdat1

devlabel add -d /dev/sdc1 -s /dev/rawocrdata1
devlabel add -d /dev/sdd1 -s /dev/rawocrdata2
devlabel add -d /dev/sde1 -s /dev/rawocrdata3
devlabel add -d /dev/sdf1 -s /dev/rawvotedata1
devlabel add -d /dev/sdg1 -s /dev/rawvotedata2
devlabel add -d /dev/sdh1 -s /dev/rawvotedata3
devlabel add -d /dev/sdi1 -s /dev/ocfsocrdata1
devlabel add -d /dev/sdj1 -s /dev/ocfsocrdata2
devlabel add -d /dev/sdk1 -s /dev/ocfsocrdata3
devlabel add -d /dev/sdl1 -s /dev/ocfsvotedata1
devlabel add -d /dev/sdm1 -s /dev/ocfsvotedata2
devlabel add -d /dev/sdn1 -s /dev/ocfsvotedata3
devlabel add -d /dev/sdo1 -s /dev/ocfsoradata
devlabel add -d /dev/sdp1 -s /dev/spfile_fg1_disk1
devlabel add -d /dev/sdq1 -s /dev/spfile_fg2_disk1
devlabel add -d /dev/sdr1 -s /dev/ocr_fg1_disk1
devlabel add -d /dev/sds1 -s /dev/ocr_fg1_disk2
devlabel add -d /dev/sdt1 -s /dev/ocr_fg2_disk1
devlabel add -d /dev/sdu1 -s /dev/ocr_fg2_disk2
devlabel add -d /dev/sdv1 -s /dev/ocr_fg3_disk1
devlabel add -d /dev/sdw1 -s /dev/ocr_fg3_disk2
devlabel add -d /dev/sdx1 -s /dev/vote_disk1
devlabel add -d /dev/sdy1 -s /dev/vote_disk2
devlabel add -d /dev/sdz1 -s /dev/vote_disk3
devlabel add -d /dev/sdaa1 -s /dev/vote_disk4
devlabel add -d /dev/sdab1 -s /dev/vote_disk5
devlabel add -d /dev/sdac1 -s /dev/data_fg1_disk1
devlabel add -d /dev/sdad1 -s /dev/data_fg1_disk2
devlabel add -d /dev/sdae1 -s /dev/data_fg2_disk1
devlabel add -d /dev/sdaf1 -s /dev/data_fg2_disk2
devlabel add -d /dev/sdag1 -s /dev/data_fg3_disk1
devlabel add -d /dev/sdah1 -s /dev/data_fg3_disk2
devlabel add -d /dev/sdai1 -s /dev/fra_fg1_disk1
devlabel add -d /dev/sdaj1 -s /dev/fra_fg1_disk2
devlabel add -d /dev/sdak1 -s /dev/fra_fg2_disk1
devlabel add -d /dev/sdal1 -s /dev/fra_fg2_disk2
devlabel add -d /dev/sdam1 -s /dev/arch_fg1_disk1
devlabel add -d /dev/sdan1 -s /dev/arch_fg1_disk2
devlabel add -d /dev/sdao1 -s /dev/arch_fg2_disk1
devlabel add -d /dev/sdap1 -s /dev/arch_fg2_disk2

devlabel add -d /dev/sdaq1 -s /dev/acfs_fg1_disk1 devlabel add -d /dev/sdar1 -s /dev/acfs_fg1_disk2

devlabel add -d /dev/sdas1 -s /dev/acfs_fg2_disk1
devlabel add -d /dev/sdat1 -s /dev/acfs_fg2_disk2

devlabel status
devlabel restart -> automatically re-reads /etc/sysconfig/devlabel
cat /etc/sysconfig/devlabel to confirm the symbolic links

now send the above file to node2 and node3 by logging in there as follows:
on rac2:
cd /etc/sysconfig
scp rac1:/etc/sysconfig/devlabel .
devlabel restart
on rac3:
cd /etc/sysconfig
scp rac1:/etc/sysconfig/devlabel .
devlabel restart

make an entry in vi /etc/rc.d/rc.local or vi /etc/rc.local to automatically map the devices during reboot:
devlabel restart

configure ASM disks:
rpm -ivh oracleasm-support-2.1.3-1.el5.i386.rpm
rpm -ivh oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5.i686.rpm
rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm
chkconfig --list oracleasm
service oracleasm configure -i

enter the following command to load the oracleasm kernel module:
/usr/sbin/oracleasm init
/usr/sbin/oracleasm-discover 'ORCL:*'

the ASM disk creation will be reflected by the other nodes as well; you only have to scan the disks on the other 2 nodes.

service oracleasm createdisk spfile_fg1_disk1 /dev/spfile_fg1_disk1
service oracleasm createdisk spfile_fg2_disk1 /dev/spfile_fg2_disk1

service oracleasm createdisk ocr_fg1_disk1 /dev/ocr_fg1_disk1
service oracleasm createdisk ocr_fg1_disk2 /dev/ocr_fg1_disk2
service oracleasm createdisk ocr_fg2_disk1 /dev/ocr_fg2_disk1
service oracleasm createdisk ocr_fg2_disk2 /dev/ocr_fg2_disk2
service oracleasm createdisk vote_disk1 /dev/vote_disk1
service oracleasm createdisk vote_disk2 /dev/vote_disk2
service oracleasm createdisk vote_disk3 /dev/vote_disk3
service oracleasm createdisk vote_disk4 /dev/vote_disk4
service oracleasm createdisk vote_disk5 /dev/vote_disk5
service oracleasm createdisk data_fg1_disk1 /dev/data_fg1_disk1
service oracleasm createdisk data_fg1_disk2 /dev/data_fg1_disk2
service oracleasm createdisk data_fg2_disk1 /dev/data_fg2_disk1
service oracleasm createdisk data_fg2_disk2 /dev/data_fg2_disk2
service oracleasm createdisk data_fg3_disk1 /dev/data_fg3_disk1
service oracleasm createdisk data_fg3_disk2 /dev/data_fg3_disk2
service oracleasm createdisk fra_fg1_disk1 /dev/fra_fg1_disk1
service oracleasm createdisk fra_fg1_disk2 /dev/fra_fg1_disk2
service oracleasm createdisk fra_fg2_disk1 /dev/fra_fg2_disk1
service oracleasm createdisk fra_fg2_disk2 /dev/fra_fg2_disk2
service oracleasm createdisk arch_fg1_disk1 /dev/arch_fg1_disk1
service oracleasm createdisk arch_fg1_disk2 /dev/arch_fg1_disk2
service oracleasm createdisk arch_fg2_disk1 /dev/arch_fg2_disk1
service oracleasm createdisk arch_fg2_disk2 /dev/arch_fg2_disk2
service oracleasm createdisk acfs_fg1_disk1 /dev/acfs_fg1_disk1
service oracleasm createdisk acfs_fg1_disk2 /dev/acfs_fg1_disk2
service oracleasm createdisk acfs_fg2_disk1 /dev/acfs_fg2_disk1
service oracleasm createdisk acfs_fg2_disk2 /dev/acfs_fg2_disk2

service oracleasm scandisks service oracleasm listdisks
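Because every ASM disk label above matches its devlabel path (/dev/<name>), the createdisk calls can be derived from a single name list so the two never drift apart; a sketch (gen_asm_disks is a hypothetical helper, not an oracleasm command):

```shell
# gen_asm_disks: print one 'service oracleasm createdisk' command per name,
# reusing the name as the backing devlabel path under /dev.
gen_asm_disks() {
    for name in "$@"; do
        echo "service oracleasm createdisk $name /dev/$name"
    done
}

gen_asm_disks ocr_fg1_disk1 ocr_fg1_disk2 vote_disk1 vote_disk2
```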

service oracleasm scandisks
service oracleasm listdisks
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
ocfs2 configuration on all nodes:
--------------------------------------------------------------------------------
rpm -ivh ocfs2-tools-1.4.4-1.el5.i386.rpm
rpm -ivh ocfs2-2.6.18-164.el5PAE-1.4.7-1.el5.i686.rpm
rpm -ivh ocfs2console-1.4.4-1.el5.i386.rpm

Then run the console as root from all the nodes:
ocfs2console &

and add the hostname and the private IP of each node. Check the below file to confirm on all the nodes:
cat /etc/ocfs2/cluster.conf

some commands related to ocfs2:
/etc/init.d/o2cb status
/etc/init.d/o2cb offline ocfs2   -> takes offline the cluster we created, ocfs2
/etc/init.d/o2cb unload          -> unloads all OCFS2 modules
/etc/init.d/o2cb load            -> loads all OCFS2 modules
/etc/init.d/o2cb online ocfs2    -> brings online the cluster we created, ocfs2

Configure O2CB to start on boot and adjust the O2CB heartbeat threshold.
Run the below commands on all the nodes:
/etc/init.d/o2cb configure

/etc/init.d/o2cb enable
/etc/init.d/o2cb offline ocfs2
/etc/init.d/o2cb unload
/etc/init.d/o2cb configure

chkconfig --list ocfs2
chkconfig --list o2cb

Format the OCFS2 file system.
Create the below directories on all the nodes:
mkdir -p /ocfsocrdata1 /ocfsocrdata2 /ocfsocrdata3
mkdir -p /ocfsvotedata1 /ocfsvotedata2 /ocfsvotedata3
mkdir -p /ocfsoradata

format on any one of the nodes, as the storage is shared across the nodes:
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsocrdata1 /dev/ocfsocrdata1
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsocrdata2 /dev/ocfsocrdata2
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsocrdata3 /dev/ocfsocrdata3
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsvotedata1 /dev/ocfsvotedata1
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsvotedata2 /dev/ocfsvotedata2
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsvotedata3 /dev/ocfsvotedata3
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfsoradata /dev/ocfsoradata

(the below commands are used only for labeling an already-formatted ocfs disk)
mkfs.ocfs2 -L ocfsocrdata1 /dev/ocfsocrdata1
mkfs.ocfs2 -L ocfsocrdata2 /dev/ocfsocrdata2
mkfs.ocfs2 -L ocfsocrdata3 /dev/ocfsocrdata3

mkfs.ocfs2 -L ocfsvotedata1 /dev/ocfsvotedata1
mkfs.ocfs2 -L ocfsvotedata2 /dev/ocfsvotedata2
mkfs.ocfs2 -L ocfsvotedata3 /dev/ocfsvotedata3
mkfs.ocfs2 -L ocfsoradata /dev/ocfsoradata

Mount the OCFS2 file system:
mount -t ocfs2 -o datavolume,nointr -L ocfsocrdata1 /ocfsocrdata1
mount -t ocfs2 -o datavolume,nointr -L ocfsocrdata2 /ocfsocrdata2
mount -t ocfs2 -o datavolume,nointr -L ocfsocrdata3 /ocfsocrdata3
mount -t ocfs2 -o datavolume,nointr -L ocfsvotedata1 /ocfsvotedata1
mount -t ocfs2 -o datavolume,nointr -L ocfsvotedata2 /ocfsvotedata2
mount -t ocfs2 -o datavolume,nointr -L ocfsvotedata3 /ocfsvotedata3
mount -t ocfs2 -o datavolume,nointr -L ocfsoradata /ocfsoradata

fsck.ocfs2 -n /dev/ocfsoradata -> checks the ocfs2 filesystem for errors
check df -h or mount -v

vi /etc/fstab for automount on all the nodes:
#ocfs2 mount points
LABEL=ocfsocrdata1   /ocfsocrdata1   ocfs2  _netdev,datavolume,nointr  0 0
LABEL=ocfsocrdata2   /ocfsocrdata2   ocfs2  _netdev,datavolume,nointr  0 0
LABEL=ocfsocrdata3   /ocfsocrdata3   ocfs2  _netdev,datavolume,nointr  0 0
LABEL=ocfsvotedata1  /ocfsvotedata1  ocfs2  _netdev,datavolume,nointr  0 0
LABEL=ocfsvotedata2  /ocfsvotedata2  ocfs2  _netdev,datavolume,nointr  0 0
LABEL=ocfsvotedata3  /ocfsvotedata3  ocfs2  _netdev,datavolume,nointr  0 0
LABEL=ocfsoradata    /ocfsoradata    ocfs2  _netdev,datavolume,nointr  0 0
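After editing /etc/fstab it is worth confirming each OCFS2 mount point is really mounted; a testable sketch that reads mount-table lines (as found in /proc/mounts) from stdin — the sample line is illustrative:

```shell
# ocfs2_mounted: succeed when the given mount point appears with fstype
# ocfs2 in mount-table input (field 2 = mount point, field 3 = fs type).
ocfs2_mounted() {
    awk -v mp="$1" '$2 == mp && $3 == "ocfs2" {found=1} END {exit !found}'
}

sample='/dev/sdi1 /ocfsocrdata1 ocfs2 rw,_netdev,datavolume,nointr 0 0'
if echo "$sample" | ocfs2_mounted /ocfsocrdata1; then echo "mounted"; fi
```

On a live node: `ocfs2_mounted /ocfsoradata < /proc/mounts || echo "not mounted"`.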

now give the required permissions on any one of the nodes:
mount -a && df -h
ls -ld /ocfsocrdata1 /ocfsocrdata2 /ocfsocrdata3
ls -ld /ocfsvotedata1 /ocfsvotedata2 /ocfsvotedata3
ls -ld /ocfsoradata
chown -R grid:oinstall /ocfsocrdata1 /ocfsocrdata2 /ocfsocrdata3
chown -R grid:oinstall /ocfsvotedata1 /ocfsvotedata2 /ocfsvotedata3
chown -R oracle:oinstall /ocfsoradata

ls -ld /ocfsocrdata1 /ocfsocrdata2 /ocfsocrdata3
ls -ld /ocfsvotedata1 /ocfsvotedata2 /ocfsvotedata3
ls -ld /ocfsoradata
--------------------------------------------------------------------------------
---------------------------------****************************************------------------------------
RAW devices are still supported. OCFS is not stable in some cases and ASM can't hold the OCR and voting disk (10g only), hence RAW is still the best option for the OCR and voting disk.
---------------------------------****************************************------------------------------
On all the 3 RAC nodes in a cluster:
UDEV entries, the dynamic device management tool from Red Hat Enterprise Linux 5 onwards (the mapping file):
---------------------------------****************************************------------------------------
vi /etc/udev/rules.d/61-raw.rules OR vi /etc/udev/rules.d/65-raw-permissions.rules

#OCR DISKS
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw3 %N"
#VOTING DISKS
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw6 %N"
---------------------------------****************************************------------------------------
To get the permissions right, create the below new file with the following content:
---------------------------------****************************************------------------------------
vi /etc/udev/rules.d/99-oracle.rules OR vi /etc/udev/rules.d/63-oracle-raw.rules

#OCR DISKS
KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="0640"
KERNEL=="raw3", OWNER="root", GROUP="oinstall", MODE="0640"
#VOTING DISKS
KERNEL=="raw4", OWNER="grid", GROUP="oinstall", MODE="0640"
KERNEL=="raw5", OWNER="grid", GROUP="oinstall", MODE="0640"
KERNEL=="raw6", OWNER="grid", GROUP="oinstall", MODE="0640"

This in turn will set all /dev/raw/raw[1-6] permissions to 0640, owner root or grid, group oinstall.
---------------------------------****************************************------------------------------
SQL>SELECT inst_name FROM v$active_instances;
SQL>SELECT svrname, dirname FROM v$dnfs_servers;
--------------------------------------------------------------------------------
STEP 11:- SSH configuration
--------------------------------------------------------------------------------
user equivalence for the oracle user:

at all the nodes:
[root@node1 ~]# su - oracle
[oracle@node1 ~]$ id -a oracle
[oracle@node1 ~]$ mkdir ~/.ssh
[oracle@node1 ~]$ chmod 700 ~/.ssh
[oracle@node1 ~]$ /usr/bin/ssh-keygen -t dsa
[oracle@node1 ~]$ /usr/bin/ssh-keygen -t rsa
[oracle@node1 ~]$ ls -l ~/.ssh

only at node 1:
[oracle@node1 ~]$ touch ~/.ssh/authorized_keys
[oracle@node1 ~]$ ls -l ~/.ssh
[oracle@node1 ~]$ ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ ssh node4 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ ssh node4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@node1 ~]$ scp ~/.ssh/authorized_keys node2:.ssh/authorized_keys
[oracle@node1 ~]$ scp ~/.ssh/authorized_keys node3:.ssh/authorized_keys
[oracle@node1 ~]$ scp ~/.ssh/authorized_keys node4:.ssh/authorized_keys

[oracle@node1 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@node2 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@node3 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@node4 ~]$ chmod 600 ~/.ssh/authorized_keys
[oracle@node5 ~]$ chmod 600 ~/.ssh/authorized_keys

Perform the following as the oracle user on nodes node1, node2 and node3:

ssh rac1 date
ssh rac2 date
ssh rac3 date
ssh rac4 date
ssh rac5 date

ssh rac1.sipu.com date
ssh rac2.sipu.com date
ssh rac3.sipu.com date
ssh rac4.sipu.com date
ssh rac5.sipu.com date

ssh rac1-priv date
ssh rac2-priv date
ssh rac3-priv date
ssh rac4-priv date
ssh rac5-priv date

ssh rac1-priv.sipu.com date
ssh rac2-priv.sipu.com date
ssh rac3-priv.sipu.com date
ssh rac4-priv.sipu.com date
ssh rac5-priv.sipu.com date

ssh rac1-storage date
ssh rac2-storage date
ssh rac3-storage date
ssh rac4-storage date
ssh rac5-storage date

ssh rac1-storage.sipu.com date
ssh rac2-storage.sipu.com date
ssh rac3-storage.sipu.com date
ssh rac4-storage.sipu.com date
ssh rac5-storage.sipu.com date
ssh rac1 "date;hostname" && ssh rac2 "date;hostname" && ssh rac3 "date;hostname" && ssh rac4 "date;hostname" && ssh rac5 "date;hostname" -----------------------------
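The checks above are 5 nodes x 6 hostname forms; generating the matrix avoids typos in the 30 commands (the helper name is mine, not a standard tool):

```shell
# gen_ssh_checks: print 'ssh <host> date' for every node/name-form pair
# used in the equivalence tests above.
gen_ssh_checks() {
    for n in 1 2 3 4 5; do
        for suffix in "" ".sipu.com" "-priv" "-priv.sipu.com" "-storage" "-storage.sipu.com"; do
            echo "ssh rac${n}${suffix} date"
        done
    done
}

gen_ssh_checks   # 5 nodes x 6 forms = 30 commands
```

Piping the output through `sh` runs the actual checks once user equivalence is in place.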