11.1.0.7.0 RAC Upgrade to 11.2.0.4.0

This document builds on 《OEL5.10安装11.1.0.6 RAC升级11.1.0.7》 (http://www.lynnlee.cn/?p=1000) and records a complete upgrade test. It covers every step of the process: converting cluster management from the old Clusterware model to Grid Infrastructure, migrating the existing ASM disk groups under grid, installing the 11.2.0.4 database software, and upgrading the database itself. Because RAC changed substantially between 11gR1 and 11gR2, everything here is installed, configured, and upgraded following the management model Oracle recommends for 11gR2.

〇、Environment
OEL 5.10
Database 11.1.0.7
Cluster 11.1.0.7
/dev/asm-diskb, /dev/asm-diskc: OCR
/dev/asm-diskd: voting disk
/dev/asm-diske: DATA ASM disk group (database files)
/dev/asm-diskf: FRA ASM disk group (flash recovery area)

一、Configure the SCAN IP and DNS
RAC releases before 11gR2 have no SCAN IP; from 11gR2 onward a SCAN IP is always used. It can be resolved through the hosts file just like a VIP, but Oracle's first recommendation is DNS, and a SCAN name can resolve to at most three IP addresses. Here node 1 doubles as the DNS server; the steps for configuring DNS resolution of the SCAN IP are in 《Oracle 11g R2 RAC配置DNS解析SCAN IP》 (http://www.lynnlee.cn/?p=960). The SCAN IPs used here are:
# SCAN IP (resolved via DNS)
192.168.56.101 scan scan.oracle.com
192.168.56.102 scan scan.oracle.com
192.168.56.103 scan scan.oracle.com
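
For reference, that resolution amounts to three round-robin A records in the forward zone. A minimal sketch, assuming a BIND zone for oracle.com served from the node-1 DNS server (the full setup is in the linked post):

scan    IN  A   192.168.56.101
scan    IN  A   192.168.56.102
scan    IN  A   192.168.56.103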

二、Add the grid user and adjust the oracle user's groups
11gR2 RAC uses a dedicated grid user to install and manage the cluster, so a grid user is added here and the oracle user's group memberships are adjusted to match:
[root@11grac1.localdomain:/root]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
[root@11grac1.localdomain:/root]$ groupadd -g 5000 asmadmin
[root@11grac1.localdomain:/root]$ groupadd -g 5001 asmdba
[root@11grac1.localdomain:/root]$ groupadd -g 5002 asmoper
[root@11grac1.localdomain:/root]$ groupadd -g 6002 oper 

[root@11grac2.localdomain:/root]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba)
[root@11grac2.localdomain:/root]$ groupadd -g 5000 asmadmin
[root@11grac2.localdomain:/root]$ groupadd -g 5001 asmdba
[root@11grac2.localdomain:/root]$ groupadd -g 5002 asmoper
[root@11grac2.localdomain:/root]$ groupadd -g 6002 oper 

[root@11grac1.localdomain:/root]$ usermod -a -G asmadmin,asmdba,asmoper,oper oracle
[root@11grac1.localdomain:/root]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),5000(asmadmin),5001(asmdba),5002(asmoper),6002(oper)
[root@11grac2.localdomain:/root]$ usermod -a -G asmadmin,asmdba,asmoper,oper oracle
[root@11grac2.localdomain:/root]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),5000(asmadmin),5001(asmdba),5002(asmoper),6002(oper)

[root@11grac1.localdomain:/root]$ useradd -g oinstall -G dba,asmadmin,asmdba,asmoper grid  
[root@11grac1.localdomain:/root]$ id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),5000(asmadmin),5001(asmdba),5002(asmoper)
[root@11grac1.localdomain:/root]$ passwd grid
Changing password for user grid.
New UNIX password: 
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: 
passwd: all authentication tokens updated successfully.
[root@11grac2.localdomain:/root]$ useradd -g oinstall -G dba,asmadmin,asmdba,asmoper grid  
[root@11grac2.localdomain:/root]$ id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),5000(asmadmin),5001(asmdba),5002(asmoper)
[root@11grac2.localdomain:/root]$ passwd grid
Changing password for user grid.
New UNIX password: 
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: 
passwd: all authentication tokens updated successfully.

三、Create the GI_BASE/GI_HOME directories for the grid installation and set up the environment variables
[root@11grac1.localdomain:/root]$ mkdir -p /u01/app/grid
[root@11grac1.localdomain:/root]$ mkdir -p /u01/app/11.2.0/grid
[root@11grac2.localdomain:/root]$ mkdir -p /u01/app/grid
[root@11grac2.localdomain:/root]$ mkdir -p /u01/app/11.2.0/grid
[root@11grac1.localdomain:/root]$ cd /u01/app/
[root@11grac1.localdomain:/u01/app]$ chown -R grid:oinstall 11.2.0
[root@11grac1.localdomain:/u01/app]$ chown grid:oinstall grid
[root@11grac2.localdomain:/root]$ cd /u01/app/
[root@11grac2.localdomain:/u01/app]$ chown -R grid:oinstall 11.2.0
[root@11grac2.localdomain:/u01/app]$ chown grid:oinstall grid
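
A quick check that the ownership changes landed as intended, on both nodes:

ls -ld /u01/app/grid /u01/app/11.2.0/grid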

[grid@11grac1 ~]$ cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export GRID_BASE=$ORACLE_BASE
export GI_BASE=$GRID_BASE
export ORACLE_HOME=/u01/app/11.2.0/grid
export GRID_HOME=$ORACLE_HOME
export GI_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$PATH

[grid@11grac2 ~]$ cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export GRID_BASE=$ORACLE_BASE
export GI_BASE=$GRID_BASE
export ORACLE_HOME=/u01/app/11.2.0/grid
export GRID_HOME=$ORACLE_HOME
export GI_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$PATH
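
To confirm the new profiles are picked up, a fresh login shell can echo the key variables (run as root on each node; the single quotes defer expansion to grid's shell):

su - grid -c 'echo $ORACLE_SID $ORACLE_HOME'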

四、Create the HOME directory for the 11gR2 oracle software (ORACLE_BASE stays the same as before)
[oracle@11grac1.localdomain:/home/oracle]$ cd /u01/app/oracle/product/
[oracle@11grac1.localdomain:/u01/app/oracle/product]$ ls -l
total 4
drwxr-xr-x 3 oracle oinstall 4096 May 14 22:54 11.1.0
[oracle@11grac1.localdomain:/u01/app/oracle/product]$ mkdir -p 11.2.0/dbhome_1
[oracle@11grac1.localdomain:/u01/app/oracle/product]$ ls -l
total 8
drwxr-xr-x 3 oracle oinstall 4096 May 14 22:54 11.1.0
drwxr-xr-x 3 oracle oinstall 4096 May 18 07:28 11.2.0
[oracle@11grac2.localdomain:/home/oracle]$ cd /u01/app/oracle/product/
[oracle@11grac2.localdomain:/u01/app/oracle/product]$ ls -l
total 4
drwxr-xr-x 3 oracle oinstall 4096 May 14 22:54 11.1.0
[oracle@11grac2.localdomain:/u01/app/oracle/product]$ mkdir -p 11.2.0/dbhome_1
[oracle@11grac2.localdomain:/u01/app/oracle/product]$ ls -l
total 8
drwxr-xr-x 3 oracle oinstall 4096 May 14 23:06 11.1.0
drwxr-xr-x 3 oracle oinstall 4096 May 18 07:28 11.2.0

五、Create the shared ASM disk for the 11gR2 Grid Infrastructure install (using udev rather than ASMLib)
For this test only one disk, /dev/sdg, is added; it will carry the ASM disk group holding the 11gR2 grid cluster's OCR and voting disk (adding a shared disk to both nodes in VirtualBox is not covered again here).
[root@11grac1.localdomain:/root]$ fdisk -l

Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdg doesn't contain a valid partition table

[root@11grac2.localdomain:/root]$ fdisk -l

Disk /dev/sdg: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdg doesn't contain a valid partition table

Use the following loop to generate the udev rule entry for the newly added disk /dev/sdg:
for i in g;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done
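
Before trusting the generated line, the disk's SCSI identifier can also be queried directly with the same flags the rule uses; the output should match the RESULT string:

/sbin/scsi_id -g -u -s /block/sdg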

[root@11grac1.localdomain:/root]$ for i in g;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa7feea0a-164d1478_", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@11grac1.localdomain:/root]$ 
[root@11grac1.localdomain:/root]$ vi /etc/udev/rules.d/99-oracle-asmdevices.rules 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="00660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa7feea0a-164d1478_", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@11grac2.localdomain:/root]$ for i in g;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa7feea0a-164d1478_", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@11grac2.localdomain:/root]$ vi /etc/udev/rules.d/99-oracle-asmdevices.rules 

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa7feea0a-164d1478_", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@11grac1.localdomain:/root]$ start_udev 
Starting udev:                                             [  OK  ]
[root@11grac2.localdomain:/root]$ start_udev 
Starting udev:                                             [  OK  ]

[root@11grac1.localdomain:/root]$ ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18  2015 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18  2015 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:16 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 oracle oinstall 8, 80 May 18 08:15 /dev/asm-diskf
brw-rw---- 1 grid   asmadmin 8, 96 May 18 08:17 /dev/asm-diskg
[root@11grac2.localdomain:/root]$ ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18 08:16 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18 08:16 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:17 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 oracle oinstall 8, 80 May 18 08:15 /dev/asm-diskf
brw-rw---- 1 grid   asmadmin 8, 96 May 18 08:18 /dev/asm-diskg

Because the 11gR1 RAC ASM disks were all managed by the oracle user, the udev rules file shows the existing asm-diskb through asm-diskf owned by oracle:oinstall. The newly added disk, however, will hold the 11gR2 grid cluster's OCR and voting disk, so its ownership and permissions change to grid:asmadmin.

六、Stop the 11gR1 RAC database

[root@11grac1.localdomain:/root]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    11grac1     
ora....C1.lsnr application    ONLINE    ONLINE    11grac1     
ora....ac1.gsd application    ONLINE    ONLINE    11grac1     
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip application    ONLINE    ONLINE    11grac1     
ora....SM2.asm application    ONLINE    ONLINE    11grac2     
ora....C2.lsnr application    ONLINE    ONLINE    11grac2     
ora....ac2.gsd application    ONLINE    ONLINE    11grac2     
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip application    ONLINE    ONLINE    11grac2     
ora.rac11g.db  application    ONLINE    ONLINE    11grac2     
ora....g1.inst application    ONLINE    ONLINE    11grac1     
ora....g2.inst application    ONLINE    ONLINE    11grac2     
[oracle@11grac1.localdomain:/home/oracle]$ srvctl  stop database -d rac11g
[oracle@11grac1.localdomain:/home/oracle]$ srvctl  stop asm -n 11grac1
[oracle@11grac1.localdomain:/home/oracle]$ srvctl  stop asm -n 11grac2
[oracle@11grac1.localdomain:/home/oracle]$ srvctl  stop nodeapps -n 11grac1
[oracle@11grac1.localdomain:/home/oracle]$ srvctl  stop nodeapps -n 11grac2
[oracle@11grac1.localdomain:/home/oracle]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    OFFLINE   OFFLINE               
ora....C1.lsnr application    OFFLINE   OFFLINE               
ora....ac1.gsd application    OFFLINE   OFFLINE               
ora....ac1.ons application    OFFLINE   OFFLINE               
ora....ac1.vip application    OFFLINE   OFFLINE               
ora....SM2.asm application    OFFLINE   OFFLINE               
ora....C2.lsnr application    OFFLINE   OFFLINE               
ora....ac2.gsd application    OFFLINE   OFFLINE               
ora....ac2.ons application    OFFLINE   OFFLINE               
ora....ac2.vip application    OFFLINE   OFFLINE               
ora.rac11g.db  application    OFFLINE   OFFLINE               
ora....g1.inst application    OFFLINE   OFFLINE               
ora....g2.inst application    OFFLINE   OFFLINE               
[root@11grac1.localdomain:/root]$ crsctl stop crs
Stopping resources. 
This could take several minutes.
Successfully stopped Oracle Clusterware resources 
Stopping Cluster Synchronization Services. 
Shutting down the Cluster Synchronization Services daemon. 
Shutdown request successfully issued.
[root@11grac2.localdomain:/root]$ crsctl stop crs
Stopping resources. 
This could take several minutes.
Successfully stopped Oracle Clusterware resources 
Stopping Cluster Synchronization Services. 
Shutting down the Cluster Synchronization Services daemon. 
Shutdown request successfully issued.
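
With the stack down on both nodes, a quick check that no clusterware daemons survive before the init scripts are touched:

ps -ef | egrep 'crsd|cssd|evmd|oprocd' | grep -v grep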

七、Back up the 11gR1 RAC software
[root@11grac1.localdomain:/root]$ mkdir -p /tmp/bk
[root@11grac1.localdomain:/root]$ cd /tmp/bk
1) Back up the OCR and voting disk
[root@11grac1.localdomain:/tmp/bk]$ ocrconfig -export ocrexp.bak
[root@11grac1.localdomain:/tmp/bk]$ ll
total 92
-rw-r--r-- 1 root root 89399 May 18 08:42 ocrexp.bak
[root@11grac1.localdomain:/tmp/bk]$ crsctl query css votedisk
 0.     0    /dev/asm-diskd
Located 1 voting disk(s).
[root@11grac1.localdomain:/tmp/bk]$ dd if=/dev/asm-diskd of=/tmp/bk/votedisk.bak 
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 55.8614 seconds, 19.2 MB/s
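If a rollback to 11gR1 ever became necessary, these two backups would be restored with the matching reverse commands (a sketch only; double-check the target device before running dd):

ocrconfig -import /tmp/bk/ocrexp.bak
dd if=/tmp/bk/votedisk.bak of=/dev/asm-diskd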
2) Back up the RAC init scripts
-- Back up the /etc/inittab configuration file and the following init scripts on both nodes:
/etc/init.d/init.crs
/etc/init.d/init.crsd
/etc/init.d/init.cssd
/etc/init.d/init.evmd 

[root@11grac1.localdomain:/tmp/bk]$ cp /etc/inittab inittab.bak
[root@11grac1.localdomain:/tmp/bk]$ cp /etc/init.d/init.crs  init.crs.bak
[root@11grac1.localdomain:/tmp/bk]$ cp /etc/init.d/init.crsd init.crsd.bak
[root@11grac1.localdomain:/tmp/bk]$ cp /etc/init.d/init.cssd init.cssd.bak
[root@11grac1.localdomain:/tmp/bk]$ cp /etc/init.d/init.evmd init.evmd.bak
[root@11grac1.localdomain:/tmp/bk]$ ls -l
total 1049776
-rwxr-xr-x 1 root root       2236 May 18 08:50 init.crs.bak
-rwxr-xr-x 1 root root       5579 May 18 08:50 init.crsd.bak
-rwxr-xr-x 1 root root      56322 May 18 08:50 init.cssd.bak
-rwxr-xr-x 1 root root       3854 May 18 08:50 init.evmd.bak
-rw-r--r-- 1 root root       1870 May 18 08:49 inittab.bak
-rw-r--r-- 1 root root      89399 May 18 08:42 ocrexp.bak
-rw-r--r-- 1 root root 1073741824 May 18 08:47 votedisk.bak

[root@11grac2.localdomain:/root]$ mkdir -p /tmp/bk
[root@11grac2.localdomain:/root]$ cd /tmp/bk
[root@11grac2.localdomain:/tmp/bk]$ cp /etc/inittab inittab.bak
[root@11grac2.localdomain:/tmp/bk]$ cp /etc/init.d/init.crs  init.crs.bak
[root@11grac2.localdomain:/tmp/bk]$ cp /etc/init.d/init.crsd init.crsd.bak
[root@11grac2.localdomain:/tmp/bk]$ cp /etc/init.d/init.cssd init.cssd.bak
[root@11grac2.localdomain:/tmp/bk]$ cp /etc/init.d/init.evmd init.evmd.bak
[root@11grac2.localdomain:/tmp/bk]$ ls -l
total 80
-rwxr-xr-x 1 root root  2236 May 18 08:52 init.crs.bak
-rwxr-xr-x 1 root root  5579 May 18 08:52 init.crsd.bak
-rwxr-xr-x 1 root root 56322 May 18 08:52 init.cssd.bak
-rwxr-xr-x 1 root root  3854 May 18 08:52 init.evmd.bak
-rw-r--r-- 1 root root  1870 May 18 08:52 inittab.bak
3) Back up the Clusterware and database software
tar -czf /tmp/bk/dbs.tar.gz /u01/app/oracle/*
tar -czf /tmp/bk/crs.tar.gz /u01/app/crs/*
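Listing the archives afterwards is a cheap integrity check:

tar -tzf /tmp/bk/dbs.tar.gz > /dev/null && echo dbs.tar.gz OK
tar -tzf /tmp/bk/crs.tar.gz > /dev/null && echo crs.tar.gz OK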
4) Back up the database
Backing up the database with RMAN is not covered here.
5) Move /etc/oracle out of the way
[root@11grac1.localdomain:/root]$ ls -l /etc/oracle 
total 12
-rw-r--r-- 1 root oinstall   81 May 14 22:29 ocr.loc
drwxrwxr-x 5 root root     4096 May 18 07:06 oprocd
drwxr-xr-x 3 root root     4096 May 14 22:29 scls_scr
[root@11grac1.localdomain:/root]$ mv /etc/oracle/ /tmp/bk/etc_oracle
[root@11grac2.localdomain:/root]$ ls -l /etc/oracle
total 12
-rw-r--r-- 1 root oinstall   81 May 14 22:35 ocr.loc
drwxrwxr-x 5 root root     4096 May 18 07:06 oprocd
drwxr-xr-x 3 root root     4096 May 14 22:35 scls_scr
[root@11grac2.localdomain:/root]$ mv /etc/oracle /tmp/bk/etc_oracle
6) Move /etc/init.d/init.* out of the way
[root@11grac1.localdomain:/root]$ cd /tmp/bk
[root@11grac1.localdomain:/tmp/bk]$ ls -l
total 1049780
drwxr-xr-x 4 root oinstall       4096 May 14 22:29 etc_oracle
-rwxr-xr-x 1 root root           2236 May 18 08:50 init.crs.bak
-rwxr-xr-x 1 root root           5579 May 18 08:50 init.crsd.bak
-rwxr-xr-x 1 root root          56322 May 18 08:50 init.cssd.bak
-rwxr-xr-x 1 root root           3854 May 18 08:50 init.evmd.bak
-rw-r--r-- 1 root root           1870 May 18 08:49 inittab.bak
-rw-r--r-- 1 root root          89399 May 18 08:42 ocrexp.bak
-rw-r--r-- 1 root root     1073741824 May 18 08:47 votedisk.bak
[root@11grac1.localdomain:/tmp/bk]$ mkdir init_mv
[root@11grac1.localdomain:/tmp/bk]$ cd init_mv
[root@11grac1.localdomain:/tmp/bk/init_mv]$ mv /etc/init.d/init.* /tmp/bk/init_mv/
[root@11grac1.localdomain:/tmp/bk/init_mv]$ ls -l
total 76
-rwxr-xr-x 1 root root  2236 May 17 08:20 init.crs
-rwxr-xr-x 1 root root  5579 May 17 08:20 init.crsd
-rwxr-xr-x 1 root root 56322 May 17 08:20 init.cssd
-rwxr-xr-x 1 root root  3854 May 17 08:20 init.evmd

[root@11grac2.localdomain:/root]$ cd /tmp/bk/
[root@11grac2.localdomain:/tmp/bk]$ ls -l
total 84
drwxr-xr-x 4 root oinstall  4096 May 14 22:35 etc_oracle
-rwxr-xr-x 1 root root      2236 May 18 08:52 init.crs.bak
-rwxr-xr-x 1 root root      5579 May 18 08:52 init.crsd.bak
-rwxr-xr-x 1 root root     56322 May 18 08:52 init.cssd.bak
-rwxr-xr-x 1 root root      3854 May 18 08:52 init.evmd.bak
-rw-r--r-- 1 root root      1870 May 18 08:52 inittab.bak
[root@11grac2.localdomain:/tmp/bk]$ mkdir init_mv
[root@11grac2.localdomain:/tmp/bk]$ cd init_mv
[root@11grac2.localdomain:/tmp/bk/init_mv]$ mv /etc/init.d/init.* /tmp/bk/init_mv/
[root@11grac2.localdomain:/tmp/bk/init_mv]$ ls -l
total 76
-rwxr-xr-x 1 root root  2236 May 17 08:31 init.crs
-rwxr-xr-x 1 root root  5579 May 17 08:31 init.crsd
-rwxr-xr-x 1 root root 56322 May 17 08:31 init.cssd
-rwxr-xr-x 1 root root  3854 May 17 08:31 init.evmd
7) Edit /etc/inittab
-- Comment out the last three lines of the file on both nodes
[root@11grac1.localdomain:/root]$ tail -f /etc/inittab
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon

# h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
# h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
# h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

[root@11grac2.localdomain:/root]$ tail -f /etc/inittab
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon

# h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
# h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
# h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
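
init does not re-read /etc/inittab on its own; after commenting the lines out, tell it to on both nodes (a reboot would do the same):

telinit q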
8) Remove /tmp/.oracle and /var/tmp/.oracle
[root@11grac1.localdomain:/root]$ rm -rf /tmp/.oracle
[root@11grac1.localdomain:/root]$ rm -rf /var/tmp/.oracle
[root@11grac2.localdomain:/root]$ rm -rf /tmp/.oracle
[root@11grac2.localdomain:/root]$ rm -rf /var/tmp/.oracle

八、Install the grid software
1) Configure SSH user equivalence for the grid user before installing grid
[grid@11grac1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Created directory '/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
10:75:22:03:21:35:c7:5b:ef:c6:3a:d9:5a:dd:6b:7b grid@11grac1.localdomain
[grid@11grac1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
ca:19:c2:55:b7:c9:2e:30:55:96:8c:be:da:35:06:a9 grid@11grac1.localdomain

[grid@11grac2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Created directory '/home/grid/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
8e:4b:06:9e:7f:ce:8a:6b:d0:b8:9f:15:23:63:3b:33 grid@11grac2.localdomain
[grid@11grac2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
a9:d2:dd:d5:91:37:30:23:81:fc:fa:c9:5f:0e:5a:96 grid@11grac2.localdomain

[grid@11grac1 ~]$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
[grid@11grac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@11grac1 ~]$ ssh 11grac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host '11grac2 (192.168.56.222)' can't be established.
RSA key fingerprint is 25:8c:5f:0f:cd:8a:4b:35:84:75:c8:cd:58:75:35:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '11grac2,192.168.56.222' (RSA) to the list of known hosts.
grid@11grac2's password: 
[grid@11grac1 ~]$ ssh 11grac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid@11grac2's password: 
[grid@11grac1 ~]$ scp ~/.ssh/authorized_keys 11grac2:~/.ssh/authorized_keys
grid@11grac2's password: 
authorized_keys                                                                   100% 2040     2.0KB/s   00:00    

2) Test the grid user's SSH equivalence
$ cat ssh.sh 
ssh 11grac1 date
ssh 11grac2 date
ssh 11grac1-priv date 
ssh 11grac2-priv date
[grid@11grac1 ~]$ sh ssh.sh 
Mon May 18 08:33:34 CST 2015
Mon May 18 08:33:35 CST 2015
Mon May 18 08:33:34 CST 2015
Mon May 18 08:33:35 CST 2015
[grid@11grac2 ~]$ sh ssh.sh 
Mon May 18 08:33:44 CST 2015
Mon May 18 08:33:45 CST 2015
Mon May 18 08:33:45 CST 2015
Mon May 18 08:33:46 CST 2015

3) Start the grid software installation
[root@11grac1.localdomain:/tmp]$ unzip p13390677_112040_Linux-x86-64_3of7.zip 
[root@11grac1.localdomain:/root]$ su - grid
[grid@11grac1 ~]$ cd /tmp/grid
[grid@11grac1 grid]$ export DISPLAY=192.168.56.1:0.0
[grid@11grac1 grid]$ ./runInstaller

-- The SCAN Name entered here must be the name resolved by DNS

-- Select the newly configured ASM disk to create the ASM disk group

-- Run the fixup script as prompted

[root@11grac1.localdomain:/tmp]$ cd CVU_11.2.0.4.0_grid
[root@11grac1.localdomain:/tmp/CVU_11.2.0.4.0_grid]$ ./runfixup.sh 
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]

[root@11grac2.localdomain:/tmp]$ cd CVU_11.2.0.4.0_grid/
[root@11grac2.localdomain:/tmp/CVU_11.2.0.4.0_grid]$ ./runfixup.sh 
Response file being used is :./fixup.response
Enable file being used is :./fixup.enable
Log file location: ./orarun.log
Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
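
The package installed by the fixup script can be verified on both nodes before re-running the check:

rpm -q cvuqdisk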

-- After running the fixup scripts, run the check again

-- Two findings still fail; on inspection our disk permissions and DNS resolution are fine, so they can be ignored.

[root@11grac1.localdomain:/root]$ ls -l /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 96 May 18 08:17 /dev/asm-diskg
[root@11grac2.localdomain:/root]$ ls -l /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 96 May 18 08:18 /dev/asm-diskg

[root@11grac1.localdomain:/root]$ nslookup scan
Server: 192.168.56.111
Address: 192.168.56.111#53

Name: scan.oracle.com
Address: 192.168.56.102
Name: scan.oracle.com
Address: 192.168.56.103
Name: scan.oracle.com
Address: 192.168.56.101

[root@11grac1.localdomain:/root]$ nslookup 192.168.56.101
Server: 192.168.56.111
Address: 192.168.56.111#53

101.56.168.192.in-addr.arpa name = scan.oracle.com.

[root@11grac1.localdomain:/root]$ nslookup 192.168.56.102
Server: 192.168.56.111
Address: 192.168.56.111#53

102.56.168.192.in-addr.arpa name = scan.oracle.com.

[root@11grac1.localdomain:/root]$ nslookup 192.168.56.103
Server: 192.168.56.111
Address: 192.168.56.111#53

103.56.168.192.in-addr.arpa name = scan.oracle.com.

[root@11grac2.localdomain:/root]$ nslookup scan
Server: 192.168.56.111
Address: 192.168.56.111#53

Name: scan.oracle.com
Address: 192.168.56.103
Name: scan.oracle.com
Address: 192.168.56.101
Name: scan.oracle.com
Address: 192.168.56.102

[root@11grac2.localdomain:/root]$ nslookup 192.168.56.101
Server: 192.168.56.111
Address: 192.168.56.111#53

101.56.168.192.in-addr.arpa name = scan.oracle.com.

[root@11grac2.localdomain:/root]$ nslookup 192.168.56.102
Server: 192.168.56.111
Address: 192.168.56.111#53

102.56.168.192.in-addr.arpa name = scan.oracle.com.

[root@11grac2.localdomain:/root]$ nslookup 192.168.56.103
Server: 192.168.56.111
Address: 192.168.56.111#53

103.56.168.192.in-addr.arpa name = scan.oracle.com.


-- Run the root scripts as prompted

[root@11grac1.localdomain:/root]$ /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on '11grac1'
CRS-2676: Start of 'ora.mdnsd' on '11grac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '11grac1'
CRS-2676: Start of 'ora.gpnpd' on '11grac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '11grac1'
CRS-2672: Attempting to start 'ora.gipcd' on '11grac1'
CRS-2676: Start of 'ora.gipcd' on '11grac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on '11grac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '11grac1'
CRS-2672: Attempting to start 'ora.diskmon' on '11grac1'
CRS-2676: Start of 'ora.diskmon' on '11grac1' succeeded
CRS-2676: Start of 'ora.cssd' on '11grac1' succeeded

ASM created and started successfully.

Disk Group OVDF created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 1affe9a19f2a4f34bfdd54c179f54ae3.
Successfully replaced voting disk group with +OVDF.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   1affe9a19f2a4f34bfdd54c179f54ae3 (/dev/asm-diskg) [OVDF]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.OVDF.dg' on '11grac1'
CRS-2676: Start of 'ora.OVDF.dg' on '11grac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@11grac2.localdomain:/root]$ /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node 11grac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

-- When both root scripts have finished, click "OK" so the installer completes the remaining grid configuration

-- Check the cluster status

[grid@11grac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    11grac1     
ora....C1.lsnr application    ONLINE    ONLINE    11grac1     
ora....ac1.gsd application    OFFLINE   OFFLINE               
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip ora....t1.type ONLINE    ONLINE    11grac1     
ora....SM2.asm application    ONLINE    ONLINE    11grac2     
ora....C2.lsnr application    ONLINE    ONLINE    11grac2     
ora....ac2.gsd application    OFFLINE   OFFLINE               
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip ora....t1.type ONLINE    ONLINE    11grac2     
ora....ER.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N1.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora....N2.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N3.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora.OVDF.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora.asm        ora.asm.type   ONLINE    ONLINE    11grac1     
ora.cvu        ora.cvu.type   ONLINE    ONLINE    11grac1     
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    11grac1     
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    11grac1     
ora.ons        ora.ons.type   ONLINE    ONLINE    11grac1     
ora....ry.acfs ora....fs.type ONLINE    ONLINE    11grac1     
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    11grac2     
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    11grac1     
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    11grac1     
[grid@11grac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.OVDF.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.asm
               ONLINE  ONLINE       11grac1                  Started             
               ONLINE  ONLINE       11grac2                  Started             
ora.gsd
               OFFLINE OFFLINE      11grac1                                      
               OFFLINE OFFLINE      11grac2                                      
ora.net1.network
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.ons
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.registry.acfs
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.11grac1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.11grac2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.cvu
      1        ONLINE  ONLINE       11grac1                                      
ora.oc4j
      1        ONLINE  ONLINE       11grac1                                      
ora.scan1.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.scan2.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan3.vip
      1        ONLINE  ONLINE       11grac1

九、Migrate the 11gR1 RAC disk groups under 11gR2 Grid management
Before this step, the ownership and permissions of the disks backing the old 11gR1 DATA and FRA disk groups must be changed (11gR1 had no grid user) by editing the udev rules file.
[root@11grac1.localdomain:/root]$ ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18  2015 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18  2015 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:16 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 oracle oinstall 8, 80 May 18 08:15 /dev/asm-diskf
brw-rw---- 1 grid   asmadmin 8, 96 May 18 11:12 /dev/asm-diskg
[root@11grac2.localdomain:/root]$ ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18 08:16 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18 08:16 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:17 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 oracle oinstall 8, 80 May 18 08:15 /dev/asm-diskf
brw-rw---- 1 grid   asmadmin 8, 96 May 18 11:13 /dev/asm-diskg

[root@11grac1.localdomain:/root]$ cat /etc/udev/rules.d/99-oracle-asmdevices.rules 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="00660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa7feea0a-164d1478_", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@11grac2.localdomain:/root]$ cat /etc/udev/rules.d/99-oracle-asmdevices.rules 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa7feea0a-164d1478_", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@11grac1.localdomain:/root]$ start_udev 
Starting udev:                                             [  OK  ]
[root@11grac1.localdomain:/root]$ ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18  2015 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18  2015 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:16 /dev/asm-diskd
brw-rw---- 1 grid   asmadmin 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 grid   asmadmin 8, 80 May 18 08:15 /dev/asm-diskf
brw-rw---- 1 grid   asmadmin 8, 96 May 18 11:13 /dev/asm-diskg
[root@11grac2.localdomain:/root]$ start_udev
Starting udev:                                             [  OK  ]
[root@11grac2.localdomain:/root]$ ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18 08:16 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18 08:16 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:17 /dev/asm-diskd
brw-rw---- 1 grid   asmadmin 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 grid   asmadmin 8, 80 May 18 08:15 /dev/asm-diskf
brw-rw---- 1 grid   asmadmin 8, 96 May 18 11:13 /dev/asm-diskg

As the grid user, launch /u01/app/11.2.0/grid/bin/asmca and use the ASMCA GUI to bring the two ASM disk groups from the old 11gR1 RAC under the ASM instance managed by the 11gR2 grid software.
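
If the GUI is inconvenient, the same result should be achievable from SQL*Plus on the new ASM instance; a sketch (in 11gR2, mounting a disk group registers its ora.*.dg resource, and srvctl can then mount it on the other node):

sqlplus / as sysasm
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup RFA mount;
srvctl start diskgroup -g data -n 11grac2
srvctl start diskgroup -g rfa -n 11grac2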

SQL> select name,state,total_mb,free_mb from v$asm_diskgroup;

NAME			       STATE	     TOTAL_MB	 FREE_MB
------------------------------ ----------- ---------- ----------
OVDF			       MOUNTED		 8192	    7796
RFA			       MOUNTED		 2048	    1706
DATA			       MOUNTED		 5120	    3121

SQL> select name,state,total_mb,free_mb,path from v$asm_disk;

NAME	   STATE      TOTAL_MB	  FREE_MB PATH
---------- -------- ---------- ---------- ------------------------------
	   NORMAL	     0		0 /dev/asm-diskd
	   NORMAL	     0		0 /dev/asm-diskc
	   NORMAL	     0		0 /dev/asm-diskb
OVDF_0000  NORMAL	  8192	     7796 /dev/asm-diskg
RFA_0000   NORMAL	  2048	     1706 /dev/asm-diskf
DATA_0000  NORMAL	  5120	     3121 /dev/asm-diske
The first three rows in V$ASM_DISK with an empty NAME are the disks that held the 11gR1 RAC OCR and voting disk; they can be ignored.
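
To double-check that those unnamed rows really are the old 11gR1 OCR/voting devices rather than stray candidate disks, the header and mount status columns are informative:

SQL> select path, header_status, mount_status from v$asm_disk;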

[grid@11grac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.OVDF.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.RFA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.asm
               ONLINE  ONLINE       11grac1                  Started             
               ONLINE  ONLINE       11grac2                  Started             
ora.gsd
               OFFLINE OFFLINE      11grac1                                      
               OFFLINE OFFLINE      11grac2                                      
ora.net1.network
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.ons
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.registry.acfs
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.11grac1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.11grac2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.cvu
      1        ONLINE  ONLINE       11grac1                                      
ora.oc4j
      1        ONLINE  ONLINE       11grac1                                      
ora.scan1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan2.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan3.vip
      1        ONLINE  ONLINE       11grac2

十、Install the 11gR2 oracle software
[root@11grac1.localdomain:/tmp]$ unzip p13390677_112040_Linux-x86-64_1of7.zip
[root@11grac1.localdomain:/tmp]$ unzip p13390677_112040_Linux-x86-64_2of7.zip
[root@11grac1.localdomain:/root]$ su - oracle
[oracle@11grac1.localdomain:/home/oracle]$ cd /tmp/database/
[oracle@11grac1.localdomain:/tmp/database]$ ls -l
total 60
drwxr-xr-x  4 root root  4096 Aug 27  2013 install
-rw-r--r--  1 root root 30016 Aug 27  2013 readme.html
drwxr-xr-x  2 root root  4096 Aug 27  2013 response
drwxr-xr-x  2 root root  4096 Aug 27  2013 rpm
-rwxr-xr-x  1 root root  3267 Aug 27  2013 runInstaller
drwxr-xr-x  2 root root  4096 Aug 27  2013 sshsetup
drwxr-xr-x 14 root root  4096 Aug 27  2013 stage
-rw-r--r--  1 root root   500 Aug 27  2013 welcome.html
[oracle@11grac1.localdomain:/tmp/database]$ export DISPLAY=192.168.56.1:0.0
[oracle@11grac1.localdomain:/tmp/database]$ ./runInstaller 

-- During the RAC database software install, the OUI could not see the cluster nodes; the fix is as follows:

[oracle@11grac1.localdomain:/home/oracle]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraCrs11g_home" LOC="/u01/app/crs/11.1.0/crshome_1" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="11grac1"/>
      <NODE NAME="11grac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="11grac1"/>
      <NODE NAME="11grac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="11grac1"/>
      <NODE NAME="11grac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>

[oracle@11grac2.localdomain:/home/oracle]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraCrs11g_home" LOC="/u01/app/crs/11.1.0/crshome_1" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="11grac1"/>
      <NODE NAME="11grac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="11grac1"/>
      <NODE NAME="11grac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="3" CRS="true">
   <NODE_LIST>
      <NODE NAME="11grac1"/>
      <NODE NAME="11grac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
-- On both nodes, delete the highlighted portion above (the stale OraCrs11g_home clusterware entry marked CRS="true"), then rerun the installer

[root@11grac1.localdomain:/root]$ /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

[root@11grac2.localdomain:/root]$ /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

十一、Upgrade the 11gR1 RAC to 11gR2
1) Copy the 11gR1 RAC initialization parameter files, password files, and network configuration files into the corresponding 11gR2 directories
[oracle@11grac1.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/network/admin]$ cp tnsnames.ora /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/
[oracle@11grac2.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/network/admin]$ cp tnsnames.ora /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/
[oracle@11grac1.localdomain:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin]$ ll
total 12
drwxr-xr-x 2 oracle oinstall 4096 May 18 12:20 samples
-rw-r--r-- 1 oracle oinstall  381 Dec 17  2012 shrept.lst
-rw-r----- 1 oracle oinstall 1195 May 18 13:03 tnsnames.ora
[oracle@11grac2.localdomain:/u01/app/oracle/product/11.2.0/dbhome_1/network/admin]$ ll
total 12
drwxr-xr-x 2 oracle oinstall 4096 May 18 12:46 samples
-rw-r--r-- 1 oracle oinstall  381 May 18 12:46 shrept.lst
-rw-r----- 1 oracle oinstall 1195 May 18 13:04 tnsnames.ora

[oracle@11grac1.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/dbs]$ cp initrac11g1.ora /u01/app/oracle/product/11.2.0/dbhome_1/dbs/
[oracle@11grac1.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/dbs]$ cp orapwrac11g1 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/
[oracle@11grac2.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/dbs]$ cp initrac11g2.ora /u01/app/oracle/product/11.2.0/dbhome_1/dbs/
[oracle@11grac2.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/dbs]$ cp orapwrac11g2 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/
[oracle@11grac1.localdomain:/u01/app/oracle/product/11.2.0/dbhome_1/dbs]$ ll
total 12
-rw-r--r-- 1 oracle oinstall 2851 May 15  2009 init.ora
-rw-r----- 1 oracle oinstall   39 May 18 13:09 initrac11g1.ora
-rw-r----- 1 oracle oinstall 1536 May 18 13:09 orapwrac11g1
[oracle@11grac2.localdomain:/u01/app/oracle/product/11.2.0/dbhome_1/dbs]$ ll
total 12
-rw-r--r-- 1 oracle oinstall 2851 May 18 12:36 init.ora
-rw-r----- 1 oracle oinstall   39 May 18 13:10 initrac11g2.ora
-rw-r----- 1 oracle oinstall 1536 May 18 13:10 orapwrac11g2

2) Update the oracle user's environment variables
[oracle@11grac1.localdomain:/home/oracle]$ cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export PS1=[$LOGNAME@`hostname`:'$PWD]$ '

export ORACLE_UNQNAME=rac11g
export ORACLE_SID=rac11g1
export ORACLE_BASE=/u01/app
#export CRS_HOME=$ORACLE_BASE/crs/11.1.0/crshome_1
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/share/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=$ORACLE_HOME/bin:$PATH

[oracle@11grac2.localdomain:/home/oracle]$ cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export PS1=[$LOGNAME@`hostname`:'$PWD]$ '

export ORACLE_UNQNAME=rac11g
export ORACLE_SID=rac11g2
export ORACLE_BASE=/u01/app
#export CRS_HOME=$ORACLE_BASE/crs/11.1.0/crshome_1
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/share/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=$ORACLE_HOME/bin:$PATH

[oracle@11grac1.localdomain:/home/oracle]$ source .bash_profile
[oracle@11grac2.localdomain:/home/oracle]$ source .bash_profile

3) Change the cluster_database initialization parameter
[oracle@11grac1.localdomain:/home/oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon May 18 13:44:24 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create pfile='/tmp/pfile.ora' from spfile='+DATA/rac11g/spfilerac11g.ora';

File created.

SQL> exit
Disconnected
[oracle@11grac1.localdomain:/home/oracle]$ cd /tmp/
[oracle@11grac1.localdomain:/tmp]$ vi pfile.ora
-- Change the cluster_database parameter to false

[oracle@11grac1.localdomain:/tmp]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon May 18 13:46:52 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile='+DATA/rac11g/spfilerac11g.ora' from pfile='/tmp/pfile.ora';

File created.

4) Run the pre-upgrade check script
Note that the pre-upgrade check is meant to run against the database started with the original 11gR1 software. The check script for upgrading a lower-version database to 11.2.0.4 is $ORACLE_HOME/rdbms/admin/utlu112i.sql; see MOS Doc ID 837570.1 and Note 884522.1.
Start the database in UPGRADE mode under the original 11gR1 software, with the environment variables pointing at the original 11gR1 database HOME and with cluster_database=false set in the 11gR1 parameter file as well. (If the source were a 10g database, note that 11gR2 records a hidden parameter "__oracle_base" that 10g does not support; create a pfile and comment that parameter out first.)
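For reference, the pre-upgrade tool is normally run as SYS against the still-running source database with output spooled for review; a sketch, taking the script from the new 11.2 home:

SQL> spool /tmp/utlu112i.log
SQL> @/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/utlu112i.sql
SQL> spool off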
-- Start the 11gR1 database with the modified parameter file below
[oracle@11grac1.localdomain:/tmp]$ cat pfile.ora 
rac11g1.__db_cache_size=251658240
rac11g2.__db_cache_size=268435456
rac11g1.__java_pool_size=16777216
rac11g2.__java_pool_size=4194304
rac11g1.__large_pool_size=4194304
rac11g2.__large_pool_size=4194304
rac11g1.__oracle_base='/u01/app'#ORACLE_BASE set from environment
rac11g2.__oracle_base='/u01/app'#ORACLE_BASE set from environment
rac11g1.__pga_aggregate_target=348127232
rac11g2.__pga_aggregate_target=348127232
rac11g1.__sga_target=494927872
rac11g2.__sga_target=494927872
rac11g1.__shared_io_pool_size=0
rac11g2.__shared_io_pool_size=0
rac11g1.__shared_pool_size=213909504
rac11g2.__shared_pool_size=209715200
rac11g1.__streams_pool_size=0
rac11g2.__streams_pool_size=0
*.audit_file_dest='/u01/app/admin/rac11g/adump'
*.audit_trail='db'
*.cluster_database_instances=2
*.cluster_database=false
*.compatible='11.1.0.0.0'
*.control_files='+DATA/rac11g/controlfile/current.260.879941791','+RFA/rac11g/controlfile/current.256.879941793'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='rac11g'
*.db_recovery_file_dest='+RFA'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='/u01/app'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rac11gXDB)'
rac11g1.instance_number=1
rac11g2.instance_number=2
rac11g2.local_listener='LISTENER_RAC11G2'
rac11g1.local_listener='LISTENER_RAC11G1'
*.log_archive_dest_1='LOCATION=+DATA/'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=839909376
*.open_cursors=300
*.processes=150
*.remote_listener='LISTENERS_RAC11G'
*.remote_login_passwordfile='exclusive'
rac11g2.thread=2
rac11g1.thread=1
rac11g1.undo_tablespace='UNDOTBS1'
rac11g2.undo_tablespace='UNDOTBS2'

[oracle@11grac1.localdomain:/home/oracle]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.1.0/dbhome_1
[oracle@11grac1.localdomain:/home/oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Mon May 18 15:27:18 2015

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup upgrade pfile='/tmp/pfile.ora';
ORA-29702: error occurred in Cluster Group Service operation

-- The alert log shows the following
[oracle@11grac1.localdomain:/home/oracle]$ cd /u01/app/diag/rdbms/rac11g/rac11g1/trace/
[oracle@11grac1.localdomain:/u01/app/diag/rdbms/rac11g/rac11g1/trace]$ tail -20f alert_rac11g1.log 
CKPT started with pid=16, OS id=11345 
Mon May 18 15:27:33 2015
SMON started with pid=17, OS id=11349 
Mon May 18 15:27:33 2015
RECO started with pid=18, OS id=11353 
Mon May 18 15:27:33 2015
RBAL started with pid=19, OS id=11357 
Mon May 18 15:27:33 2015
ASMB started with pid=20, OS id=11361 
Errors in file /u01/app/diag/rdbms/rac11g/rac11g1/trace/rac11g1_asmb_11361.trc:
ORA-15077: could not locate ASM instance serving a required diskgroup
ORA-29701: unable to connect to Cluster Manager
Mon May 18 15:27:34 2015
MMON started with pid=21, OS id=11365 
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
Mon May 18 15:27:34 2015
MMNL started with pid=20, OS id=11369 
starting up 1 shared server(s) ...
USER (ospid: 11213): terminating the instance due to error 29702
Instance terminated by USER, pid = 11213
The plan was to start the database with the original 11gR1 software and run the pre-upgrade check, but it fails because of ASM: the old 11gR1 ASM disk groups are now managed by 11gR2 grid. Rolling back to the 11gR1 RAC stack, starting the database in UPGRADE mode, and then running the 11gR2 $ORACLE_HOME/rdbms/admin/catupgrd.sql was also tested and does not succeed, so this step is bypassed and the upgrade script catupgrd.sql is run directly.

-- Skipping the pre-upgrade check script and upgrading directly produces the following errors:
[oracle@11grac1.localdomain:/home/oracle]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@11grac1.localdomain:/home/oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue May 19 00:42:05 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area  626327552 bytes
Fixed Size    2255832 bytes
Variable Size  230687784 bytes
Database Buffers  390070272 bytes
Redo Buffers    3313664 bytes
Database mounted.
Database opened.
SQL> @?/rdbms/admin/catupgrd.sql
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The first time this script is run, there should be no error messages
DOC>   generated; all normal upgrade error messages are suppressed.
DOC>
DOC>   If this script is being re-run after correcting some problem, then
DOC>   expect the following error which is not automatically suppressed:
DOC>
DOC>   ORA-00001: unique constraint (<constraint_name>) violated
DOC>  possibly in conjunction with
DOC>   ORA-06512: at "<procedure/function name>", line NN
DOC>
DOC>   These errors will automatically be suppressed by the Database Upgrade
DOC>   Assistant (DBUA) when it re-runs an upgrade.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#
DOC>######################################################################
DOC>######################################################################
DOC> The following statement will cause an "ORA-01722: invalid number"
DOC> error if the user running this script is not SYS.  Disconnect
DOC> and reconnect with AS SYSDBA.
DOC>######################################################################
DOC>######################################################################
DOC>#

no rows selected

DOC>######################################################################
DOC>######################################################################
DOC> The following statement will cause an "ORA-01722: invalid number"
DOC> error if the database server version is not correct for this script.
DOC> Perform "ALTER SYSTEM CHECKPOINT" prior to "SHUTDOWN ABORT", and use
DOC> a different script or a different server.
DOC>######################################################################
DOC>######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if the database has not been opened for UPGRADE.
DOC>
DOC>   Perform "ALTER SYSTEM CHECKPOINT" prior to "SHUTDOWN ABORT",  and
DOC>   restart using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC> The following statement will cause an "ORA-01722: invalid number"
DOC> error if the Oracle Database Vault option is TRUE.  Upgrades cannot
DOC> be run with the Oracle Database Vault option set to TRUE since
DOC> AS SYSDBA connections are restricted.
DOC>
DOC> Perform "ALTER SYSTEM CHECKPOINT" prior to "SHUTDOWN ABORT", relink
DOC> the server without the Database Vault option, and restart the server
DOC> using UPGRADE mode.
DOC>
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if Database Vault is installed in the database but the Oracle
DOC>   Label Security option is FALSE. To successfully upgrade Oracle
DOC>   Database Vault, the Oracle Label Security option must be TRUE.
DOC>
DOC>   Perform "ALTER SYSTEM CHECKPOINT" prior to "SHUTDOWN ABORT",
DOC>   relink the server with the OLS option (but without the Oracle Database
DOC>   Vault option) and restart the server using UPGRADE.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following statement will cause an "ORA-01722: invalid number"
DOC>   error if bootstrap migration is in progress and logminer clients
DOC>   require utlmmig.sql to be run next to support this redo stream.
DOC>
DOC>   Run utlmmig.sql
DOC>   then (if needed)
DOC>   restart the database using UPGRADE and
DOC>   rerun the upgrade script.
DOC>#######################################################################
DOC>#######################################################################
DOC>#

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following error is generated if the pre-upgrade tool has not been
DOC>   run in the old ORACLE_HOME home prior to upgrading a pre-11.2 database:
DOC>
DOC>   SELECT TO_NUMBER('MUST_HAVE_RUN_PRE-UPGRADE_TOOL_FOR_TIMEZONE')
DOC>   *
DOC>  ERROR at line 1:
DOC>  ORA-01722: invalid number
DOC>
DOC> o Action:
DOC>   Shutdown database ("alter system checkpoint" and then "shutdown abort").
DOC>   Revert to the original oracle home and start the database.
DOC>   Run pre-upgrade tool against the database.
DOC>   Review and take appropriate actions based on the pre-upgrade
DOC>   output before opening the datatabase in the new software version.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#

Session altered.

Table created.

Table altered.

no rows selected

DOC>#######################################################################
DOC>#######################################################################
DOC>   The following error is generated if the pre-upgrade tool has not been
DOC>   run in the old oracle home prior to upgrading a pre-11.2 database:
DOC>
DOC>  SELECT TO_NUMBER('MUST_BE_SAME_TIMEZONE_FILE_VERSION')
DOC>   *
DOC>  ERROR at line 1:
DOC>  ORA-01722: invalid number
DOC>
DOC>
DOC> o Action:
DOC>   Shutdown database ("alter system checkpoint" and then "shutdown abort").
DOC>   Revert to the original ORACLE_HOME and start the database.
DOC>   Run pre-upgrade tool against the database.
DOC>   Review and take appropriate actions based on the pre-upgrade
DOC>   output before opening the datatabase in the new software version.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#
SELECT TO_NUMBER('MUST_BE_SAME_TIMEZONE_FILE_VERSION')
                 *
ERROR at line 1:
ORA-01722: invalid number

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

The cause looks fairly clear from the error itself: the run fails at SELECT TO_NUMBER('MUST_BE_SAME_TIMEZONE_FILE_VERSION'), and the DOC text above already spells out the remedy: Revert to the original oracle home and start the database. Run pre-upgrade tool against the database.
Searching MOS for this error turns up Master Note : ORA-1722 Errors during Upgrade (Doc ID 1466464.1), which suggests the following fix:
1) Check whether the table sys.registry$database exists; if it does not, create it manually:
CREATE TABLE registry$database( 
            platform_id   NUMBER,       
            platform_name VARCHAR2(101),
            edition       VARCHAR2(30), 
            tz_version    NUMBER        
            );
2) If the table exists, check that its contents are correct, and fix them if they are not:
INSERT into registry$database 
                    (platform_id, platform_name, edition, tz_version) 
               VALUES ((select platform_id from v$database),
                       (select platform_name from v$database),
                        NULL,
                       (select version from v$timezone_file));

-- Check and adjust following the method above
SQL> select * from sys.registry$database;

PLATFORM_ID PLATFORM_NAME                  EDITION                        TZ_VERSION
----------- ------------------------------ ------------------------------ ----------
         13 Linux x86 64-bit                                                       4

SQL> select version from v$timezone_file;

   VERSION
----------
        14

SQL> select platform_id from v$database;

PLATFORM_ID
-----------
         13

SQL> select platform_name from v$database;

PLATFORM_NAME
------------------------------
Linux x86 64-bit

SQL> update sys.registry$database set TZ_VERSION=14 where PLATFORM_ID=13;

1 row updated.

SQL> commit;

Commit complete.

5) With sys.registry$database corrected, rerunning the upgrade script catupgrd.sql directly now succeeds:
SQL> @?/rdbms/admin/catupgrd.sql
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The first time this script is run, there should be no error messages
DOC>   generated; all normal upgrade error messages are suppressed.
DOC>
DOC>   If this script is being re-run after correcting some problem, then
DOC>   expect the following error which is not automatically suppressed:
DOC>
DOC>   ORA-00001: unique constraint (<constraint_name>) violated
DOC>  possibly in conjunction with
DOC>   ORA-06512: at "<procedure/function name>", line NN
DOC>
DOC>   These errors will automatically be suppressed by the Database Upgrade
DOC>   Assistant (DBUA) when it re-runs an upgrade.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#
DOC>######################################################################
DOC>######################################################################
DOC> The following statement will cause an "ORA-01722: invalid number"
DOC> error if the user running this script is not SYS.  Disconnect
DOC> and reconnect with AS SYSDBA.
DOC>######################################################################
DOC>######################################################################

............................ output omitted ............................

Oracle Database 11.2 Post-Upgrade Status Tool           05-19-2015 00:30:30
.
Component                               Current      Version     Elapsed Time
Name                                    Status       Number      HH:MM:SS
.
Oracle Server
.                                         VALID      11.2.0.4.0  00:35:25
JServer JAVA Virtual Machine
.                                         VALID      11.2.0.4.0  00:23:43
Oracle Real Application Clusters
.                                         VALID      11.2.0.4.0  00:00:03
Oracle Workspace Manager
.                                         VALID      11.2.0.4.0  00:02:05
OLAP Analytic Workspace
.                                         VALID      11.2.0.4.0  00:02:01
OLAP Catalog
.                                         VALID      11.2.0.4.0  00:01:32
Oracle OLAP API
.                                         VALID      11.2.0.4.0  00:02:33
Oracle Enterprise Manager
.                                         VALID      11.2.0.4.0  00:11:27
Oracle XDK
.                                         VALID      11.2.0.4.0  00:02:38
Oracle Text
.                                         VALID      11.2.0.4.0  00:02:03
Oracle XML Database
.                                         VALID      11.2.0.4.0  00:07:24
Oracle Database Java Packages
.                                         VALID      11.2.0.4.0  00:00:57
Oracle Multimedia
.                                         VALID      11.2.0.4.0  00:12:50
Spatial
.                                         VALID      11.2.0.4.0  00:18:06
Oracle Expression Filter
.                                         VALID      11.2.0.4.0  00:00:40
Oracle Rules Manager
.                                         VALID      11.2.0.4.0  05:51:54
Oracle Application Express
.                                         VALID     3.2.1.00.12  00:35:14
Final Actions
.                                                                00:04:26
Total Upgrade Time: 08:36:32

PL/SQL procedure successfully completed.

SQL> 
SQL> SET SERVEROUTPUT OFF
SQL> SET VERIFY ON
SQL> commit;

Commit complete.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> 
SQL> 
SQL> 
SQL> DOC
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The above sql script is the final step of the upgrade. Please
DOC>   review any errors in the spool log file. If there are any errors in
DOC>   the spool file, consult the Oracle Database Upgrade Guide for
DOC>   troubleshooting recommendations.
DOC>
DOC>   Next restart for normal operation, and then run utlrp.sql to
DOC>   recompile any invalid application objects.
DOC>
DOC>   If the source database had an older time zone version prior to
DOC>   upgrade, then please run the DBMS_DST package.  DBMS_DST will upgrade
DOC>   TIMESTAMP WITH TIME ZONE data to use the latest time zone file shipped
DOC>   with Oracle.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#
SQL> 
SQL> Rem Set errorlogging off
SQL> SET ERRORLOGGING OFF;
SQL> 
SQL> REM END OF CATUPGRD.SQL
SQL> 
SQL> REM bug 12337546 – Exit current sqlplus session at end of catupgrd.sql.
SQL> REM                This forces user to start a new sqlplus session in order
SQL> REM                to connect to the upgraded db.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-- While the script runs, keep an eye on the alert log and on archive-log space; after it finishes, review the logs for any errors.
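One convenient way to triage a long catupgrd.sql run afterwards is to spool the session and grep the spool file for ORA- errors. A sketch, with /tmp/catupgrd.log as a hypothetical spool destination:

# Summarize distinct ORA- errors in the spool file (the path is hypothetical)
grep -oE 'ORA-[0-9]+' /tmp/catupgrd.log | sort | uniq -c | sort -rn
# Per the DOC banner above, ORA-00001 and ORA-06512 are expected on re-runs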

6) Start the database normally and run the catuppst.sql script
[oracle@11grac1.localdomain:/home/oracle]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@11grac1.localdomain:/home/oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue May 19 00:52:31 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  839282688 bytes
Fixed Size    2257880 bytes
Variable Size  759172136 bytes
Database Buffers   75497472 bytes
Redo Buffers    2355200 bytes
Database mounted.
Database opened.
SQL> @?/rdbms/admin/catuppst.sql
............................ output omitted ............................
SQL> SET echo off
Check the following log file for errors:
/u01/app/cfgtoollogs/catbundle/catbundle_PSU_RAC11G_APPLY_2015May19_00_57_34.log
-- Check the log file named above for errors, as instructed

7) Recompile invalid objects with utlrp.sql
SQL> @?/rdbms/admin/utlrp.sql
............................ output omitted ............................
ERRORS DURING RECOMPILATION
---------------------------
                          0

Function created.

PL/SQL procedure successfully completed.

Function dropped.

PL/SQL procedure successfully completed.
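After utlrp.sql finishes, it is worth confirming that the registry components are all VALID and that no invalid objects remain; a quick check:

SQL> SELECT comp_id, version, status FROM dba_registry ORDER BY comp_id;
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
-- every component should be VALID (or OPTION OFF) and the count should be 0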

8) Make sure the listener now runs out of the 11gR2 Grid home
[oracle@11grac1.localdomain:/home/oracle]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 19-MAY-2015 01:39:38

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date                19-MAY-2015 00:49:33
Uptime                    0 days 0 hr. 50 min. 6 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/11grac1/listener/alert/log.xml
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.111)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.112)(PORT=1521)))
Services Summary…
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "rac11g" has 1 instance(s).
  Instance "rac11g1", status READY, has 2 handler(s) for this service...
Service "rac11gXDB" has 1 instance(s).
  Instance "rac11g1", status READY, has 1 handler(s) for this service...
The command completed successfully
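The same can be cross-checked from the clusterware side, since srvctl reports which home a listener resource runs from; a sketch (output omitted):

[grid@11grac1 ~]$ srvctl config listener
[grid@11grac1 ~]$ srvctl status listener
-- the reported home should be /u01/app/11.2.0/grid on both nodes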

9) Check /etc/oratab on both nodes after the upgrade
[oracle@11grac1.localdomain:/home/oracle]$ tail -f /etc/oratab 
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should, "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM1:/u01/app/11.2.0/grid:N # line added by Agent
rac11g:/u01/app/oracle/product/11.2.0/dbhome_1:N # line added by Agent

[oracle@11grac2.localdomain:/home/oracle]$ tail -f /etc/oratab 
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should, "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM2:/u01/app/11.2.0/grid:N # line added by Agent
rac11g:/u01/app/oracle/product/11.2.0/dbhome_1:N # line added by Agent
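The Agent entries use the database name rac11g, so `. oraenv` will not resolve the local instance SID directly. If that convenience is wanted, an instance-specific entry can be appended on each node; a sketch with hypothetical entries:

# On node 1:
echo "rac11g1:/u01/app/oracle/product/11.2.0/dbhome_1:N" >> /etc/oratab
# On node 2:
echo "rac11g2:/u01/app/oracle/product/11.2.0/dbhome_1:N" >> /etc/oratab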

10) Set cluster_database back to true
SQL> show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
cluster_interconnects                string
SQL> alter system set cluster_database=true scope=spfile;

System altered.

SQL> shutdown immediate;
SQL> startup;
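Once both instances are up again, the setting can be verified cluster-wide in a single query; a sketch:

SQL> select inst_id, value from gv$parameter where name = 'cluster_database';
-- both instances should report TRUE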

11) Check the current cluster resource and service status

[grid@11grac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    11grac1     
ora....C1.lsnr application    ONLINE    ONLINE    11grac1     
ora....ac1.gsd application    OFFLINE   OFFLINE               
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip ora....t1.type ONLINE    ONLINE    11grac1     
ora....SM2.asm application    ONLINE    ONLINE    11grac2     
ora....C2.lsnr application    ONLINE    ONLINE    11grac2     
ora....ac2.gsd application    OFFLINE   OFFLINE               
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip ora....t1.type ONLINE    ONLINE    11grac2     
ora.DATA.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora....ER.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N1.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N2.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora....N3.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora.OVDF.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora.RFA.dg     ora....up.type ONLINE    ONLINE    11grac1     
ora.asm        ora.asm.type   ONLINE    ONLINE    11grac1     
ora.cvu        ora.cvu.type   ONLINE    ONLINE    11grac2     
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    11grac1     
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    11grac2     
ora.ons        ora.ons.type   ONLINE    ONLINE    11grac1     
ora....ry.acfs ora....fs.type ONLINE    ONLINE    11grac1     
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    11grac1     
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    11grac2     
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    11grac2
[grid@11grac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.OVDF.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.RFA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.asm
               ONLINE  ONLINE       11grac1                  Started             
               ONLINE  ONLINE       11grac2                  Started             
ora.gsd
               OFFLINE OFFLINE      11grac1                                      
               OFFLINE OFFLINE      11grac2                                      
ora.net1.network
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.ons
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.registry.acfs
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.11grac1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.11grac2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.cvu
      1        ONLINE  ONLINE       11grac1                                      
ora.oc4j
      1        ONLINE  ONLINE       11grac1                                      
ora.scan1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan2.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan3.vip
      1        ONLINE  ONLINE       11grac2

XII. Re-register the database, instances, and services under Grid Infrastructure
1) Add the database
[oracle@11grac1.localdomain:/home/oracle]$ srvctl add database -d rac11g -o /u01/app/oracle/product/11.2.0/dbhome_1 -c RAC -p +DATA/rac11g/spfilerac11g.ora -y AUTOMATIC  -a DATA,RFA
2) Check the database configuration
[oracle@11grac1.localdomain:/home/oracle]$ srvctl config database -d rac11g
Database unique name: rac11g
Database name: 
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/rac11g/spfilerac11g.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: rac11g
Database instances: 
Disk Groups: DATA,RFA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
3) Add the instances
[oracle@11grac1.localdomain:/home/oracle]$ srvctl add instance -d rac11g -i rac11g1 -n 11grac1
[oracle@11grac1.localdomain:/home/oracle]$ srvctl add instance -d rac11g -i rac11g2 -n 11grac2
4) Add the service
[oracle@11grac1.localdomain:/home/oracle]$ srvctl add service -d rac11g -s rac11gsrv -r rac11g1 -a rac11g2 -P PRECONNECT
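Before starting anything, it is worth confirming how the service was registered; a sketch (output omitted):

[oracle@11grac1.localdomain:/home/oracle]$ srvctl config service -d rac11g -s rac11gsrv
[oracle@11grac1.localdomain:/home/oracle]$ srvctl status service -d rac11g
-- rac11g1 should show as the preferred instance and rac11g2 as available (PRECONNECT TAF)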
5) Check resource and service status

[grid@11grac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    11grac1     
ora....C1.lsnr application    ONLINE    ONLINE    11grac1     
ora....ac1.gsd application    OFFLINE   OFFLINE               
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip ora....t1.type ONLINE    ONLINE    11grac1     
ora....SM2.asm application    ONLINE    ONLINE    11grac2     
ora....C2.lsnr application    ONLINE    ONLINE    11grac2     
ora....ac2.gsd application    OFFLINE   OFFLINE               
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip ora....t1.type ONLINE    ONLINE    11grac2     
ora.DATA.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora....ER.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N1.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N2.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora....N3.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora.OVDF.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora.RFA.dg     ora....up.type ONLINE    ONLINE    11grac1     
ora.asm        ora.asm.type   ONLINE    ONLINE    11grac1     
ora.cvu        ora.cvu.type   ONLINE    ONLINE    11grac2     
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    11grac1     
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    11grac2     
ora.ons        ora.ons.type   ONLINE    ONLINE    11grac1     
ora.rac11g.db  ora....se.type OFFLINE   OFFLINE               
ora....srv.svc ora....ce.type OFFLINE   OFFLINE               
ora....ect.svc ora....ce.type OFFLINE   OFFLINE               
ora....ry.acfs ora....fs.type ONLINE    ONLINE    11grac1     
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    11grac1     
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    11grac2     
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    11grac2     
[grid@11grac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.OVDF.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.RFA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.asm
               ONLINE  ONLINE       11grac1                  Started             
               ONLINE  ONLINE       11grac2                  Started             
ora.gsd
               OFFLINE OFFLINE      11grac1                                      
               OFFLINE OFFLINE      11grac2                                      
ora.net1.network
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.ons
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.registry.acfs
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.11grac1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.11grac2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.cvu
      1        ONLINE  ONLINE       11grac2                                      
ora.oc4j
      1        ONLINE  ONLINE       11grac2                                      
ora.rac11g.db
      1        OFFLINE OFFLINE                                                   
      2        OFFLINE OFFLINE                                                   
ora.rac11g.rac11gsrv.svc
      1        OFFLINE OFFLINE                                                   
ora.rac11g.rac11gsrv_preconnect.svc
      1        OFFLINE OFFLINE                                                   
ora.scan1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.scan3.vip
      1        ONLINE  ONLINE       11grac2

6) Start the database
[grid@11grac1 ~]$ srvctl start database -d rac11g
7) Start the service
[grid@11grac1 ~]$ srvctl start service -d rac11g
8) Check resource and service status again

[grid@11grac1 ~]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    11grac1     
ora....C1.lsnr application    ONLINE    ONLINE    11grac1     
ora....ac1.gsd application    OFFLINE   OFFLINE               
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip ora....t1.type ONLINE    ONLINE    11grac1     
ora....SM2.asm application    ONLINE    ONLINE    11grac2     
ora....C2.lsnr application    ONLINE    ONLINE    11grac2     
ora....ac2.gsd application    OFFLINE   OFFLINE               
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip ora....t1.type ONLINE    ONLINE    11grac2     
ora.DATA.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora....ER.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N1.lsnr ora....er.type ONLINE    ONLINE    11grac1     
ora....N2.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora....N3.lsnr ora....er.type ONLINE    ONLINE    11grac2     
ora.OVDF.dg    ora....up.type ONLINE    ONLINE    11grac1     
ora.RFA.dg     ora....up.type ONLINE    ONLINE    11grac1     
ora.asm        ora.asm.type   ONLINE    ONLINE    11grac1     
ora.cvu        ora.cvu.type   ONLINE    ONLINE    11grac2     
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    11grac1     
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    11grac2     
ora.ons        ora.ons.type   ONLINE    ONLINE    11grac1     
ora.rac11g.db  ora....se.type ONLINE    ONLINE    11grac1     
ora....srv.svc ora....ce.type ONLINE    ONLINE    11grac1     
ora....ect.svc ora....ce.type OFFLINE   OFFLINE               
ora....ry.acfs ora....fs.type ONLINE    ONLINE    11grac1     
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    11grac1     
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    11grac2     
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    11grac2     
[grid@11grac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.LISTENER.lsnr
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.OVDF.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.RFA.dg
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.asm
               ONLINE  ONLINE       11grac1                  Started             
               ONLINE  ONLINE       11grac2                  Started             
ora.gsd
               OFFLINE OFFLINE      11grac1                                      
               OFFLINE OFFLINE      11grac2                                      
ora.net1.network
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.ons
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
ora.registry.acfs
               ONLINE  ONLINE       11grac1                                      
               ONLINE  ONLINE       11grac2                                      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.11grac1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.11grac2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       11grac1                                      
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       11grac2                                      
ora.cvu
      1        ONLINE  ONLINE       11grac2                                      
ora.oc4j
      1        ONLINE  ONLINE       11grac2                                      
ora.rac11g.db
      1        ONLINE  ONLINE       11grac1                  Open                
      2        ONLINE  ONLINE       11grac2                  Open                
ora.rac11g.rac11gsrv.svc
      1        ONLINE  ONLINE       11grac1                                      
ora.rac11g.rac11gsrv_preconnect.svc
      1        OFFLINE OFFLINE                                                   
ora.scan1.vip
      1        ONLINE  ONLINE       11grac1                                      
ora.scan2.vip
      1        ONLINE  ONLINE       11grac2                                      
ora.scan3.vip
      1        ONLINE  ONLINE       11grac2

9) Check the database status

[oracle@11grac1.localdomain:/home/oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue May 19 02:42:11 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name,status from gv$instance;

INSTANCE_NAME	 STATUS
---------------- ------------
rac11g1 	 OPEN
rac11g2 	 OPEN
10) Check the database version
SQL> select * from DBA_REGISTRY_HISTORY;

ACTION_TIME		       ACTION	       NAMESPACE  VERSION	   ID BUNDLE_SERIES   COMMENTS
------------------------------ --------------- ---------- ---------- -------- --------------- -------------------------
19-MAY-15 12.26.56.586864 AM   VIEW INVALIDATE			      8289601		      view invalidation
19-MAY-15 12.30.29.942568 AM   UPGRADE	       SERVER	  11.2.0.4.0			      Upgraded from 11.1.0.7.0
19-MAY-15 12.57.34.299763 AM   APPLY	       SERVER	  11.2.0.4	    0 PSU	      Patchset 11.2.0.2.0

11) Check the OPatch version and inventory
[grid@11grac1 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2015-05-19_02-50-56AM_1.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2015-05-19_02-50-56AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Grid Infrastructure 11g                                       11.2.0.4.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = 11grac1
  Remote node = 11grac2

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@11grac1.localdomain:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0/dbhome_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2015-05-19_02-50-27AM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-05-19_02-50-27AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = 11grac1
  Remote node = 11grac2

--------------------------------------------------------------------------------

OPatch succeeded.

[grid@11grac2 OPatch]$ ./opatch lspatches
There are no Interim patches installed in this Oracle Home.
[grid@11grac2 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2015-05-19_02-53-58AM_1.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2015-05-19_02-53-58AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Grid Infrastructure 11g                                       11.2.0.4.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = 11grac2
  Remote node = 11grac1

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@11grac2.localdomain:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0/dbhome_1/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2015-05-19_02-54-07AM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-05-19_02-54-07AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Database 11g                                                  11.2.0.4.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = 11grac2
  Remote node = 11grac1

--------------------------------------------------------------------------------

OPatch succeeded.
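Since both homes report no interim patches, a sensible follow-up would be to apply the latest 11.2.0.4 GI PSU, which patches the grid home and the database home in one pass. A sketch, assuming the PSU has been unzipped to a hypothetical /tmp/psu:

# Run as root; opatch auto detects and patches both the GI and database homes
/u01/app/11.2.0/grid/OPatch/opatch auto /tmp/psu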

XIII. Summary
1) Because the database could not be started from the original 11gR1 $ORACLE_HOME, the pre-upgrade check script was bypassed and the upgrade script was run directly. In a real production environment the documented procedure must be followed, which means running the pre-upgrade step before the 11gR1 RAC environment is backed up and removed; otherwise a later failure may force a rollback just to run it.
2) Using one of the RAC nodes as both the NTP server and the DNS server is not reasonable in production.
3) This document is a reference for the RAC installation and upgrade method only; it does not represent the steps of a real production rollout.

Installing 11.1.0.6 RAC on OEL 5.10 and Upgrading to 11.1.0.7

Reading the readme files that ship with the 11gR1 Clusterware and Database kits shows that installing 11gR1 RAC is essentially the same as installing and configuring 10gR2; there is no grid user or 11gR2-style Grid Infrastructure management model. This document therefore only briefly records the installation and upgrade process.

0. Environment
1. Virtualization software
VirtualBox 4.3.26
2. Operating system
OEL 5.10 Linux x86-64
3. Database software
11.1.0.6 Clusterware + Database
4. Upgrade patch set
11.1.0.7 patch set [p6890831_111070_Linux-x86-64.zip]
5. Storage layout
OCR on the two disks /dev/sdb and /dev/sdc
Voting disk on /dev/sdd
/dev/sde as the database data disk
/dev/sdf as the flash recovery area disk
6. RAC IP addresses and hostnames
#Public IP 
192.168.56.111 11grac1 
192.168.56.222 11grac2
#Private IP 
10.0.10.11 11grac1-priv 
10.0.10.22 11grac2-priv
#Virtual IP 
192.168.56.112 11grac1-vip 
192.168.56.223 11grac2-vip

I. Create the VMs, install the OS, and add the shared disks (only outlined here)
1. Give each VM at least two NICs, eth0/eth1

eth0 serves as the public NIC
eth1 serves as the private NIC
2. Add five shared disks (3 x 1GB, 1 x 5GB, 1 x 2GB)

1) Purpose of the five shared disks
OCRVOTE1/2 hold the OCR
OCRVOTE3 holds the voting disk
DATA is the data disk
ARCH is the flash recovery area disk
2) Creating the shared disks
a. With the OS installed, shut down both VMs, then add the five shared disks on node 1: open "Storage" in the node 1 VM settings and work through the dialog that appears (screenshots omitted).

Click "Create", then repeat the same procedure for the remaining four shared disks.

b. Change the five newly created disks to shareable mode.
From the VirtualBox main menu open the Virtual Media Manager, select each of the five disks in turn, right-click, choose "Shareable", and confirm.

c. Attach the five shared disks to the node 2 VM: open "Storage" in the node 2 VM settings and work through the same dialog.

Browse to and attach the five disks created on node 1, in the same order in which they were created; node 2 then sees the same disks as node 1.

With this done, start both VMs and proceed with the remaining steps.
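The same disk setup can be scripted with VBoxManage instead of the GUI. A sketch for the first OCR disk, assuming the VMs use a SATA controller named "SATA" and a hypothetical /vbox directory (shareable disks must be fixed-size, hence --variant Fixed):

# Create a fixed-size 1GB disk, mark it shareable, and attach it to both VMs
VBoxManage createhd --filename /vbox/OCRVOTE1.vdi --size 1024 --variant Fixed
VBoxManage modifyhd /vbox/OCRVOTE1.vdi --type shareable
VBoxManage storageattach 11grac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vbox/OCRVOTE1.vdi
VBoxManage storageattach 11grac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /vbox/OCRVOTE1.vdi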

III. Hostnames, IP addresses, and /etc/hosts (configuration shown without further commentary)
[root@11grac1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=11grac1.localdomain
NOZEROCONF=yes
[root@11grac2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=11grac2.localdomain
NOZEROCONF=yes
[root@11grac1 ~]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 08:00:27:5B:7E:27  
          inet addr:192.168.56.111  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2333789 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7799133 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1044868449 (996.4 MiB)  TX bytes:10558577602 (9.8 GiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:20:63:C0  
          inet addr:10.0.10.11  Bcast:10.0.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2064610 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1852436 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1204869395 (1.1 GiB)  TX bytes:977565503 (932.2 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:600944 errors:0 dropped:0 overruns:0 frame:0
          TX packets:600944 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:412793105 (393.6 MiB)  TX bytes:412793105 (393.6 MiB)
[root@11grac2 ~]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 08:00:27:9C:07:C2  
          inet addr:192.168.56.222  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7295240 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1149928 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:10203866831 (9.5 GiB)  TX bytes:86780191 (82.7 MiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:CA:4E:75  
          inet addr:10.0.10.22  Bcast:10.0.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1381921 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2018678 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:728991095 (695.2 MiB)  TX bytes:1199900523 (1.1 GiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:447067 errors:0 dropped:0 overruns:0 frame:0
          TX packets:447067 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:209613125 (199.9 MiB)  TX bytes:209613125 (199.9 MiB)
[root@11grac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

# Public IP
192.168.56.111 11grac1.localdomain 11grac1
192.168.56.222 11grac2.localdomain 11grac2

# Private IP
10.0.10.11 11grac1-priv.localdomain 11grac1-priv
10.0.10.22 11grac2-priv.localdomain 11grac2-priv

# Virtual IP
192.168.56.112 11grac1-vip.localdomain 11grac1-vip
192.168.56.223 11grac2-vip.localdomain 11grac2-vip

[root@11grac2 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

# Public IP
192.168.56.111 11grac1.localdomain 11grac1
192.168.56.222 11grac2.localdomain 11grac2

# Private IP
10.0.10.11  11grac1-priv.localdomain 11grac1-priv
10.0.10.22  11grac2-priv.localdomain 11grac2-priv

# Virtual IP
192.168.56.112 11grac1-vip.localdomain 11grac1-vip
192.168.56.223 11grac2-vip.localdomain 11grac2-vip

IV. Install the required packages
The oracle-validated package shipped with OEL can be installed directly with yum; installing it also creates the oracle user and the oinstall/dba groups. See: http://www.lynnlee.cn/?p=225

V. NTP time synchronization between the RAC nodes
See: http://www.lynnlee.cn/?p=969
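A quick sanity check that the nodes actually agree on time (a sketch; note that Oracle also recommends running ntpd with the -x slewing option on cluster nodes):

ntpq -p                  # peer list and offsets on the local node
date; ssh 11grac2 date   # the two clocks should agree to within a second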

VI. Configure and verify SSH equivalence for the oracle user
1) Set up the SSH equivalence
[oracle@11grac1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
10:75:22:03:21:35:c7:5b:ef:c6:3a:d9:5a:dd:6b:7b oracle@11grac1.localdomain
[oracle@11grac1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
ca:19:c2:55:b7:c9:2e:30:55:96:8c:be:da:35:06:a9 oracle@11grac1.localdomain

[oracle@11grac2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Created directory ‘/home/oracle/.ssh’.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
8e:4b:06:9e:7f:ce:8a:6b:d0:b8:9f:15:23:63:3b:33 oracle@11grac2.localdomain
[oracle@11grac2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
a9:d2:dd:d5:91:37:30:23:81:fc:fa:c9:5f:0e:5a:96 oracle@11grac2.localdomain

[oracle@11grac1 ~]$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
[oracle@11grac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@11grac1 ~]$ ssh 11grac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host '11grac2 (192.168.56.222)' can't be established.
RSA key fingerprint is 25:8c:5f:0f:cd:8a:4b:35:84:75:c8:cd:58:75:35:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '11grac2,192.168.56.222' (RSA) to the list of known hosts.
oracle@11grac2's password: 
[oracle@11grac1 ~]$ ssh 11grac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@11grac2's password: 
[oracle@11grac1 ~]$ scp ~/.ssh/authorized_keys 11grac2:~/.ssh/authorized_keys
oracle@11grac2's password: 
authorized_keys                                                                   100% 2040     2.0KB/s   00:00    
2) Test the oracle user SSH equivalence
$ cat ssh.sh 
ssh 11grac1 date
ssh 11grac2 date
ssh 11grac1-priv date 
ssh 11grac2-priv date
[oracle@11grac1 ~]$ sh ssh.sh 
Mon May 18 08:33:34 CST 2015
Mon May 18 08:33:35 CST 2015
Mon May 18 08:33:34 CST 2015
Mon May 18 08:33:35 CST 2015
[oracle@11grac2 ~]$ sh ssh.sh 
Mon May 18 08:33:44 CST 2015
Mon May 18 08:33:45 CST 2015
Mon May 18 08:33:45 CST 2015
Mon May 18 08:33:46 CST 2015

VII. Configure the ASM shared disks with udev
1) Check the disks
[root@11grac1.localdomain:/root]$ fdisk -l

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table

[root@11grac2.localdomain:/root]$ fdisk -l

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdf doesn't contain a valid partition table
2) Configure the shared disks with udev
Run the following on both nodes and append its output to a new udev rules file, /etc/udev/rules.d/99-oracle-asmdevices.rules:
for i in b c d e f;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
done

[root@11grac1 ~]# for i in b c d e f;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"

[root@11grac2 ~]# for i in b c d e f;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"

[root@11grac1 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
[root@11grac2 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB424a5eb7-c9274de0_", NAME="asm-diskb", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB95c63929-9336a092_", NAME="asm-diskc", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBa044f79d-51b67554_", NAME="asm-diskd", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB86ee407e-415b5b32_", NAME="asm-diske", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBe59f5561-e0df75b7_", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
3) Restart udev
[root@11grac1 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@11grac2 ~]# start_udev 
Starting udev:                                             [  OK  ]
4) Check the shared disks
[root@11grac1 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18  2015 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18  2015 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:16 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 oracle oinstall 8, 80 May 18 08:15 /dev/asm-diskf
[root@11grac2 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 16 May 18 08:16 /dev/asm-diskb
brw-rw---- 1 oracle oinstall 8, 32 May 18 08:16 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 May 18 08:17 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 May 18 08:15 /dev/asm-diske
brw-rw---- 1 oracle oinstall 8, 80 May 18 08:15 /dev/asm-diskf
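To double-check that each udev name maps to the intended physical disk, the SCSI IDs can be compared on both nodes; a sketch for the first disk:

# Should match the RESULT string recorded for asm-diskb in the rules file;
# identical output on both nodes confirms the disk is genuinely shared
/sbin/scsi_id -g -u -s /block/sdb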

VIII. Set the environment variables
1) oracle user environment
[oracle@11grac1 ~]$ cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_UNQNAME=rac11g
export ORACLE_SID=rac11g1
export ORACLE_BASE=/u01/app
export CRS_HOME=$ORACLE_BASE/crs/11.1.0/crshome_1
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/share/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH

[oracle@11grac2 ~]$ cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_UNQNAME=rac11g
export ORACLE_SID=rac11g2
export ORACLE_BASE=/u01/app
export CRS_HOME=$ORACLE_BASE/crs/11.1.0/crshome_1
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/share/lib
export CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/jdbc/lib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH
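To confirm a profile takes effect, source it and spot-check the variables it exports; a quick sanity check that uses nothing beyond the profile above:

# Run as oracle on each node after editing .bash_profile.
source ~/.bash_profile
echo "ORACLE_SID=$ORACLE_SID ORACLE_HOME=$ORACLE_HOME CRS_HOME=$CRS_HOME"
# Once the software is installed, the binaries should also resolve from PATH:
which sqlplus crsctl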
2) root user environment variables
[root@11grac1 ~]# cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
unset USERNAME

export ORACLE_BASE=/u01/app
export CRS_HOME=$ORACLE_BASE/crs/11.1.0/crshome_1
export PATH=$CRS_HOME/bin:$PATH

[root@11grac2 ~]# cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
unset USERNAME

export ORACLE_BASE=/u01/app
export CRS_HOME=$ORACLE_BASE/crs/11.1.0/crshome_1
export PATH=$CRS_HOME/bin:$PATH

9. Create the user-owned directories and adjust permissions (this step is omitted here)

10. Install the 11.1.0.6 Cluster software
1) Pre-installation checks
[root@11grac1.localdomain:/tmp]$ unzip linux.x64_11gR1_clusterware.zip
[root@11grac1.localdomain:/tmp]$ cd clusterware
[oracle@11grac1.localdomain:/tmp/clusterware]$ ./runcluvfy.sh stage -pre crsinst -n 11grac1,11grac2 -verbose

Performing pre-checks for cluster services setup 

Checking node reachability...

Check: Node reachability from node "11grac1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  11grac2                               yes                     
  11grac1                               yes                     
Result: Node reachability check passed from node "11grac1".

Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment                 
  ------------------------------------  ------------------------
  11grac2                               passed                  
  11grac1                               passed                  
Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

Check: Existence of user "oracle"
  Node Name     User Exists               Comment                 
  ------------  ------------------------  ------------------------
  11grac2       yes                       passed                  
  11grac1       yes                       passed                  
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
  Node Name     Status                    Group ID                
  ------------  ------------------------  ------------------------
  11grac2       exists                    54321                   
  11grac1       exists                    54321                   
Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Comment     
  ----------------  ------------  ------------  -------------  ------------  ------------
  11grac2           yes           yes           yes            yes           passed      
  11grac1           yes           yes           yes            yes           passed      
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Interface information for node "11grac2"
  Interface Name    IP Address      Subnet        Subnet Gateway  Default Gateway  Hardware Address
  ----------------  --------------  ------------  --------------  ---------------  ----------------
  eth0              192.168.56.222  192.168.56.0  0.0.0.0         10.0.10.1        08:00:27:9C:07:C2
  eth1              10.0.10.22      10.0.10.0     0.0.0.0         10.0.10.1        08:00:27:CA:4E:75

Interface information for node "11grac1"
  Interface Name    IP Address      Subnet        Subnet Gateway  Default Gateway  Hardware Address
  ----------------  --------------  ------------  --------------  ---------------  ----------------
  eth0              192.168.56.111  192.168.56.0  0.0.0.0         10.0.10.1        08:00:27:5B:7E:27
  eth1              10.0.10.11      10.0.10.0     0.0.0.0         10.0.10.1        08:00:27:20:63:C0

Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  11grac2:eth0                    11grac1:eth0                    yes             
Result: Node connectivity check passed for subnet "192.168.56.0" with node(s) 11grac2,11grac1.

Check: Node connectivity of subnet "10.0.10.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  11grac2:eth1                    11grac1:eth1                    yes             
Result: Node connectivity check passed for subnet "10.0.10.0" with node(s) 11grac2,11grac1.

Interfaces found on subnet "192.168.56.0" that are likely candidates for VIP:
11grac2 eth0:192.168.56.222
11grac1 eth0:192.168.56.111

Interfaces found on subnet "10.0.10.0" that are likely candidates for a private interconnect:
11grac2 eth1:10.0.10.22
11grac1 eth1:10.0.10.11

Result: Node connectivity check passed.

Checking system requirements for 'crs'...

Check: Total memory 
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  11grac2       1.96GB (2054984KB)        1GB (1048576KB)           passed    
  11grac1       1.96GB (2054984KB)        1GB (1048576KB)           passed    
Result: Total memory check passed.

Check: Free disk space in "/tmp" dir
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  11grac2       21.1GB (22124120KB)       400MB (409600KB)          passed    
  11grac1       18.44GB (19332556KB)      400MB (409600KB)          passed    
Result: Free disk space check passed.

Check: Swap space 
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  11grac2       4GB (4194300KB)           1.5GB (1572864KB)         passed    
  11grac1       4GB (4194300KB)           1.5GB (1572864KB)         passed    
Result: Swap space check passed.

Check: System architecture 
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  11grac2       x86_64                    x86_64                    passed    
  11grac1       x86_64                    x86_64                    passed    
Result: System architecture check passed.

Check: Kernel version 
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  11grac2       2.6.39-400.209.1.el5uek   2.6.18                    passed    
  11grac1       2.6.39-400.209.1.el5uek   2.6.18                    passed    
Result: Kernel version check passed.

Check: Package existence for "make-3.81" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         make-3.81-3.el5                 passed          
  11grac1                         make-3.81-3.el5                 passed          
Result: Package existence check passed for "make-3.81".

Check: Package existence for "binutils-2.17.50.0.6" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         binutils-2.17.50.0.6-26.el5     passed          
  11grac1                         binutils-2.17.50.0.6-26.el5     passed          
Result: Package existence check passed for "binutils-2.17.50.0.6".

Check: Package existence for "gcc-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         gcc-4.1.2-54.el5                passed          
  11grac1                         gcc-4.1.2-54.el5                passed          
Result: Package existence check passed for "gcc-4.1.1".

Check: Package existence for "libaio-0.3.106" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libaio-0.3.106-5                passed          
  11grac1                         libaio-0.3.106-5                passed          
Result: Package existence check passed for "libaio-0.3.106".

Check: Package existence for "libaio-0.3.106" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libaio-0.3.106-5                passed          
  11grac1                         libaio-0.3.106-5                passed          
Result: Package existence check passed for "libaio-0.3.106".

Check: Package existence for "libaio-devel-0.3.106" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libaio-devel-0.3.106-5          passed          
  11grac1                         libaio-devel-0.3.106-5          passed          
Result: Package existence check passed for "libaio-devel-0.3.106".

Check: Package existence for "libstdc++-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libstdc++-4.1.2-54.el5          passed          
  11grac1                         libstdc++-4.1.2-54.el5          passed          
Result: Package existence check passed for "libstdc++-4.1.1".

Check: Package existence for "libstdc++-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libstdc++-4.1.2-54.el5          passed          
  11grac1                         libstdc++-4.1.2-54.el5          passed          
Result: Package existence check passed for "libstdc++-4.1.1".

Check: Package existence for "elfutils-libelf-devel-0.125" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         elfutils-libelf-devel-0.137-3.el5  passed          
  11grac1                         elfutils-libelf-devel-0.137-3.el5  passed          
Result: Package existence check passed for "elfutils-libelf-devel-0.125".

Check: Package existence for "sysstat-7.0.0" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         sysstat-7.0.2-12.0.1.el5        passed          
  11grac1                         sysstat-7.0.2-12.0.1.el5        passed          
Result: Package existence check passed for "sysstat-7.0.0".

Check: Package existence for "compat-libstdc++-33-3.2.3" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         compat-libstdc++-33-3.2.3-61    passed          
  11grac1                         compat-libstdc++-33-3.2.3-61    passed          
Result: Package existence check passed for "compat-libstdc++-33-3.2.3".

Check: Package existence for "compat-libstdc++-33-3.2.3" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         compat-libstdc++-33-3.2.3-61    passed          
  11grac1                         compat-libstdc++-33-3.2.3-61    passed          
Result: Package existence check passed for "compat-libstdc++-33-3.2.3".

Check: Package existence for "libgcc-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libgcc-4.1.2-54.el5             passed          
  11grac1                         libgcc-4.1.2-54.el5             passed          
Result: Package existence check passed for "libgcc-4.1.1".

Check: Package existence for "libgcc-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libgcc-4.1.2-54.el5             passed          
  11grac1                         libgcc-4.1.2-54.el5             passed          
Result: Package existence check passed for "libgcc-4.1.1".

Check: Package existence for "libstdc++-devel-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         libstdc++-devel-4.1.2-54.el5    passed          
  11grac1                         libstdc++-devel-4.1.2-54.el5    passed          
Result: Package existence check passed for "libstdc++-devel-4.1.1".

Check: Package existence for "elfutils-libelf-0.125" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         elfutils-libelf-0.137-3.el5     passed          
  11grac1                         elfutils-libelf-0.137-3.el5     passed          
Result: Package existence check passed for "elfutils-libelf-0.125".

Check: Package existence for "glibc-2.5-12" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         missing                         failed          
  11grac1                         missing                         failed          
Result: Package existence check failed for "glibc-2.5-12".

Check: Package existence for "glibc-2.5-12" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         glibc-2.5-118                   passed          
  11grac1                         glibc-2.5-118                   passed          
Result: Package existence check passed for "glibc-2.5-12".

Check: Package existence for "glibc-common-2.5" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         glibc-common-2.5-118            passed          
  11grac1                         glibc-common-2.5-118            passed          
Result: Package existence check passed for "glibc-common-2.5".

Check: Package existence for "glibc-devel-2.5" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         glibc-devel-2.5-118             passed          
  11grac1                         glibc-devel-2.5-118             passed          
Result: Package existence check passed for "glibc-devel-2.5".

Check: Package existence for "glibc-devel-2.5" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         glibc-devel-2.5-118             passed          
  11grac1                         glibc-devel-2.5-118             passed          
Result: Package existence check passed for "glibc-devel-2.5".

Check: Package existence for "gcc-c++-4.1.1" 
  Node Name                       Status                          Comment         
  ------------------------------  ------------------------------  ----------------
  11grac2                         gcc-c++-4.1.2-54.el5            passed          
  11grac1                         gcc-c++-4.1.2-54.el5            passed          
Result: Package existence check passed for "gcc-c++-4.1.1".

Check: Group existence for "dba" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  11grac2       exists                    passed                  
  11grac1       exists                    passed                  
Result: Group existence check passed for "dba".

Check: Group existence for "oinstall" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  11grac2       exists                    passed                  
  11grac1       exists                    passed                  
Result: Group existence check passed for "oinstall".

Check: User existence for "nobody" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  11grac2       exists                    passed                  
  11grac1       exists                    passed                  
Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes. 
After the pre-installation check completes, correct any item reported as failed and re-run the check; start the installation only once every item passes.
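On OEL, a failed package check is normally cleared by installing the missing RPM from the installation media or a configured yum channel and re-running the check. A sketch only; the failing glibc-2.5-12 entry above is the 32-bit package, and the package/arch name here is an assumption for el5:

# Hypothetical fix for the missing 32-bit glibc, then re-run the pre-check (both nodes).
yum install -y glibc.i686
./runcluvfy.sh stage -pre crsinst -n 11grac1,11grac2 -verbose | grep -i fail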
2) Start the Cluster installation
[oracle@11grac1.localdomain:/tmp/clusterware]$ export DISPLAY=192.168.56.1:0.0
[oracle@11grac1.localdomain:/tmp/clusterware]$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 18971 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4095 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-05-14_10-08-26PM. Please wait ...[oracle@11grac1.localdomain:/tmp/clusterware]$ Oracle Universal Installer, Version 11.1.0.6.0 Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.
-- Follow the screenshots below in order to complete the Cluster installation

-- Specify the CRS installation directory

-- Add the cluster IP information


-- Specify the OCR location (normal redundancy is used here)
-- Specify the Voting Disk location (external redundancy is used here)

-- As the installer prompts, run the following scripts as root on each node, in order

[root@11grac1.localdomain:/root]$ /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete

[root@11grac2.localdomain:/root]$ /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete

[root@11grac1.localdomain:/root]$ /u01/app/crs/11.1.0/crshome_1/root.sh
WARNING: directory '/u01/app/crs/11.1.0' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/crs/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/app/crs' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
The directory '/u01' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: 11grac1 11grac1-priv 11grac1
node 2: 11grac2 11grac2-priv 11grac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/asm-diskd
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes. 
11grac1
Cluster Synchronization Services is inactive on these nodes. 
11grac2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.

[root@11grac2.localdomain:/root]$ /u01/app/crs/11.1.0/crshome_1/root.sh
WARNING: directory '/u01/app/crs/11.1.0' is not owned by root
WARNING: directory '/u01/app/crs' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/crs/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/app/crs' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
The directory '/u01' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: 11grac1 11grac1-priv 11grac1
node 2: 11grac2 11grac2-priv 11grac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes. 
11grac1
11grac2
Cluster Synchronization Services is active on all the nodes. 
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes…
Creating GSD application resource on (2) nodes…
Creating ONS application resource on (2) nodes…
Starting VIP application resource on (2) nodes…
Starting GSD application resource on (2) nodes…
Starting ONS application resource on (2) nodes…

Done.

-- After the root scripts complete successfully, check the cluster status
[root@11grac1.localdomain:/root]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....ac1.gsd application    ONLINE    ONLINE    11grac1     
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip application    ONLINE    ONLINE    11grac1     
ora....ac2.gsd application    ONLINE    ONLINE    11grac2     
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip application    ONLINE    ONLINE    11grac2     

[root@11grac2.localdomain:/root]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....ac1.gsd application    ONLINE    ONLINE    11grac1     
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip application    ONLINE    ONLINE    11grac1     
ora....ac2.gsd application    ONLINE    ONLINE    11grac2     
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip application    ONLINE    ONLINE    11grac2     

-- After the root scripts succeed, click "OK" and the installer performs the final cluster configuration

-- The Cluster software installation is now complete
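Besides crs_stat, the clusterware daemons themselves can be checked with crsctl; a brief verification (the exact wording of the output varies slightly by version):

# Run on each node; CSS, CRS and EVM should all report healthy.
$CRS_HOME/bin/crsctl check crs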

11. Install the 11.1.0.6 Database software
[root@11grac1.localdomain:/tmp]$ unzip linux.x64_11gR1_database.zip
[root@11grac1.localdomain:/tmp]$ cd database
[oracle@11grac1.localdomain:/tmp/database]$ ./runInstaller 
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 16606 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4095 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-05-14_10-49-31PM. Please wait ...[oracle@11grac1.localdomain:/tmp/database]$ Oracle Universal Installer, Version 11.1.0.6.0 Production
Copyright (C) 1999, 2007, Oracle. All rights reserved.
-- Follow the screenshots below in order to complete the Database installation

-- Specify the Database installation location

-- Select the cluster installation mode

[root@11grac1.localdomain:/root]$ /u01/app/oracle/product/11.1.0/dbhome_1/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

[root@11grac2.localdomain:/root]$ /u01/app/oracle/product/11.1.0/dbhome_1/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating y directory...
   Copying dbhome to y ...
   Copying oraenv to y ...
   Copying coraenv to y ...
-- Note: "y" appears to have been typed at the local bin directory prompt above, so on node 2 the scripts were copied into a directory named y rather than /usr/local/bin; re-running root.sh and accepting the default path corrects this.

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


-- The Database software installation is now complete
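A quick way to confirm the new binaries before moving on is to ask them for their version; for example, as oracle with the environment from section 8:

# Should report Release 11.1.0.6.0 at this stage.
sqlplus -V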

12. Upgrade 11.1.0.6 RAC to 11.1.0.7
Per the patch set readme, a Non Rolling Upgrade is used here: the Cluster software is upgraded first, then the Database software.
1) Shut down the cluster (run on both nodes)
$ emctl stop dbconsole
$ srvctl stop database -d db_name
$ srvctl stop asm -n node
$ srvctl stop nodeapps -n node
-- Since only the cluster and database software are installed here and no database has been created yet, the steps above can be skipped.
[root@11grac1 ~]# $CRS_HOME/bin/crsctl stop crs
[root@11grac2 ~]# $CRS_HOME/bin/crsctl stop crs
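Before launching the patch installer, it is worth confirming the stack really is down on both nodes; a small sketch:

# No clusterware daemons should remain after crsctl stop crs completes.
ps -ef | egrep 'crsd|ocssd|evmd' | grep -v grep
$CRS_HOME/bin/crsctl check crs    # should now fail to contact the daemons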
2) Upgrade the Cluster software to 11.1.0.7
[root@11grac1.localdomain:/tmp]$ unzip p6890831_111070_Linux-x86-64.zip
[root@11grac1.localdomain:/tmp]$ cd Disk1
[oracle@11grac1.localdomain:/tmp/Disk1]$ ./runInstaller 

-- Select the Cluster software installation directory

-- As prompted, run the following scripts as root on both nodes, in order

[root@11grac1.localdomain:/root]$ /u01/app/crs/11.1.0/crshome_1/bin/crsctl stop crs
Stopping resources. 
This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Stopping Cluster Synchronization Services. 
Unable to communicate with the Cluster Synchronization Services daemon.
[root@11grac1.localdomain:/root]$ /u01/app/crs/11.1.0/crshome_1/install/root111.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/crs/11.1.0/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
.
.
.
.
.
.
11107 patch successfully applied.
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: 11grac1 11grac1-priv 11grac1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/crs/11.1.0/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/crs/11.1.0/crshome_1/install/paramfile.crs
Setting cluster unique identifier
Restarting Oracle clusterware
Stopping Oracle clusterware
Stopping resources. 
This could take several minutes.
Successfully stopped Oracle Clusterware resources 
Stopping Cluster Synchronization Services. 
Shutting down the Cluster Synchronization Services daemon. 
Shutdown request successfully issued.
Waiting for Cluster Synchronization Services daemon to stop
Waiting for Cluster Synchronization Services daemon to stop
Cluster Synchronization Services daemon has stopped
Starting Oracle clusterware
Attempting to start Oracle Clusterware stack 
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Waiting for Event Manager daemon to start
Event Manager daemon has started
Cluster Ready Services daemon has started

[root@11grac2.localdomain:/root]$ /u01/app/crs/11.1.0/crshome_1/bin/crsctl stop crs
Stopping resources. 
This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Stopping Cluster Synchronization Services. 
Unable to communicate with the Cluster Synchronization Services daemon.
[root@11grac2.localdomain:/root]$ /u01/app/crs/11.1.0/crshome_1/install/root111.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/crs/11.1.0/crshome_1
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
11107 patch successfully applied.
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: 11grac2 11grac2-priv 11grac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/crs/11.1.0/crshome_1/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/crs/11.1.0/crshome_1/install/paramfile.crs

-- After the scripts complete, check the cluster resource status
[root@11grac1.localdomain:/root]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....ac1.gsd application    ONLINE    ONLINE    11grac1     
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip application    ONLINE    ONLINE    11grac1     
ora....ac2.gsd application    ONLINE    ONLINE    11grac2     
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip application    ONLINE    ONLINE    11grac2 
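The patch level can also be confirmed directly from the clusterware; for example:

# Should report 11.1.0.7.0 as the active version once root111.sh has run on all nodes.
$CRS_HOME/bin/crsctl query crs activeversion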
3) Upgrade the Database software to 11.1.0.7
[oracle@11grac1.localdomain:/tmp/Disk1]$ ./runInstaller 

-- Select the Database software installation directory

[root@11grac1.localdomain:/root]$ /u01/app/oracle/product/11.1.0/dbhome_1/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

[root@11grac2.localdomain:/root]$ /u01/app/oracle/product/11.1.0/dbhome_1/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


-- The cluster and database software upgrade to 11.1.0.7 is now complete
4) Check the cluster and database software versions
[oracle@11grac1.localdomain:/u01/app/crs/11.1.0/crshome_1/OPatch]$ ./opatch lsinventory
Invoking OPatch 11.1.0.6.2

Oracle Interim Patch Installer version 11.1.0.6.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.1.0.6.2
OUI version       : 11.1.0.7.0
OUI location      : /u01/app/oracle/product/11.1.0/dbhome_1/oui
Log file location : /u01/app/oracle/product/11.1.0/dbhome_1/cfgtoollogs/opatch/opatch2015-05-17_12-00-52PM.log

Lsinventory Output file location : /u01/app/oracle/product/11.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-05-17_12-00-52PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (2): 

Oracle Database 11g                                                  11.1.0.6.0
Oracle Database 11g Patch Set 1                                      11.1.0.7.0
There are 2 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = 11grac1
  Remote node = 11grac2

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@11grac1.localdomain:/u01/app/oracle/product/11.1.0/dbhome_1/OPatch]$ ./opatch lsinventory
Invoking OPatch 11.1.0.6.2

Oracle Interim Patch Installer version 11.1.0.6.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.1.0.6.2
OUI version       : 11.1.0.7.0
OUI location      : /u01/app/oracle/product/11.1.0/dbhome_1/oui
Log file location : /u01/app/oracle/product/11.1.0/dbhome_1/cfgtoollogs/opatch/opatch2015-05-17_11-59-28AM.log

Lsinventory Output file location : /u01/app/oracle/product/11.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-05-17_11-59-28AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (2): 

Oracle Database 11g                                                  11.1.0.6.0
Oracle Database 11g Patch Set 1                                      11.1.0.7.0
There are 2 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node = 11grac1
  Remote node = 11grac2

--------------------------------------------------------------------------------

OPatch succeeded.

13. Create the database
1) Create the database with dbca
[oracle@11grac1 ~]$ dbca


-- The database has now been created successfully
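dbca can also run non-interactively if a scripted build is preferred. A rough sketch only: silent-mode flags vary by release (check dbca -help), and the template name and passwords below are placeholders rather than values taken from this installation:

# Hypothetical silent-mode equivalent of the GUI session above.
dbca -silent -createDatabase \
     -templateName General_Purpose.dbc \
     -gdbName rac11g -sid rac11g \
     -sysPassword change_me -systemPassword change_me \
     -storageType ASM -diskGroupName DATA \
     -nodelist 11grac1,11grac2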
2) Check the cluster and database status
[oracle@11grac1.localdomain:/home/oracle]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    11grac1     
ora....C1.lsnr application    ONLINE    ONLINE    11grac1     
ora....ac1.gsd application    ONLINE    ONLINE    11grac1     
ora....ac1.ons application    ONLINE    ONLINE    11grac1     
ora....ac1.vip application    ONLINE    ONLINE    11grac1     
ora....SM2.asm application    ONLINE    ONLINE    11grac2     
ora....C2.lsnr application    ONLINE    ONLINE    11grac2     
ora....ac2.gsd application    ONLINE    ONLINE    11grac2     
ora....ac2.ons application    ONLINE    ONLINE    11grac2     
ora....ac2.vip application    ONLINE    ONLINE    11grac2     
ora.rac11g.db  application    ONLINE    ONLINE    11grac2     
ora....g1.inst application    ONLINE    ONLINE    11grac1     
ora....g2.inst application    ONLINE    ONLINE    11grac2     

SQL> select instance_name,status from gv$instance;

INSTANCE_NAME    STATUS
---------------- ------------
rac11g1          OPEN
rac11g2          OPEN
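Finally, the database banner should agree with the patched binaries; a quick check from either node:

# Should report Release 11.1.0.7.0 after the upgrade.
sqlplus -S / as sysdba <<'EOF'
select banner from v$version;
EOF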

14. Summary
The steps above walk through the full process of installing 11.1.0.6 RAC and upgrading it to 11.1.0.7. The following chapters will show how to take this 11.1.0.7 RAC up to the latest 11gR2 release; stay tuned.