
Testing udev binding of shared disks on Linux 6

--- 0. Environment
2-node RAC with shared storage
OS: Red Hat Enterprise Linux 6.6
10 shared disks: /dev/sdb through /dev/sdk
--- OS version
[root@dbtest3 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
[root@dbtest4 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
--- The 10 shared disks
[root@dbtest3 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Aug  2 22:09 /dev/sda
brw-rw---- 1 root disk 8,   1 Aug  2 22:09 /dev/sda1
brw-rw---- 1 root disk 8,   2 Aug  2 22:09 /dev/sda2
brw-rw---- 1 root disk 8,   3 Aug  2 22:09 /dev/sda3
brw-rw---- 1 root disk 8,  16 Aug  2 22:09 /dev/sdb
brw-rw---- 1 root disk 8,  32 Aug  2 22:09 /dev/sdc
brw-rw---- 1 root disk 8,  48 Aug  2 22:09 /dev/sdd
brw-rw---- 1 root disk 8,  64 Aug  2 22:09 /dev/sde
brw-rw---- 1 root disk 8,  80 Aug  2 22:09 /dev/sdf
brw-rw---- 1 root disk 8,  96 Aug  2 22:09 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug  2 22:09 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug  2 22:09 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug  2 22:09 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug  2 22:09 /dev/sdk
[root@dbtest4 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Aug  2 22:10 /dev/sda
brw-rw---- 1 root disk 8,   1 Aug  2 22:10 /dev/sda1
brw-rw---- 1 root disk 8,   2 Aug  2 22:10 /dev/sda2
brw-rw---- 1 root disk 8,   3 Aug  2 22:10 /dev/sda3
brw-rw---- 1 root disk 8,  16 Aug  2 22:10 /dev/sdb
brw-rw---- 1 root disk 8,  32 Aug  2 22:10 /dev/sdc
brw-rw---- 1 root disk 8,  48 Aug  2 22:10 /dev/sdd
brw-rw---- 1 root disk 8,  64 Aug  2 22:10 /dev/sde
brw-rw---- 1 root disk 8,  80 Aug  2 22:10 /dev/sdf
brw-rw---- 1 root disk 8,  96 Aug  2 22:10 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug  2 22:10 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug  2 22:10 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug  2 22:10 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug  2 22:10 /dev/sdk
--- fdisk output for the 10 shared disks
[root@dbtest3 ~]# fdisk -l

Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         536     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             536       10444    79584256   8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6ae81c6f

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdc: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sde: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdd: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdf: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdh: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdg: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdi: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdk: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdj: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@dbtest4 ~]# fdisk -l

Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         536     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             536       10444    79584256   8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6ae81c6f

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdc: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdd: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sde: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdf: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdg: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdh: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdj: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdi: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdk: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
--- Unique SCSI IDs (WWIDs) of the 10 shared disks
[root@dbtest3 ~]# scsi_id /dev/sdb
36000c29cadb411725a7d6daacd6ad108
[root@dbtest3 ~]# scsi_id /dev/sdc
36000c29838a242f103bb6941175efec1
[root@dbtest3 ~]# scsi_id /dev/sdd
36000c29227146322659b155492a717c3
[root@dbtest3 ~]# scsi_id /dev/sde
36000c298040617a958533e6a46671d60
[root@dbtest3 ~]# scsi_id /dev/sdf
36000c2973cf2951a3b61c87301e1c99a
[root@dbtest3 ~]# scsi_id /dev/sdg
36000c29a926c801b7f9a3b245308e092
[root@dbtest3 ~]# scsi_id /dev/sdh
36000c29944cdbb8110dc96a802e142c8
[root@dbtest3 ~]# scsi_id /dev/sdi
36000c29b1312cf84809d67bc7c8dbe28
[root@dbtest3 ~]# scsi_id /dev/sdj
36000c29d4d97c71a36232c4e0a322be0
[root@dbtest3 ~]# scsi_id /dev/sdk
36000c29d2c6230eae26892a4670d909e
[root@dbtest4 ~]# scsi_id /dev/sdb
36000c29cadb411725a7d6daacd6ad108
[root@dbtest4 ~]# scsi_id /dev/sdc
36000c29838a242f103bb6941175efec1
[root@dbtest4 ~]# scsi_id /dev/sdd
36000c29227146322659b155492a717c3
[root@dbtest4 ~]# scsi_id /dev/sde
36000c298040617a958533e6a46671d60
[root@dbtest4 ~]# scsi_id /dev/sdf
36000c2973cf2951a3b61c87301e1c99a
[root@dbtest4 ~]# scsi_id /dev/sdg
36000c29a926c801b7f9a3b245308e092
[root@dbtest4 ~]# scsi_id /dev/sdh
36000c29944cdbb8110dc96a802e142c8
[root@dbtest4 ~]# scsi_id /dev/sdi
36000c29b1312cf84809d67bc7c8dbe28
[root@dbtest4 ~]# scsi_id /dev/sdj
36000c29d4d97c71a36232c4e0a322be0
[root@dbtest4 ~]# scsi_id /dev/sdk
36000c29d2c6230eae26892a4670d909e
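The identical scsi_id output on both nodes above is what makes a single rules file portable across the cluster. An order-insensitive comparison can confirm the two nodes really see the same set of disks; the sketch below uses hypothetical file names and assumes each node's WWID list has already been saved to a file.

```shell
#!/bin/sh
# Hypothetical helper. Populate each input file on its own node first, e.g.
#   for i in b c d e f g h i j k; do
#     /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i
#   done > /tmp/wwids.$(hostname -s)
# then copy one file to the other node and compare the two lists.
cmp_wwids() {
    # Sort both lists so the comparison ignores device-letter ordering,
    # which is not guaranteed to agree between RAC nodes.
    sort "$1" > /tmp/wwids.a.sorted
    sort "$2" > /tmp/wwids.b.sorted
    if cmp -s /tmp/wwids.a.sorted /tmp/wwids.b.sorted; then
        echo "shared disks match"
    else
        echo "WWID mismatch between nodes"
    fi
}
```

Sorting first matters because, as noted later for raw-device binding, the sdb~sdk ordering may differ across nodes and reboots even when the underlying disks are the same.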

--- 1. Binding the shared disks with a 99-* udev rules file
--- Write options=--whitelisted --replace-whitespace into /etc/scsi_id.config
[root@dbtest3 ~]# echo "options=--whitelisted --replace-whitespace" > /etc/scsi_id.config
[root@dbtest4 ~]# echo "options=--whitelisted --replace-whitespace" > /etc/scsi_id.config
--- On both RAC nodes, read the unique ID of each of the 10 shared disks and generate the udev 99-* rules
--- Append the output below to /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes
[root@dbtest3 ~]# for i in b c d e f g h i j k;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29cadb411725a7d6daacd6ad108", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29838a242f103bb6941175efec1", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29227146322659b155492a717c3", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c298040617a958533e6a46671d60", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2973cf2951a3b61c87301e1c99a", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29a926c801b7f9a3b245308e092", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29944cdbb8110dc96a802e142c8", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d4d97c71a36232c4e0a322be0", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d2c6230eae26892a4670d909e", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@dbtest4 ~]# for i in b c d e f g h i j k;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29cadb411725a7d6daacd6ad108", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29838a242f103bb6941175efec1", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29227146322659b155492a717c3", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c298040617a958533e6a46671d60", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2973cf2951a3b61c87301e1c99a", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29a926c801b7f9a3b245308e092", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29944cdbb8110dc96a802e142c8", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d4d97c71a36232c4e0a322be0", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d2c6230eae26892a4670d909e", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@dbtest3 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
[root@dbtest4 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev 
Starting udev: udevd[7979]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev 
Starting udev: udevd[7967]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
--- Check the asm-disk* devices created by the binding
[root@dbtest3 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 grid asmadmin 8,  16 Aug  3 10:33 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8,  32 Aug  3 10:33 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8,  48 Aug  3 10:33 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8,  64 Aug  3 10:33 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8,  80 Aug  3 10:33 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8,  96 Aug  3 10:33 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Aug  3 10:33 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 128 Aug  3 10:33 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 144 Aug  3 10:33 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 160 Aug  3 10:33 /dev/asm-diskk
[root@dbtest4 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 grid asmadmin 8,  16 Aug  3 10:33 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8,  32 Aug  3 10:33 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8,  48 Aug  3 10:33 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8,  64 Aug  3 10:33 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8,  80 Aug  3 10:33 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8,  96 Aug  3 10:33 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Aug  3 10:33 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 128 Aug  3 10:33 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 144 Aug  3 10:33 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 160 Aug  3 10:33 /dev/asm-diskk
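As a cross-check, the unique ID can be read back through each renamed node: every asm-disk* device should report the WWID that appears in its rule. This is a sketch that requires the shared storage, so it can only run on the RAC nodes themselves.

```shell
# Read the unique SCSI ID back through each bound device node; the output
# should pair each /dev/asm-disk* with the RESULT value in its udev rule.
for d in /dev/asm-disk?; do
    printf '%s %s\n' "$d" "$(/sbin/scsi_id --whitelisted --replace-whitespace --device=$d)"
done
```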
--- On Linux 6, once udev has bound the shared disks, the original /dev/sdb~k nodes no longer appear
[root@dbtest3 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug  3 10:33 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug  3 10:33 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug  3 10:33 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug  3 10:33 /dev/sda3
[root@dbtest4 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Aug  3 10:33 /dev/sda
brw-rw---- 1 root disk 8, 1 Aug  3 10:33 /dev/sda1
brw-rw---- 1 root disk 8, 2 Aug  3 10:33 /dev/sda2
brw-rw---- 1 root disk 8, 3 Aug  3 10:33 /dev/sda3
--- The fdisk output no longer lists /dev/sdb~k either
[root@dbtest3 ~]# fdisk -l

Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         536     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             536       10444    79584256   8e  Linux LVM

Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@dbtest4 ~]# fdisk -l

Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002c572

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         536     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             536       10444    79584256   8e  Linux LVM

Disk /dev/mapper/vg_dbtest1-LogVol00: 81.5 GB, 81491132416 bytes
255 heads, 63 sectors/track, 9907 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
---【For the changes and differences in udev across Linux releases, see Tim Hall's article: https://oracle-base.com/articles/linux/udev-scsi-rules-configuration-in-oracle-linux】
---【In that article, Tim Hall first partitions the shared disks and then binds the partitions /dev/sdb1~sde1】
---【Testing shows that after udev binding, /dev/sdb1~sde1 likewise no longer appear; only /dev/sdb~e remain visible】

---【After testing the 99-* rules file, a second test bound the shared disks as raw devices via a 60-raw.rules file】
---【Testing shows that on Linux 6, binding shared disks to raw devices through 60-raw.rules no longer supports matching by unique ID】
---【When the shared disks were bound by unique ID, start_udev did not apply the 60-raw.rules file】
---【Binding by device letter, however, did take effect when start_udev ran】
---【But because the shared disks are visible to multiple RAC nodes, the same disk may be ordered differently on each node, and device letters can change after a reboot, so binding raw devices by device letter risks device-name drift】
---【Testing also revealed a raw-device caching problem: even after 60-raw.rules is modified, the rules reloaded, and udev restarted,】
---【the raw devices created by the previous binding remain until the system is rebooted; the test results follow】

---【Disk status before binding the raw devices】
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug  3 11:02 /dev/raw/rawctl
[root@dbtest3 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Aug  3 11:02 /dev/sda
brw-rw---- 1 root disk 8,   1 Aug  3 11:02 /dev/sda1
brw-rw---- 1 root disk 8,   2 Aug  3 11:02 /dev/sda2
brw-rw---- 1 root disk 8,   3 Aug  3 11:02 /dev/sda3
brw-rw---- 1 root disk 8,  16 Aug  3 11:02 /dev/sdb
brw-rw---- 1 root disk 8,  32 Aug  3 11:02 /dev/sdc
brw-rw---- 1 root disk 8,  48 Aug  3 11:02 /dev/sdd
brw-rw---- 1 root disk 8,  64 Aug  3 11:02 /dev/sde
brw-rw---- 1 root disk 8,  80 Aug  3 11:02 /dev/sdf
brw-rw---- 1 root disk 8,  96 Aug  3 11:02 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug  3 11:02 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug  3 11:02 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug  3 11:02 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug  3 11:02 /dev/sdk
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug  3 11:03 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 Aug  3 11:03 /dev/sda
brw-rw---- 1 root disk 8,   1 Aug  3 11:03 /dev/sda1
brw-rw---- 1 root disk 8,   2 Aug  3 11:03 /dev/sda2
brw-rw---- 1 root disk 8,   3 Aug  3 11:03 /dev/sda3
brw-rw---- 1 root disk 8,  16 Aug  3 11:03 /dev/sdb
brw-rw---- 1 root disk 8,  32 Aug  3 11:03 /dev/sdc
brw-rw---- 1 root disk 8,  48 Aug  3 11:03 /dev/sdd
brw-rw---- 1 root disk 8,  64 Aug  3 11:03 /dev/sde
brw-rw---- 1 root disk 8,  80 Aug  3 11:03 /dev/sdf
brw-rw---- 1 root disk 8,  96 Aug  3 11:03 /dev/sdg
brw-rw---- 1 root disk 8, 112 Aug  3 11:03 /dev/sdh
brw-rw---- 1 root disk 8, 128 Aug  3 11:03 /dev/sdi
brw-rw---- 1 root disk 8, 144 Aug  3 11:03 /dev/sdj
brw-rw---- 1 root disk 8, 160 Aug  3 11:03 /dev/sdk

--- 2. Write the following rules into /etc/udev/rules.d/60-raw.rules
---【Bind the shared disks to raw devices by unique ID】
[root@dbtest3 ~]# vi /etc/udev/rules.d/60-raw.rules 
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29cadb411725a7d6daacd6ad108", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29838a242f103bb6941175efec1", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29227146322659b155492a717c3", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c298040617a958533e6a46671d60", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c2973cf2951a3b61c87301e1c99a", RUN+="/bin/raw /dev/raw/raw15 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29a926c801b7f9a3b245308e092", RUN+="/bin/raw /dev/raw/raw16 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29944cdbb8110dc96a802e142c8", RUN+="/bin/raw /dev/raw/raw17 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", RUN+="/bin/raw /dev/raw/raw18 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d4d97c71a36232c4e0a322be0", RUN+="/bin/raw /dev/raw/raw19 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d2c6230eae26892a4670d909e", RUN+="/bin/raw /dev/raw/raw20 %N"
KERNEL=="raw[11-20]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules 
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29cadb411725a7d6daacd6ad108", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29838a242f103bb6941175efec1", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29227146322659b155492a717c3", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c298040617a958533e6a46671d60", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c2973cf2951a3b61c87301e1c99a", RUN+="/bin/raw /dev/raw/raw15 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29a926c801b7f9a3b245308e092", RUN+="/bin/raw /dev/raw/raw16 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29944cdbb8110dc96a802e142c8", RUN+="/bin/raw /dev/raw/raw17 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29b1312cf84809d67bc7c8dbe28", RUN+="/bin/raw /dev/raw/raw18 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d4d97c71a36232c4e0a322be0", RUN+="/bin/raw /dev/raw/raw19 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d %p", RESULT=="36000c29d2c6230eae26892a4670d909e", RUN+="/bin/raw /dev/raw/raw20 %N"
KERNEL=="raw[11-20]", OWNER="grid", GROUP="asmadmin", MODE="660"
--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev 
Starting udev: udevd[9693]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[11386]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
--- No raw device files were created by the binding
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug  3 11:20 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 0 Aug  3 11:24 /dev/raw/rawctl
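One plausible reason the unique-ID rules never fire (an assumption, not verified in this test): in a PROGRAM key, %p expands to the sysfs devpath, while scsi_id's -d option expects a device node, so the RESULT match can never succeed. A commonly seen variant passes the kernel name instead, and udevadm test can show which rules actually match:

```shell
# Hypothetical variant of the rule above: use /dev/%k (a device node built
# from the kernel name) rather than %p (the sysfs devpath) as scsi_id's -d
# argument. Untested here; shown only as a debugging direction.
# ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -d /dev/%k", RESULT=="36000c29cadb411725a7d6daacd6ad108", RUN+="/bin/raw /dev/raw/raw11 %N"

# udevadm test simulates an add event for the device and prints the rule
# processing, which reveals whether the 60-raw.rules entries matched:
udevadm test /block/sdb 2>&1 | grep -i raw
```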


--- 3. Binding the shared disks to raw devices by device letter
---【To demonstrate the raw-device caching problem mentioned above, this test binds 5 shared disks at a time, in two rounds, to confirm the cache really persists】
---【Note: binding by device letter on the two nodes here is only to demonstrate the caching problem; no check was made that sdb~f on node 1 correspond to the same physical disks as sdb~f on node 2】
---【Round 1: bind the shared disks to raw devices by device letter】
--- Write the following rules into /etc/udev/rules.d/60-raw.rules
[root@dbtest3 ~]# vi /etc/udev/rules.d/60-raw.rules 
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw15 %N"
KERNEL=="raw[11-15]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules 
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw15 %N"
KERNEL=="raw[11-15]", OWNER="grid", GROUP="asmadmin", MODE="660"

--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev
Starting udev: udevd[10586]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[12262]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
--- The raw device files were created successfully
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug  3 11:31 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug  3 11:31 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug  3 11:31 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug  3 11:31 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug  3 11:31 /dev/raw/raw15
crw-rw---- 1 root disk 162,  0 Aug  3 11:31 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/raw/*                  
crw-rw---- 1 root disk 162, 11 Aug  3 11:31 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug  3 11:31 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug  3 11:31 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug  3 11:31 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug  3 11:31 /dev/raw/raw15
crw-rw---- 1 root disk 162,  0 Aug  3 11:31 /dev/raw/rawctl

---【Round 2: bind the shared disks to raw devices by device letter】
--- Write the following rules into /etc/udev/rules.d/60-raw.rules
---【Delete the raw device files /dev/raw/raw11~15 created in round 1】
---【Remove the round-1 rules from /etc/udev/rules.d/60-raw.rules, then add the rules below】
[root@dbtest3 ~]# rm -f /dev/raw/raw1*
[root@dbtest4 ~]# rm -f /dev/raw/raw1*
[root@dbtest3 ~]# vi /etc/udev/rules.d/60-raw.rules 
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw21 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw22 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw23 %N"
ACTION=="add", KERNEL=="sdj", RUN+="/bin/raw /dev/raw/raw24 %N"
ACTION=="add", KERNEL=="sdk", RUN+="/bin/raw /dev/raw/raw25 %N"
KERNEL=="raw[21-25]", OWNER="grid", GROUP="asmadmin", MODE="660"
[root@dbtest4 ~]# vi /etc/udev/rules.d/60-raw.rules 
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw21 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw22 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw23 %N"
ACTION=="add", KERNEL=="sdj", RUN+="/bin/raw /dev/raw/raw24 %N"
ACTION=="add", KERNEL=="sdk", RUN+="/bin/raw /dev/raw/raw25 %N"
KERNEL=="raw[21-25]", OWNER="grid", GROUP="asmadmin", MODE="660"

--- Reload the rules file and restart udev
[root@dbtest3 ~]# udevadm control --reload-rules
[root@dbtest3 ~]# start_udev
Starting udev: udevd[11431]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
[root@dbtest4 ~]# udevadm control --reload-rules
[root@dbtest4 ~]# start_udev
Starting udev: udevd[13102]: GOTO 'pulseaudio_check_usb' has no matching label in: '/lib/udev/rules.d/90-pulseaudio.rules'
                                                           [  OK  ]
--- Raw device files were created
---【The 5 raw devices /dev/raw/raw11~15 from round 1 are still present】
---【They disappear only after the system is rebooted】
[root@dbtest3 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug  3 12:16 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug  3 12:16 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug  3 12:16 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug  3 12:16 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug  3 12:16 /dev/raw/raw15
crw-rw---- 1 root disk 162, 21 Aug  3 12:16 /dev/raw/raw21
crw-rw---- 1 root disk 162, 22 Aug  3 12:16 /dev/raw/raw22
crw-rw---- 1 root disk 162, 23 Aug  3 12:16 /dev/raw/raw23
crw-rw---- 1 root disk 162, 24 Aug  3 12:16 /dev/raw/raw24
crw-rw---- 1 root disk 162, 25 Aug  3 12:16 /dev/raw/raw25
crw-rw---- 1 root disk 162,  0 Aug  3 12:16 /dev/raw/rawctl
[root@dbtest4 ~]# ls -l /dev/raw/*
crw-rw---- 1 root disk 162, 11 Aug  3 12:16 /dev/raw/raw11
crw-rw---- 1 root disk 162, 12 Aug  3 12:16 /dev/raw/raw12
crw-rw---- 1 root disk 162, 13 Aug  3 12:16 /dev/raw/raw13
crw-rw---- 1 root disk 162, 14 Aug  3 12:16 /dev/raw/raw14
crw-rw---- 1 root disk 162, 15 Aug  3 12:16 /dev/raw/raw15
crw-rw---- 1 root disk 162, 21 Aug  3 12:16 /dev/raw/raw21
crw-rw---- 1 root disk 162, 22 Aug  3 12:16 /dev/raw/raw22
crw-rw---- 1 root disk 162, 23 Aug  3 12:16 /dev/raw/raw23
crw-rw---- 1 root disk 162, 24 Aug  3 12:16 /dev/raw/raw24
crw-rw---- 1 root disk 162, 25 Aug  3 12:16 /dev/raw/raw25
crw-rw---- 1 root disk 162,  0 Aug  3 12:16 /dev/raw/rawctl
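The disappearance of raw11~raw15 after a reboot follows from how they were created: a binding made interactively with /usr/bin/raw lives only in kernel memory, while bindings listed in 60-raw.rules are re-executed by udev at every boot. If raw11~raw15 were meant to persist, they would need rules of their own; a hypothetical fragment (the device names sdb~sdf are assumed, not taken from the original rules file):

```shell
# /etc/udev/rules.d/60-raw.rules -- hypothetical additional entries so that
# raw11~raw15 are recreated at every boot, like raw21~raw25 above
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw13 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw15 %N"
# Note the glob: udev uses shell-style character classes, so "raw1[1-5]" means
# "raw1" followed by one digit from 1 to 5, i.e. raw11 through raw15.
# (A pattern like "raw[11-15]" would NOT match raw11~raw15.)
KERNEL=="raw1[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"
```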

Configuring NTP Synchronization Between Oracle RAC Nodes

Oracle RAC inter-node communication places strict demands on clock consistency. In 10g we typically relied on NTP, or a cron job running a script, to keep the node clocks in sync. 11g Clusterware introduced a new daemon, CTSS (Cluster Time Synchronization Service), which manages cluster time and keeps every node's clock consistent: if an NTP daemon is running on the system, CTSS runs in observer mode; if no NTP daemon is running, CTSS runs in active mode. This document mainly demonstrates how to use NTP for time synchronization between RAC nodes.
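CTSS's current mode can be confirmed from the command line once Clusterware is up; a minimal check, assuming an 11gR2 Grid Infrastructure installation with crsctl on the PATH:

```shell
# With ntpd running on the nodes, CTSS should report observer mode;
# with no NTP daemon present it switches to active mode.
crsctl check ctss
```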

Here RAC node 1 serves as the NTP server, and node 2 is the client synchronizing against node 1:
rac1 IP: 192.0.2.101  NTP server
rac2 IP: 192.0.2.102  NTP client
 
The NTP configuration steps are as follows:
1) Before configuring the NTP server, confirm the required NTP rpm packages are installed
[root@rac1 ~]# rpm -qa|grep ntp
ntp-4.2.2p1-9.el5_3.2
chkfontpath-1.10.1-1.1
 
2) Then sync the server's hardware clock from the system clock
[root@rac1 ~]# date
Wed May  6 18:14:42 CST 2015
[root@rac1 ~]# hwclock
Wed 06 May 2015 06:14:46 PM CST  -0.644785 seconds
[root@rac1 ~]# clock --systohc
 
3) Edit the configuration file on node rac1 (append the following)
[root@rac1 ~]# vi /etc/ntp.conf
server 192.0.2.101 prefer
restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap noquery
broadcastdelay 0.008
 
4) Edit the ntp.conf file on node rac2 (append the following)
[root@rac2 ~]# vi /etc/ntp.conf
server 192.0.2.101 prefer 
broadcastdelay 0.008 
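The intent of each directive, annotated (the comments are editorial additions, not part of the original files):

```shell
# ntp.conf directives used above, annotated
server 192.0.2.101 prefer   # sync against rac1 and prefer it over other sources
# Server side (rac1) only: allow hosts in 192.0.2.0/24 to get time, but not to
# modify the daemon's configuration (nomodify), use the trap service (notrap),
# or issue ntpq/ntpdc status queries (noquery).
restrict 192.0.2.0 mask 255.255.255.0 nomodify notrap noquery
broadcastdelay 0.008        # assumed network delay, in seconds, for broadcast mode
```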
 
5) On both rac1 and rac2, edit the ntpd options file (the -x flag is checked during RAC installation; see MOS Doc ID 1056693.1)
[root@rac1 ~]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=yes
[root@rac2 ~]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=yes
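Why the -x flag matters, annotated (the comments are editorial additions):

```shell
# /etc/sysconfig/ntpd -- annotated
# -x : slew the clock (gradual adjustment) instead of stepping it; Oracle's
#      installer checks for this flag because stepping the time, especially
#      backwards, can destabilize the cluster (see MOS Doc ID 1056693.1)
# -u ntp:ntp : run ntpd as the unprivileged ntp user and group
# -p /var/run/ntpd.pid : where ntpd writes its PID file
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
SYNC_HWCLOCK=yes   # also write system time back to the hardware clock
```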
 
6) Add the NTP server address or hostname
[root@rac1 ~]# cd /etc/ntp
[root@rac1 ntp]# ls
keys  ntpservers  step-tickers
[root@rac1 ntp]# cat ntpservers
#This file contains a list of ntp servers to show in the system-config-date user interface.
#It is not recommended that you modify this file by hand.
 
#clock.redhat.com
#clock2.redhat.com
rac1.oracle.com
 
[root@rac2 ~]# cd /etc/ntp
[root@rac2 ntp]# ll
total 20
-rw------- 1 root root  73 May 19  2009 keys
-rw-r--r-- 1 root root 186 Jul  8  2009 ntpservers
-rw-r--r-- 1 root root   0 May 19  2009 step-tickers
[root@rac2 ntp]# cat ntpservers
#This file contains a list of ntp servers to show in the system-config-date user interface.
#It is not recommended that you modify this file by hand.
#clock.redhat.com
#clock2.redhat.com
rac1.oracle.com
 
7) Check the contents of step-tickers; an incorrect step-tickers file can also prevent NTP from synchronizing
[root@rac1 ntp]# more step-tickers
[root@rac2 ntp]# more step-tickers
 
8) Run chkconfig on rac1/rac2 so the NTP service starts at boot
[root@rac1 ~]# chkconfig ntpd on
[root@rac2 ~]# chkconfig ntpd on
 
[root@rac1 ~]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
ntpd: Synchronizing with time server:                      [FAILED]
Starting ntpd:                                             [  OK  ]
 
9) Make sure port 123 is open over UDP
[root@rac1 ~]# netstat -an|grep 123
udp        0      0 192.168.0.101:123           0.0.0.0:*                              
udp        0      0 192.0.2.101:123             0.0.0.0:*                              
udp        0      0 127.0.0.1:123               0.0.0.0:*                              
udp        0      0 0.0.0.0:123                 0.0.0.0:*                              
udp        0      0 fe80::20c:29ff:fe93:274:123 :::*                                   
udp        0      0 ::1:123                     :::*                                   
udp        0      0 fe80::20c:29ff:fe93:26a:123 :::*                                   
udp        0      0 :::123                      :::*                                   
unix  3      [ ]         STREAM     CONNECTED     43123  @/tmp/dbus-McFa70uJsL
 
10) Check the NTP status
[root@rac1 ~]# ntpstat
unsynchronised
  time server re-starting
   polling server every 64 s
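The `unsynchronised` state right after a restart is normal; ntpd needs several polling intervals before it selects a time source. Progress can be watched with ntpq (these commands are not part of the original transcript):

```shell
# Show the peer table; once ntpd has selected a source, that peer's line
# is prefixed with '*'. Re-run after a few minutes if no '*' appears yet.
ntpq -p
# ntpstat should then report it is synchronised to the NTP server.
ntpstat
```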
 
11) Restart the ntp service on node rac2
[root@rac2 ~]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
ntpd: Synchronizing with time server:                      [FAILED]
Starting ntpd:                                             [  OK  ]
 
12) Switch to the grid user (SSH user equivalence already configured) to verify the time on both nodes
[grid@rac1 ~]$ sh ssh.sh
Wed May  6 18:52:19 CST 2015
Wed May  6 18:52:19 CST 2015
Wed May  6 18:52:19 CST 2015
Wed May  6 18:52:19 CST 2015
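The ssh.sh script itself is not shown; a hypothetical reconstruction that would produce four timestamps like the output above (the host names and the reliance on the grid user's SSH trust are assumptions):

```shell
#!/bin/sh
# Hypothetical sketch -- the original ssh.sh was not shown.
# Prints the local and remote date for each node over passwordless SSH,
# so the four timestamps can be compared at a glance.
for host in rac1 rac2; do
  date               # local time, for comparison
  ssh "$host" date   # remote time via the grid user's SSH equivalence
done
```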

Verification shows the two RAC nodes are now time-synchronized. Note, however, that in a production environment you should consider using a dedicated server as the NTP time source rather than one of the cluster nodes.