Adding a Node to RAC
Overview
Adding a node breaks down into four steps:
1. Prepare the host environment
2. Extend the GI (Grid Infrastructure) software (before extending, you can run a check that the installation prerequisites are met)
3. Extend the Database software
4. Extend the instance
The commands used in steps 2, 3, and 4 are simple, but they assume the Linux host prepared in step 1 fully meets the requirements.
Check whether db3 meets the RAC installation prerequisites:

su - grid
cluvfy stage -pre nodeadd -n db3 -fixup -verbose
cluvfy stage -post hwos -n db3
Extend the clusterware software:

su - grid
cd /u01/app/11.2.0/grid/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={db3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={db3-vip}"

Run the root.sh scripts as prompted:

/u01/grid/oraInventory/orainstRoot.sh   # on node db3
/u01/grid/crs/root.sh                   # on node db3

Verify that the clusterware extension succeeded:

cluvfy stage -post nodeadd -n db3 -verbose
Install the database software on the new node:

su - oracle
cd /u01/app/oracle/product/11.2.0/db/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={db3}"

Run the root.sh script as prompted:

/u01/oracle/db/root.sh   # on node db3
Add the instance with dbca, or directly from the command line:

dbca -silent -addInstance -nodeList db3 -gdbName db -instanceName db3 -sysDBAUserName sys -sysDBAPassword "oracle"
Check the status as the grid user:

crsctl status resource -t
The detailed procedure follows.
Preparing the Linux Environment
First, clone a clean virtual machine, then configure its network according to the following plan:
192.168.1.161 db1.up.com db1
192.168.1.162 db2.up.com db2
192.168.1.173 db3.up.com db3

10.0.1.161 db1-priv.up.com db1-priv
10.0.1.162 db2-priv.up.com db2-priv
10.0.1.173 db3-priv.up.com db3-priv

192.168.1.163 db1-vip.up.com db1-vip
192.168.1.164 db2-vip.up.com db2-vip
192.168.1.174 db3-vip.up.com db3-vip

192.168.1.165 db-cluster
Following this plan, configure eth0 and eth1 on node db3:

eth0: 192.168.1.173
eth1: 10.0.1.173
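On RHEL-family systems these addresses live in ifcfg files. Below is a minimal sketch that stages the two files into a scratch directory; the OUT_DIR path, the write_ifcfg helper, and the 255.255.255.0 netmasks are assumptions for illustration, not taken from the original environment:

```shell
#!/bin/sh
# Stage ifcfg files for db3's two interfaces into OUT_DIR; copy them to
# /etc/sysconfig/network-scripts/ and restart networking to apply.
OUT_DIR=${OUT_DIR:-./network-scripts}
mkdir -p "$OUT_DIR"

write_ifcfg() {
    dev=$1; ip=$2; mask=$3
    cat > "$OUT_DIR/ifcfg-$dev" <<EOF
DEVICE=$dev
BOOTPROTO=static
IPADDR=$ip
NETMASK=$mask
ONBOOT=yes
EOF
}

write_ifcfg eth0 192.168.1.173 255.255.255.0   # public network
write_ifcfg eth1 10.0.1.173 255.255.255.0      # private interconnect
```

Staging into a directory first lets you diff against the files copied from an existing node before touching the live configuration.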
Set the hostname
[[email protected] ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=db3.up.com
[[email protected] ~]# hostname db3.up.com
Adjust the Linux release file
[[email protected] ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 4.8 (Tikanga)
Edit the hosts file
[[email protected] ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 odd.up.com odd localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

192.168.1.161 db1.up.com db1
192.168.1.162 db2.up.com db2
192.168.1.173 db3.up.com db3

10.0.1.161 db1-priv.up.com db1-priv
10.0.1.162 db2-priv.up.com db2-priv
10.0.1.173 db3-priv.up.com db3-priv

192.168.1.163 db1-vip.up.com db1-vip
192.168.1.164 db2-vip.up.com db2-vip
192.168.1.174 db3-vip.up.com db3-vip

192.168.1.165 db-cluster
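cluvfy's later node-connectivity checks are sensitive to stale or duplicate hosts entries, so it is worth a quick sanity check after editing. A small sketch (the check_hosts helper is ours, not part of the original procedure) that prints any IP or hostname listed more than once:

```shell
#!/bin/sh
# Print every IP or hostname that appears more than once in a hosts file;
# silent output means the file is free of duplicates.
check_hosts() {
    awk '!/^[[:space:]]*#/ && NF >= 2 {
        if (seen_ip[$1]++) print "duplicate IP: " $1
        for (i = 2; i <= NF; i++)
            if (seen_name[$i]++) print "duplicate name: " $i
    }' "$1"
}

check_hosts /etc/hosts
```

Run it on each node; the three files should be identical apart from the localhost line.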
Configure the yum repository
[[email protected] ~]# mkdir /iso
[[email protected] ~]# mount -t iso9660 -o loop /mnt/share/LinuxSoftware/OracleLinux-R5-U8-Server-x86_64-dvd.iso /iso
[[email protected] ~]# vim /etc/yum.repos.d/oel.repo
[[email protected] ~]# cat /etc/yum.repos.d/oel.repo
[source]
name=Oracle Enterprise Linux $releasever - Source
baseurl=file:///iso/Server
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
When RAC was originally installed, the system reported several missing packages, so install them up front:
[[email protected] ~]# yum install libaio-devel sysstat unixODBC unixODBC-devel
Move the ntp configuration aside (with no NTP configuration, Oracle's Cluster Time Synchronization Service handles time sync):
[[email protected] ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
Create users and groups
groupadd -g 1000 oinstall
groupadd -g 1001 asmadmin
groupadd -g 1002 dba
groupadd -g 1003 oper
groupadd -g 1004 asmdba
groupadd -g 1005 asmoper
useradd -u 1000 -g oinstall -G dba,oper,asmdba oracle
useradd -u 1001 -g oinstall -G asmadmin,asmdba,asmoper grid
passwd oracle
passwd grid
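If the preparation script has to be re-run on a partially prepared host, these groupadd/useradd calls fail on entries that already exist. A hedged variant (the add_group/add_user helpers and the -o non-unique-id flags are our additions) that skips anything already present:

```shell
#!/bin/sh
# Create each group/user only when it does not already exist, so
# re-running on a partially prepared host is harmless; -o tolerates a
# numeric id already taken by an unrelated local account.
add_group() { getent group "$2" >/dev/null || groupadd -o -g "$1" "$2"; }
add_user()  { id "$1" >/dev/null 2>&1 || useradd -o -u "$2" -g "$3" -G "$4" "$1"; }

add_group 1000 oinstall
add_group 1001 asmadmin
add_group 1002 dba
add_group 1003 oper
add_group 1004 asmdba
add_group 1005 asmoper

add_user oracle 1000 oinstall dba,oper,asmdba
add_user grid   1001 oinstall asmadmin,asmdba,asmoper
```

The passwords still have to be set interactively afterwards with passwd oracle and passwd grid.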
Create the directories
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01/
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
Configure the udev devices

Simply copy the rules file over from node 1:

[[email protected] ~]# cat /etc/udev/rules.d/99-oracelasm.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB0e123a72-41df0e11_", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB840e50f3-375627c3_", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBb8d6b677-65c5fa86_", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBfc5cfb52-b0c7ce00_", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
[[email protected] ~]# ll /dev/asm*
ls: /dev/asm*: No such file or directory
[[email protected] ~]# start_udev
Starting udev: [ OK ]
[[email protected] ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Mar 6 15:15 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Mar 6 15:15 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Mar 6 15:15 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Mar 6 15:15 /dev/asm-diske
Add start_udev to rc.local
[[email protected] ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
mount -t vboxsf Share /mnt/share
start_udev
Edit the kernel parameters in /etc/sysctl.conf (the existing nodes already carry these settings; only the new node needs them)
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 1048576
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
Add the lines above, then run sysctl -p to load the new kernel parameters.
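Appending these lines blindly duplicates keys if the step is ever re-run. A sketch of an idempotent variant (the add_param helper is ours; CONF defaults to /etc/sysctl.conf but can be pointed elsewhere for a dry run):

```shell
#!/bin/sh
# Append each kernel parameter to CONF only if the key is not already
# present, so the step can be re-run safely; run `sysctl -p` afterwards.
CONF=${CONF:-/etc/sysctl.conf}
touch "$CONF"

add_param() {
    key=$1; shift
    grep -q "^[[:space:]]*$key[[:space:]]*=" "$CONF" \
        && echo "$key already set, skipping" \
        || echo "$key = $*" >> "$CONF"
}

add_param kernel.shmmni 4096
add_param kernel.sem 250 32000 100 128
add_param fs.file-max 6815744
add_param net.ipv4.ip_local_port_range 9000 65500
add_param net.core.rmem_default 1048576
add_param net.core.rmem_max 4194304
add_param net.core.wmem_default 262144
add_param net.core.wmem_max 1048576
add_param fs.aio-max-nr 1048576
```

Because existing keys are skipped rather than overwritten, a host with deliberately larger values keeps them.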
Adjust the resource limits (again, only the new node needs this)

Add the following to /etc/security/limits.conf:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Edit the grid user's .bash_profile
export ORACLE_SID=+ASM3
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
Edit the oracle user's .bash_profile
export ORACLE_SID=db3
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db
export PATH=$ORACLE_HOME/bin:$PATH
One more important task remains: manually configure SSH user equivalence between node 3 and nodes 1 and 2.
On node 3, switch to the grid user, then:

mkdir .ssh
chmod 755 .ssh
ssh-keygen -t rsa

This leaves an id_rsa.pub file under /home/grid/.ssh/. Append its single line to /home/grid/.ssh/authorized_keys on node 1, then copy that file to nodes 2 and 3:

[[email protected] .ssh]$ scp authorized_keys db3:/home/grid/.ssh
[[email protected] .ssh]$ scp authorized_keys db2:/home/grid/.ssh
Then, as the grid user on each node, run:

ssh db1 date
ssh db2 date
ssh db3 date
ssh db1-priv date
ssh db2-priv date
ssh db3-priv date
[[email protected] ~]$ ssh db1 date
Thu Mar 6 16:29:51 CST 2014
[[email protected] ~]$ ssh db2 date
Thu Mar 6 16:29:51 CST 2014
[[email protected] ~]$ ssh db3 date
Thu Mar 6 16:29:51 CST 2014
[[email protected] ~]$ ssh db1-priv date
Thu Mar 6 16:29:51 CST 2014
[[email protected] ~]$ ssh db2-priv date
Thu Mar 6 16:29:52 CST 2014
[[email protected] ~]$ ssh db3-priv date
Thu Mar 6 16:29:52 CST 2014
The first run will prompt for passwords; once all six commands complete without a password prompt, SSH equivalence for the grid user is configured.

Configure equivalence for the oracle user in the same way.
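The six manual date checks can be wrapped in a loop. A hedged sketch (the check_equivalence helper is ours) that uses BatchMode so a misconfigured host fails immediately instead of hanging on a password prompt:

```shell
#!/bin/sh
# Run a remote `date` against every public and private hostname;
# BatchMode=yes makes ssh error out instead of prompting, so missing
# equivalence is reported rather than hung on.
check_equivalence() {
    rc=0
    for h in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date >/dev/null 2>&1; then
            echo "$h: ok"
        else
            echo "$h: FAILED"
            rc=1
        fi
    done
    return $rc
}

check_equivalence db1 db2 db3 db1-priv db2-priv db3-priv \
    || echo "fix SSH equivalence before continuing"
```

Run it as grid and again as oracle on every node; all six lines should read ok before moving on to the GI extension.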
At this point the preparation work is essentially done.
Extending GI
Before proceeding, make sure every node is in a normal state.
[[email protected] ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    db1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    db1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    db1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    db1
ora.db.db      ora....se.type 0/2    0/1    OFFLINE   OFFLINE
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    db1
ora....B1.lsnr application    0/5    0/0    ONLINE    ONLINE    db1
ora.db1.gsd    application    0/5    0/0    OFFLINE   OFFLINE
ora.db1.ons    application    0/3    0/0    ONLINE    ONLINE    db1
ora.db1.vip    ora....t1.type 0/0    0/0    ONLINE    ONLINE    db1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    db2
ora....B2.lsnr application    0/5    0/0    ONLINE    ONLINE    db2
ora.db2.gsd    application    0/5    0/0    OFFLINE   OFFLINE
ora.db2.ons    application    0/3    0/0    ONLINE    ONLINE    db2
ora.db2.vip    ora....t1.type 0/0    0/0    ONLINE    ONLINE    db2
ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    db1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    db1
ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    db1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    db1
With preparation complete, adding the node to the RAC takes three stages: first extend the 11g clusterware software to the new node, then extend the Oracle database software, and finally use dbca to create the database instance on the new node.
First, install the cvuqdisk package on node 3:

[[email protected] ~]# rpm -ivh /mnt/share/oracle11g/64/linux.x64_11gR2_grid/grid/rpm/cvuqdisk-1.0.7-1.rpm
Check whether node 3 meets the requirements:

[[email protected] ~]# su - grid
[[email protected] ~]$ cluvfy stage -pre nodeadd -n db3 -fixup -verbose
Performing pre-checks for node addition
Checking node reachability...
Check: Node reachability from node "db1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  db3                                   yes
Result: Node reachability check passed from node "db1"
Checking user equivalence...
Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  db3                                   passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  db3           passed
  db2           passed
  db1           passed
Verification of the hosts config file successful
Interface information for node "db1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.161   192.168.1.0     0.0.0.0         10.0.1.1        08:00:27:6C:37:49  1500
 eth0   192.168.1.163   192.168.1.0     0.0.0.0         10.0.1.1        08:00:27:6C:37:49  1500
 eth1   10.0.1.161      10.0.1.0        0.0.0.0         10.0.1.1        08:00:27:25:BF:57  1500

Interface information for node "db2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.162   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1D:22:8D  1500
 eth0   192.168.1.165   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1D:22:8D  1500
 eth0   192.168.1.164   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1D:22:8D  1500
 eth1   10.0.1.162      10.0.1.0        0.0.0.0         192.168.1.1     08:00:27:9F:B3:8C  1500

Interface information for node "db3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.173   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:CB:D9:A9  1500
 eth1   10.0.1.173      10.0.1.0        0.0.0.0         192.168.1.1     08:00:27:FA:39:E3  1500
Check: Node connectivity for interface "eth0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  db1:eth0                        db1:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db3:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db3:eth0                        yes
  db2:eth0                        db2:eth0                        yes
  db2:eth0                        db2:eth0                        yes
  db2:eth0                        db3:eth0                        yes
  db2:eth0                        db2:eth0                        yes
  db2:eth0                        db3:eth0                        yes
  db2:eth0                        db3:eth0                        yes
Result: Node connectivity passed for interface "eth0"
Result: Node connectivity check passed
Checking CRS integrity...
The Oracle clusterware is healthy on node "db1"
The Oracle clusterware is healthy on node "db2"
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
The location "/u01/app/11.2.0/grid" is not shared but is present/creatable on all nodes
Result: Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  db3           passed
  db2           passed
  db1           passed
Verification of the hosts config file successful
Interface information for node "db1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.161   192.168.1.0     0.0.0.0         10.0.1.1        08:00:27:6C:37:49  1500
 eth0   192.168.1.163   192.168.1.0     0.0.0.0         10.0.1.1        08:00:27:6C:37:49  1500
 eth1   10.0.1.161      10.0.1.0        0.0.0.0         10.0.1.1        08:00:27:25:BF:57  1500

Interface information for node "db2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.162   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1D:22:8D  1500
 eth0   192.168.1.165   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1D:22:8D  1500
 eth0   192.168.1.164   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:1D:22:8D  1500
 eth1   10.0.1.162      10.0.1.0        0.0.0.0         192.168.1.1     08:00:27:9F:B3:8C  1500

Interface information for node "db3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.173   192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:CB:D9:A9  1500
 eth1   10.0.1.173      10.0.1.0        0.0.0.0         192.168.1.1     08:00:27:FA:39:E3  1500
Check: Node connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  db1:eth0                        db1:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db3:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db2:eth0                        yes
  db1:eth0                        db3:eth0                        yes
  db2:eth0                        db2:eth0                        yes
  db2:eth0                        db2:eth0                        yes
  db2:eth0                        db3:eth0                        yes
  db2:eth0                        db2:eth0                        yes
  db2:eth0                        db3:eth0                        yes
  db2:eth0                        db3:eth0                        yes
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) db1,db2,db3
Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  db1:192.168.1.161               db1:192.168.1.163               passed
  db1:192.168.1.161               db2:192.168.1.162               passed
  db1:192.168.1.161               db2:192.168.1.165               passed
  db1:192.168.1.161               db2:192.168.1.164               passed
  db1:192.168.1.161               db3:192.168.1.173               passed
Result: TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity of subnet "10.0.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  db1:eth1                        db2:eth1                        yes
  db1:eth1                        db3:eth1                        yes
  db2:eth1                        db3:eth1                        yes
Result: Node connectivity passed for subnet "10.0.1.0" with node(s) db1,db2,db3
Check: TCP connectivity of subnet "10.0.1.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  db1:10.0.1.161                  db2:10.0.1.162                  passed
  db1:10.0.1.161