
Changing the IP Addresses of a RAC Cluster (Excluding the Private IP)

Environment: RDBMS 11.2.0.4

This note changes the RAC IP addresses, including the public, VIP, and SCAN IPs. The private IP is not changed.

Steps
1 Shut down the database, listeners, CRS, etc.
2 Edit /etc/hosts
3 Change the IP addresses at the OS level
4 Start CRS
5 Update the public, VIP, and SCAN IPs
6 Change the private IP -- not covered here

-- The original addresses are on the 192.168.2.x subnet; change them to 192.168.1.x. Leave the private network alone for now.

[root@host02 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


# public
#192.168.2.101  host01
192.168.2.102  host02
192.168.2.107   host03

#vip
#192.168.2.103  host01-vip
192.168.2.104  host02-vip
192.168.2.108  host03-vip

#priv
#192.168.0.101  host01-priv
192.168.0.102  host02-priv
192.168.0.107  host03-priv

#scan
192.168.2.111  cluster-scan

1 Shut down the database, listeners, CRS, etc.

-- First disable CRS autostart on OS boot (run on each node)

[root@host02 grid]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@host03 grid]# crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.

-- Shut down the database

[oracle@host02 ~]$ srvctl status database -d racdb
Instance racdb2 is running on node host02
Instance racdb3 is running on node host03
[oracle@host02 ~]$ srvctl stop database -d racdb
[oracle@host02 ~]$ srvctl status database -d racdb
Instance racdb2 is not running on node host02
Instance racdb3 is not running on node host03

-- Check the SCAN configuration; the old SCAN is on the 192.168.2 subnet

[grid@host02 ~]$ srvctl config scan
SCAN name: cluster-scan, Network: 1/192.168.2.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /cluster-scan/192.168.2.111
[grid@host02 ~]$ srvctl config listener
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521

-- Stop the listeners, then stop CRS on each node

[grid@host02 ~]$ srvctl stop listener
[root@host02 grid]# crsctl stop crs
[root@host03 grid]# crsctl stop crs

2 Edit /etc/hosts
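
The walkthrough never shows the file after editing; as a sketch, the updated /etc/hosts (same hostnames, public/VIP/SCAN entries moved to 192.168.1.x, private entries untouched) would look roughly like this:

```
# public
192.168.1.102  host02
192.168.1.107  host03

# vip
192.168.1.104  host02-vip
192.168.1.108  host03-vip

# priv (unchanged)
192.168.0.102  host02-priv
192.168.0.107  host03-priv

# scan
192.168.1.111  cluster-scan
```

Make the same edit on every node so the cluster resolves the new addresses consistently.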

3 Update the NIC configuration at the OS level

vi /etc/sysconfig/network-scripts/ifcfg-eth1 
service network restart 

Then reconfigure the NIC connection mode on the virtual machine (create a new virtual NIC on the host machine and bridge the VM onto it).
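
For illustration, ifcfg-eth1 on host02 might end up like the following (a sketch; the IPADDR and commented GATEWAY values are assumptions based on the new 192.168.1.x subnet, not taken from the original environment):

```
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.102     # assumed new public IP for host02
NETMASK=255.255.255.0
# GATEWAY=192.168.1.1    # assumed; set to the actual gateway if one is needed
```

Repeat with the corresponding address on host03, then `service network restart` on each node.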

4 Start CRS

[root@host02 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@host03 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

5 Update the public, VIP, and SCAN IPs
-- The public interface is still registered on the 192.168.2.x subnet

[root@host02 grid]# oifcfg getif
eth1  192.168.2.0  global  public
eth2  192.168.0.0  global  cluster_interconnect
[root@host03 grid]# oifcfg getif
eth1  192.168.2.0  global  public
eth2  192.168.0.0  global  cluster_interconnect

-- Update the public interface

[root@host02 grid]# oifcfg delif -global eth1
[root@host02 grid]# oifcfg setif -global eth1/192.168.1.0:public
[root@host03 grid]# oifcfg delif -global eth1
[root@host03 grid]# oifcfg setif -global eth1/192.168.1.0:public

-- Check the public IP again; it has been updated

[root@host02 grid]# oifcfg getif
eth2  192.168.0.0  global  cluster_interconnect
eth1  192.168.1.0  global  public
[root@host03 grid]# oifcfg getif
eth2  192.168.0.0  global  cluster_interconnect
eth1  192.168.1.0  global  public

-- Update the VIPs
-- The database and listeners must be stopped first

[root@host02 grid]# srvctl status vip -n host02
VIP host02-vip is enabled
VIP host02-vip is not running
[root@host02 grid]# srvctl status vip -n host03
VIP host03-vip is enabled
VIP host03-vip is not running

-- Check the current VIP configuration. The VIP address itself has already been picked up, but the subnet shown is still the old one.

[root@host02 grid]# olsnodes -s
host02  Active
host03  Active

[root@host02 grid]# srvctl config vip -n host03
VIP exists: /host03-vip/192.168.1.108/192.168.2.0/255.255.255.0/eth1, hosting node host03
[root@host02 grid]# srvctl config vip -n host02
VIP exists: /host02-vip/192.168.1.104/192.168.2.0/255.255.255.0/eth1, hosting node host02

-- The VIP entries above are wrong: the recorded subnet (192.168.2.0) no longer matches the new network, so modify them as follows

[root@host02 grid]# srvctl modify nodeapps -n host02 -A 192.168.1.104/255.255.255.0/eth1
[root@host02 grid]# srvctl modify nodeapps -n host03 -A 192.168.1.108/255.255.255.0/eth1

-- Check the VIPs again; the subnet has been updated to 192.168.1.0

[root@host02 grid]# srvctl config vip -n host02
VIP exists: /host02-vip/192.168.1.104/192.168.1.0/255.255.255.0/eth1, hosting node host02
[root@host02 grid]# srvctl config vip -n host03
VIP exists: /host03-vip/192.168.1.108/192.168.1.0/255.255.255.0/eth1, hosting node host03

-- Start the VIPs. It appears they were brought back up automatically by the modification above.

[root@host02 grid]# srvctl start vip -n host02
PRKO-2420 : VIP is already started on node(s): host02
[root@host02 grid]# srvctl start vip -n host03
PRKO-2420 : VIP is already started on node(s): host03

-- Update the listener addresses
-- Start the listener. The listener on host03 was already running, so only the one on host02 needs to be started.

[root@host02 grid]# srvctl start listener -n host02
[root@host02 grid]# lsnrctl status

-- Check the listener-related parameters local_listener and remote_listener
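
A minimal way to check them from SQL*Plus (a sketch: the commented alter system line only illustrates the kind of fix needed if local_listener still references an old 192.168.2.x VIP address; the address and sid shown are assumptions for instance racdb2 on host02):

```sql
-- Run on each instance and confirm no 192.168.2.x address remains:
show parameter local_listener
show parameter remote_listener

-- Example fix if local_listener is stale (values assumed):
-- alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.104)(PORT=1521))' sid='racdb2';
```

In 11.2, local_listener is usually maintained automatically by the agent against the VIP, so in many cases no manual change is needed.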

-- Update the SCAN configuration

-- Check the current SCAN configuration; the SCAN VIP is still on the 192.168.2 subnet

[root@host02 grid]# srvctl config scan
SCAN name: cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /cluster-scan/192.168.2.111

-- Modify the SCAN IP (srvctl re-resolves the SCAN name) and check again; the SCAN IP has been updated

[root@host02 grid]# srvctl modify scan -n cluster-scan
[root@host02 grid]# srvctl config scan
SCAN name: cluster-scan, Network: 1/192.168.1.0/255.255.255.0/eth1
SCAN VIP name: scan1, IP: /cluster-scan/192.168.1.111
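
Any client tnsnames.ora entries that hard-coded the old SCAN IP also need updating; a sketch of an entry resolving through the SCAN name instead (service name racdb assumed from this environment):

```
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb)
    )
  )
```

Using the SCAN name rather than a literal IP means a future readdressing would not require touching clients at all.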

-- Verify by connecting

@>conn sys/<password>@192.168.1.111/racdb as sysdba
Connected.
sys@192.168.1.111/racdb>select open_mode from v$database;

OPEN_MODE
----------------------------------------
READ WRITE

sys@192.168.1.111/racdb>

END