
Oracle 11g RAC: Adding and Removing Nodes (Reference)

References

How to Add Node/Instance or Remove Node/Instance with Oracle Clusterware and RAC (Doc ID 1332451.1)
How to Remove/Delete a Node From Grid Infrastructure Clusterware When the Node Has Failed (Doc ID 1262925.1)

Removing a Node

1. Remove the database instance (this exercise removes node stuaapp02)

  • Run on one of the surviving nodes:
[oracle@stuaapp01 rdbms]$ dbca -silent -deleteInstance -nodeList stuaapp02 -gdbName lenovo -instanceName lenovo2 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/lenovo4.log" for further details.
Verification:
SQL> select thread#,status from v$thread;

   THREAD# STATUS
---------- ------
         1 OPEN
[oracle@stuaapp01 rdbms]$  srvctl config database -d lenovo
Database unique name: lenovo
Database name: 
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/LENOVO/spfilelenovo.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: lenovo
Database instances: lenovo1
Disk Groups: DATA,FRA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
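The verification above can also be scripted. A minimal sketch (the helper name and the idea of grepping the `srvctl config database` output are this editor's assumptions, not part of the original procedure):

```shell
# instance_removed CONFIG_TEXT INSTANCE
# Returns success (0) when INSTANCE is absent from the
# "Database instances:" line of `srvctl config database` output.
# Hypothetical helper; name and approach are assumptions.
instance_removed() {
  ! printf '%s\n' "$1" | grep "^Database instances:" | grep -qw "$2"
}

# Intended usage on a surviving node (assumes srvctl is on the PATH):
#   instance_removed "$(srvctl config database -d lenovo)" lenovo2 && echo "lenovo2 removed"
```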

2. Remove the Oracle RAC database software

  • 1. Run on the node being removed:
[oracle@stuaapp02 trace]$ srvctl disable listener -l listener -n stuaapp02
[oracle@stuaapp02 trace]$ srvctl stop listener -l listener -n stuaapp02			
  • 2. Run on the node being removed:
[oracle@stuaapp02 trace]$ cd $ORACLE_HOME/oui/bin/
[oracle@stuaapp02 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@stuaapp02 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp02" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3855 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.	
  • 3. Run on the node being removed:
[oracle@stuaapp02 bin]$ cd $ORACLE_HOME/deinstall/
[oracle@stuaapp02 deinstall]$ pwd                 
/u01/app/oracle/product/11.2.0/dbhome_1/deinstall
[oracle@stuaapp02 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...			
  • 4. Run on the remaining node(s):
[oracle@stuaapp01 rdbms]$ cd $ORACLE_HOME/oui/bin/
[oracle@stuaapp01 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
[oracle@stuaapp01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp01"
Starting Oracle Universal Installer...

3. Remove the node from Grid Infrastructure

  • 1. Confirm the Grid home:
[grid@stuaapp01 ~]$ echo $ORACLE_HOME
/u01/app/11.2.0/grid
  • 2. Identify the node to be removed:
[grid@stuaapp01 ~]$ olsnodes -s -t
stuaapp01       Active  Unpinned
stuaapp02       Active  Unpinned
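The `olsnodes -s -t` output above (node name, state, pin status in whitespace-separated columns) can be parsed when scripting the node check. A small sketch; the helper name is this editor's assumption:

```shell
# nodes_with_state OLSNODES_OUTPUT STATE
# Prints the node names whose second column matches STATE
# (e.g. Active or Inactive). Hypothetical helper, editor's addition.
nodes_with_state() {
  printf '%s\n' "$1" | awk -v s="$2" '$2 == s {print $1}'
}

# Intended usage on a cluster node, as the grid user:
#   nodes_with_state "$(olsnodes -s -t)" Active
```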
  • 3. Disable the cluster applications and daemons

On the node being removed, run the script as the root user:

[root@stuaapp02 install]# pwd
/u01/app/11.2.0/grid/crs/install

[root@stuaapp02 install]# ./rootcrs.pl -deconfig -force 
  • 4. Delete the node from the cluster

Run on a remaining node as the root user:

Syntax from the reference document: # crsctl delete node -n node_to_be_deleted

[root@stuaapp01 ~]# crsctl delete node -n stuaapp02
CRS-4661: Node stuaapp02 successfully deleted.
  • 5. On the node being removed, run as the grid user
Syntax from the reference document: $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

[grid@stuaapp02 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@stuaapp02 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp02" CRS=TRUE -silent -local

Confirm that the file inventory.xml has not been updated.

  • 6. On the node being removed, run as the grid user
[grid@stuaapp02 deinstall]$ pwd
/u01/app/11.2.0/grid/deinstall
[grid@stuaapp02 deinstall]$ ./deinstall -local

Note: if the -local option is omitted, deinstall will remove the configuration for the entire cluster, which is an extremely dangerous operation.
This command prompts for several manual confirmations while it runs, so pay close attention!

  • 7. Run on the remaining node(s)

As the grid user:

Syntax from the reference document: $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
[grid@stuaapp01 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@stuaapp01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp01" CRS=TRUE -silent

As the oracle user:

Syntax from the reference document: $ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home "CLUSTER_NODES={remaining_nodes_list}"
[oracle@stuaapp01 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin			
[oracle@stuaapp01 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=stuaapp01"
  • 8. Verify, as the grid user
Syntax from the reference document: $ cluvfy stage -post nodedel -n node_list [-verbose]

[grid@stuaapp01 ~]$ cluvfy stage -post nodedel -n stuaapp02 -verbose
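cluvfy exits nonzero when any check fails, so the verification can be turned into a scriptable gate. A minimal sketch; the wrapper name is this editor's assumption:

```shell
# verify_step COMMAND [ARGS...]
# Runs a verification command and reports PASS or FAIL based on its
# exit status. Hypothetical wrapper, editor's addition.
verify_step() {
  if "$@"; then
    echo "PASS: $*"
  else
    echo "FAIL: $*"
  fi
}

# Intended usage, as the grid user:
#   verify_step cluvfy stage -post nodedel -n stuaapp02 -verbose
```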

Adding a Node

1. Add the node to the cluster, as the grid user

[grid@stuaapp01 bin]$ cluvfy stage -pre nodeadd -n stuaapp02
[grid@stuaapp01 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@stuaapp01 bin]$ pwd
/u01/app/11.2.0/grid/oui/bin
[grid@stuaapp01 bin]$  ./addNode.sh -silent "CLUSTER_NEW_NODES={stuaapp02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={stuaapp02-vip}"

[root@stuaapp02 oracle]# /u01/app/11.2.0/grid/root.sh

2. Add the Oracle database software, as the oracle user

[oracle@stuaapp01 bin]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/oui/bin

[oracle@stuaapp01 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={stuaapp02}"

3. Add the instance

Run on a node that already has a running instance:

dbca -silent -addInstance -nodeList stuaapp02 -gdbName lenovo -instanceName lenovo2 -sysDBAUserName sys -sysDBAPassword oracle

4. Add redo logs and undo (these steps were not performed in this exercise)

SQL> alter database add logfile thread 2 group 4 '+DATA' size 50m;
SQL> alter database add logfile thread 2 group 5 '+DATA' size 50m;
SQL> alter database add logfile thread 2 group 6 '+DATA' size 50m;
SQL> alter database enable public thread 2;
SQL> create undo tablespace undotbs2 datafile '+data' size 50m;  
SQL> alter system set undo_tablespace=undotbs2 scope=spfile sid='lenovo2';
SQL> alter system set instance_number=2 scope=spfile sid='lenovo2';
SQL> alter system set cluster_database_instances=2 scope=spfile sid='*';

5. Add the instance to the cluster configuration

[oracle@stuaapp01 dbs]$ srvctl add instance -d lenovo -i lenovo2 -n stuaapp02
[oracle@stuaapp01 dbs]$ srvctl status database -d lenovo
Instance lenovo1 is running on node stuaapp01
Instance lenovo2 is running on node stuaapp02
[oracle@stuaapp01 dbs]$ srvctl stop instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ srvctl status database -d lenovo
Instance lenovo1 is running on node stuaapp01
Instance lenovo2 is not running on node stuaapp02
[oracle@stuaapp01 dbs]$ srvctl start instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ srvctl stop instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ srvctl start instance -d lenovo -i lenovo2
[oracle@stuaapp01 dbs]$ exit
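The start/stop cycle above can be double-checked by counting how many instances report "is running" in the `srvctl status database` output. A sketch; the helper name and approach are this editor's assumptions:

```shell
# running_instances STATUS_TEXT
# Counts the "is running on node" lines in `srvctl status database`
# output; "is not running" lines do not match. Hypothetical helper,
# editor's addition.
running_instances() {
  printf '%s\n' "$1" | grep -c "is running on node" || true
}

# Intended usage on any cluster node:
#   test "$(running_instances "$(srvctl status database -d lenovo)")" -eq 2
```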