Replacing the CRS Disk Group in 11gR2 RAC



 

1. Disk (PV) Preparation

    In the production environment, disks are carved from the storage array in advance and presented to both nodes of the RAC system (node1, node2).

    The newly added disks are hdisk14 through hdisk24.

 

1.1 Disk Usage Plan

 

    Disk name   Size   Storage   Planned use        Failure group
    hdisk14     500G   NEW       DATA disk group    DATA_0000
    hdisk15     500G   NEW       DATA disk group    DATA_0001
    hdisk16     500G   NEW       DATA disk group    DATA_0002
    hdisk17     500G   NEW       DATA disk group    DATA_0003
    hdisk18     500G   NEW       DATA disk group    DATA_0004
    hdisk19     50G    NEW       TOCR disk group    -
    hdisk20     50G    NEW       NCRS disk group    -
    hdisk21     50G    NEW       NCRS disk group    -
    hdisk22     200G   -         Archive logs       -
    hdisk23     200G   -         Archive logs       -
    hdisk24     50G    OLD       NCRS disk group    -

 

1.2 Check the disk attributes (on both nodes)

 

    lsattr -El hdisk14 | grep reserve

    lsattr -El hdisk15 | grep reserve

    lsattr -El hdisk16 | grep reserve

    lsattr -El hdisk17 | grep reserve

    lsattr -El hdisk18 | grep reserve

    lsattr -El hdisk19 | grep reserve

    lsattr -El hdisk20 | grep reserve

    lsattr -El hdisk21 | grep reserve

    lsattr -El hdisk24 | grep reserve

 

 

1.3 Change the disk attributes to allow concurrent access (on both nodes)

 

    chdev -l hdisk14 -a reserve_policy=no_reserve

    chdev -l hdisk15 -a reserve_policy=no_reserve

    chdev -l hdisk16 -a reserve_policy=no_reserve

    chdev -l hdisk17 -a reserve_policy=no_reserve

    chdev -l hdisk18 -a reserve_policy=no_reserve

    chdev -l hdisk19 -a reserve_policy=no_reserve

    chdev -l hdisk20 -a reserve_policy=no_reserve

    chdev -l hdisk21 -a reserve_policy=no_reserve

    chdev -l hdisk24 -a reserve_policy=no_reserve

 

    chdev -l hdisk14 -a reserve_lock=no

    chdev -l hdisk15 -a reserve_lock=no

    chdev -l hdisk16 -a reserve_lock=no

    chdev -l hdisk17 -a reserve_lock=no

    chdev -l hdisk18 -a reserve_lock=no

    chdev -l hdisk19 -a reserve_lock=no

    chdev -l hdisk20 -a reserve_lock=no

    chdev -l hdisk21 -a reserve_lock=no

 

 

1.4 Change the owner and permissions of the character devices (on both nodes)

 

    chown grid:dba /dev/rhdisk13

    chown grid:dba /dev/rhdisk14

    chown grid:dba /dev/rhdisk15

    chown grid:dba /dev/rhdisk16

    chown grid:dba /dev/rhdisk17

    chown grid:dba /dev/rhdisk18

    chown grid:dba /dev/rhdisk19

    chown grid:dba /dev/rhdisk20

    chown grid:dba /dev/rhdisk21

    chown grid:dba /dev/rhdisk24

 

    chmod 660 /dev/rhdisk13

    chmod 660 /dev/rhdisk14

    chmod 660 /dev/rhdisk15

    chmod 660 /dev/rhdisk16

    chmod 660 /dev/rhdisk17

    chmod 660 /dev/rhdisk18

    chmod 660 /dev/rhdisk19

    chmod 660 /dev/rhdisk20

    chmod 660 /dev/rhdisk21

    chmod 660 /dev/rhdisk24
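Since the same changes are applied to every new disk, the chdev, chown and chmod calls above can also be driven by a short loop. A minimal sketch, assuming ksh/sh on AIX, run as root on both nodes, over the disks listed in section 1.1 (disks whose driver exposes reserve_lock instead of reserve_policy keep the chdev -a reserve_lock=no form shown above):

    # apply reserve policy, ownership and permissions to each new disk in one pass
    for d in 14 15 16 17 18 19 20 21 24
    do
        chdev -l hdisk$d -a reserve_policy=no_reserve   # allow concurrent access from both nodes
        chown grid:dba /dev/rhdisk$d                    # character device owned by the grid user
        chmod 660 /dev/rhdisk$d                         # read/write for owner and group
    done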

 

 

1.5 Check the disk device information

 

    ls -l /dev/rhdisk*

2. Create the ASM disk groups (running on node 1 is sufficient)

    Create the ASM disk groups (as the grid user):
    [grid]$ asmca

    Enter the disk group name, choose external redundancy, and then select the disks.
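If a command-line alternative to asmca is preferred, the disk groups can also be created from SQL*Plus on the ASM instance. A minimal sketch, assuming the disk paths from section 1.1 and that the new NCRS group will later hold the OCR and voting files (which requires compatible.asm of at least 11.2); names and paths should be adjusted to the actual environment:

    sqlplus / as sysasm

    -- new CRS disk group on the new storage, external redundancy
    create diskgroup NCRS external redundancy
      disk '/dev/rhdisk20', '/dev/rhdisk21'
      attribute 'compatible.asm' = '11.2';

    -- temporary disk group used below as an extra OCR mirror
    create diskgroup TOCR external redundancy
      disk '/dev/rhdisk19'
      attribute 'compatible.asm' = '11.2';

A disk group created this way is mounted only on the local ASM instance; it still has to be mounted on node 2 (alter diskgroup ... mount) before resources on that node can use it, whereas asmca mounts it on all nodes automatically.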

3. Enable archive logging for the database

 

3.1 On one node, prepare the archive log configuration in Oracle

 

    sqlplus / as sysdba

    create pfile='/home/oracle/racdbinit.ora' from spfile;

    alter system set log_archive_dest_1='location=/arch1' sid='racdb1';

    alter system set log_archive_dest_1='location=/arch2' sid='racdb2';

 

3.2 Stop all database instances

 

    oracle:

    srvctl stop database -d racdb

 

3.3 Start one instance in mount state

 

    sqlplus / as sysdba

    startup mount;

 

3.4 Enable archive log mode:

 

    alter database archivelog;
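The new log mode can be confirmed from the same session before the instances are bounced:

    archive log list
    select log_mode from v$database;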

 

3.5 Stop all database instances

 

    oracle:

    srvctl stop database -d racdb

 

3.6 Start all database instances

 

    oracle:

    srvctl start database -d racdb

 

4. Back up the database

    The host engineer NFS-mounts /arch1 from node 1 onto node 2 (/arch2); on node 2 the following mount command is used:

    mount -v nfs -o rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp 192.1.2.51:/arch1 /arch1

    Log in to node 2 as the oracle user and back up the database with RMAN. The backup script is as follows:

    rman target /

    backup database format '/arch2/rman/racdb_%U';
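If control file and spfile copies are wanted alongside the database backup, a slightly fuller RMAN script can be used; a sketch only, writing to the same NFS path as above:

    rman target /
    run {
      backup database format '/arch2/rman/racdb_%U';
      backup current controlfile format '/arch2/rman/ctl_%U';
      backup spfile format '/arch2/rman/spf_%U';
    }
    list backup summary;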

5. Stop the database

    As the oracle user, stop all database instances:

    srvctl stop database -d racdb

6. Replace the CRS disk group

 

6.1 Check the current cluster status

 

[grid@node1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ARCH1.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.CRS1.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.DATA1.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.LISTENER.lsnr

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.NCRS.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.asm

ONLINE ONLINE node1 Started

ONLINE ONLINE node2 Started

ora.gsd

OFFLINE OFFLINE node1

OFFLINE OFFLINE node2

ora.net1.network

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.ons

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.registry.acfs

ONLINE ONLINE node1

ONLINE ONLINE node2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1

ora.cvu

1 ONLINE ONLINE node1

ora.node1.vip

1 ONLINE ONLINE node1

ora.node2.vip

1 ONLINE ONLINE node2

ora.oc4j

1 OFFLINE OFFLINE

ora.racdb.db

1 ONLINE ONLINE node2 Open

2 ONLINE ONLINE node1 Open

ora.scan1.vip

1 ONLINE ONLINE node1

 

6.2 Add an OCR mirror on a new disk group

 

    On node 1, as the root user:

    cd /app/grid/11.2.0/grid/bin
    ./ocrconfig -add +TOCR
    ./ocrcheck

    (/app/grid/11.2.0/grid/bin is the bin directory under the grid user's ORACLE_HOME; ocrcheck verifies the OCR storage status.)

    The session is recorded as follows:

# ./ocrconfig -add +TOCR

# ./ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 3044

Available space (kbytes) : 259076

ID : 1255075770

Device/File Name : +ORC

Device/File integrity check succeeded

Device/File Name : +TOCR

Device/File integrity check succeeded

 

Device/File not configured

 

Device/File not configured

 

Device/File not configured

 

Cluster registry integrity check succeeded

 

Logical corruption check succeeded
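Before the original OCR location is removed in the next step, it is worth confirming that recent automatic OCR backups exist; as root, from the same bin directory:

    ./ocrconfig -showbackup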

 

6.3 Replace the original OCR disk group

 

    # ./ocrconfig -replace +ORC -replacement +NCRS

# ./ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 3044

Available space (kbytes) : 259076

ID : 1255075770

Device/File Name : +NCRS

Device/File integrity check succeeded

Device/File Name : +TOCR

Device/File integrity check succeeded

 

Device/File not configured

 

Device/File not configured

 

Device/File not configured

 

Cluster registry integrity check succeeded

 

Logical corruption check succeeded
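As an extra sanity check, the OCR pointer file on each node should now reference only the new locations. The path below is an assumption based on the usual AIX/Linux location and may differ on other platforms:

    cat /etc/oracle/ocr.loc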

7. Migrate the voting disks

    Log in to one node as the grid user.

 

7.1 Check where the voting disks are stored

 

    [grid@node1 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE a2d3e9c8b0094fcabfeee701fe3594a5 (ORC) [ORC]

Located 3 voting disk(s)

 

7.2 Change the voting disk storage location

 

    [grid@node1 ~]$ crsctl replace votedisk +NCRS

    [grid@node1 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE a2d3e9c8b0094fcabfeee701fe3594a5 (ORCL:CRS1) [CRS1]

2. ONLINE 973e54e8c5c94f0fbf4b746820c14005 (ORCL:CRS2) [CRS1]

3. ONLINE 197c715135a94f4abf545095b9c8a186 (ORCL:CRS3) [CRS1]

Located 3 voting disk(s)

8. Migrate the ASM instance spfile

Perform the following on node 1 as the grid user.

 

8.1 Log in to the ASM instance

 

    [grid@node1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 1 11:07:49 2014

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

SQL>

 

 

8.2 Check the current spfile location

 

    show parameter spfile

SQL> show parameter spfile

 

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +ORC/rac-cluster/asmparameterfile/registry.253.801158513

 

8.3 Create a pfile from the current spfile

 

    SQL>create pfile='/home/grid/asminit.ora' from spfile='+ORC/rac-cluster/asmparameterfile/registry.253.801158513';

 

8.4 Create a new spfile on the new disk group from the pfile

 

    SQL>create spfile='+NCRS' from pfile='/home/grid/asminit.ora';
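After the cluster restart in the next section, the spfile location registered in the GPnP profile can be confirmed as the grid user with asmcmd, for example:

    asmcmd spget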

9. Restart the cluster

Restart CRS as the root user; run the following on both nodes:

    /u01/app/11.2.0/grid/bin/crsctl stop crs

    /u01/app/11.2.0/grid/bin/crsctl start crs

 

[root@node2 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node2'

CRS-2673: Attempting to stop 'ora.crsd' on 'node2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'

CRS-2673: Attempting to stop 'ora.CRS1.dg' on 'node2'

CRS-2673: Attempting to stop 'ora.NCRS.dg' on 'node2'

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'

CRS-2673: Attempting to stop 'ora.racdb.db' on 'node2'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node2'

CRS-2677: Stop of 'ora.racdb.db' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.ARCH1.dg' on 'node2'

CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node2'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'

CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.node2.vip' on 'node1'

CRS-2676: Start of 'ora.node2.vip' on 'node1' succeeded

CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded

CRS-2677: Stop of 'ora.ARCH1.dg' on 'node2' succeeded

CRS-2677: Stop of 'ora.DATA1.dg' on 'node2' succeeded

CRS-2677: Stop of 'ora.CRS1.dg' on 'node2' succeeded

CRS-2677: Stop of 'ora.NCRS.dg' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'node2'

CRS-2677: Stop of 'ora.asm' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'node2'

CRS-2677: Stop of 'ora.ons' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'

CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed

CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'node2'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node2'

CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'

CRS-2673: Attempting to stop 'ora.evmd' on 'node2'

CRS-2673: Attempting to stop 'ora.asm' on 'node2'

CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'node2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded

CRS-2677: Stop of 'ora.asm' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'

CRS-2677: Stop of 'ora.drivers.acfs' on 'node2' succeeded

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'node2'

CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'node2'

CRS-2677: Stop of 'ora.crf' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'node2'

CRS-2677: Stop of 'ora.gipcd' on 'node2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'node2'

CRS-2677: Stop of 'ora.gpnpd' on 'node2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

 

 

[root@node2 ~]# /u01/app/11.2.0/grid/bin/crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

 

[grid@node1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ARCH1.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.CRS1.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.DATA1.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.LISTENER.lsnr

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.NCRS.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.asm

ONLINE ONLINE node1 Started

ONLINE ONLINE node2 Started

ora.gsd

OFFLINE OFFLINE node1

OFFLINE OFFLINE node2

ora.net1.network

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.ons

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.registry.acfs

ONLINE ONLINE node1

ONLINE ONLINE node2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node1

ora.cvu

1 ONLINE ONLINE node1

ora.node1.vip

1 ONLINE ONLINE node1

ora.node2.vip

1 ONLINE ONLINE node2

ora.oc4j

1 OFFLINE OFFLINE

ora.racdb.db

1 ONLINE OFFLINE Instance Shutdown,STARTING

2 ONLINE ONLINE node1 Open

ora.scan1.vip

1 ONLINE ONLINE node1
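Besides the resource listing above, a daemon-level check on either node confirms that CSS, CRS and EVM are up cluster-wide before the database is restarted:

    crsctl check cluster -all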

 

 

10. Start the database

As the oracle user, start all instances; run on one node:

    srvctl start database -d racdb
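To confirm that both instances are running, for example:

    srvctl status database -d racdb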

11. Add disks to the DATA disk group

 

11.1 Check the current failure group layout of the DATA disk group

 

select name,group_number,disk_number,state,failgroup,path from v$asm_disk;

 

11.2 Add one 500 GB disk to each failure group

 

alter diskgroup DATA add failgroup DATA_0000 disk '/dev/rhdisk14';

alter diskgroup DATA add failgroup DATA_0001 disk '/dev/rhdisk15';

alter diskgroup DATA add failgroup DATA_0002 disk '/dev/rhdisk16';

alter diskgroup DATA add failgroup DATA_0003 disk '/dev/rhdisk17';

alter diskgroup DATA add failgroup DATA_0004 disk '/dev/rhdisk18';
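The rebalance triggered by these additions runs at the disk group's default power. If the maintenance window allows, it can be sped up by raising the power limit; a sketch, with the value 4 chosen arbitrarily (the standard range in 11.2 is 0 to 11):

    alter diskgroup DATA rebalance power 4;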

 

11.3 Check the rebalance progress on the ASM instance

 

select * from v$asm_operation;

When the query returns no rows, as shown below, the rebalance has completed.

SQL> select * from v$asm_operation;

no rows selected
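While the rebalance is still running, a narrower query gives an estimate of the remaining work (columns from V$ASM_OPERATION):

    select group_number, operation, state, power, sofar, est_work, est_minutes
      from v$asm_operation;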

 

At this point, all of the migration work is complete.