
Oracle 11G RAC Basic Administration

This post records basic RAC administration techniques; notes and information will be added here bit by bit.

Common RAC terminology

RAC: Real Application Clusters

CRS: Cluster Ready Services

CSS: Cluster Synchronization Services

OCR: Oracle Cluster Registry

Voting disk: the quorum disk used for cluster membership voting

1. Log Management

Clusterware background process log path: /u01/app/11.2.0/grid/log/rac1

This directory contains log subdirectories for the various background processes; check the log files of the corresponding component as needed.

[grid@rac1 rac1]$ ll
total 112
drwxr-x--- 2 grid oinstall  4096 Jan  7 17:25 admin
drwxrwxr-t 4 root oinstall  4096 Jan  7 17:25 agent
-rw-rw-r-- 1 root root     46979 Jan 14 11:05 alertrac1.log
drwxr-x--- 2 grid oinstall  4096 Jan 14 11:29 client
drwxr-x--- 2 root oinstall  4096 Jan  8 12:41 crsd
drwxr-x--- 2 grid oinstall  4096 Jan  7 17:26 cssd
drwxr-x--- 2 root oinstall  4096 Jan 14 11:54 ctssd
drwxr-x--- 2 grid oinstall  4096 Jan  8 14:24 diskmon
drwxr-x--- 2 grid oinstall  4096 Jan  7 17:30 evmd
drwxr-x--- 2 grid oinstall  4096 Jan  7 17:26 gipcd
drwxr-x--- 2 root oinstall  4096 Jan  7 17:25 gnsd
drwxr-x--- 2 grid oinstall  4096 Jan 14 11:02 gpnpd
drwxr-x--- 2 grid oinstall  4096 Jan  7 17:26 mdnsd
drwxr-x--- 2 root oinstall  4096 Jan  7 17:26 ohasd
drwxrwxr-t 5 grid oinstall  4096 Jan 13 00:04 racg
drwxr-x--- 2 grid oinstall  4096 Jan 14 11:03 srvm
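For example, to follow the clusterware alert log of this node (the alertrac1.log file shown in the listing above):

tail -f /u01/app/11.2.0/grid/log/rac1/alertrac1.log

The daemon-specific logs sit one level down, e.g. crsd/crsd.log and cssd/ocssd.log.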

2. Management Commands

The crsctl command

crsctl check crs

[grid@rac1 rac1]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

crsctl check css

[grid@rac1 rac1]$ crsctl check css
CRS-4529: Cluster Synchronization Services is online

crsctl check cluster -all

[grid@rac1 rac1]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

crsctl start/stop resources

crsctl stop crs
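A minimal sketch of the usual stop/start commands (run as root; adjust to your environment):

crsctl stop crs                    # stop the whole Grid Infrastructure stack on the local node
crsctl start crs                   # start it again on the local node
crsctl stop cluster -all           # stop the clusterware stack on every node (OHAS stays up)
crsctl start cluster -all
crsctl stop resource ora.orcl.db   # stop/start a single registered resource
crsctl start resource ora.orcl.db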

View the dependencies between component resources:

crsctl stat res ora.orcl.db -p

crsctl stat res ora.scan1.vip -p
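Because the -p output is long, filtering for just the dependency attributes is often enough, for example:

crsctl stat res ora.orcl.db -p | grep -E 'START_DEPENDENCIES|STOP_DEPENDENCIES'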

The crs_stat command

crs_stat -t: view all component resources and services

[grid@rac1 rac1]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.BACK.dg    ora....up.type ONLINE    ONLINE    rac1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora.FILES.dg   ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.eons       ora.eons.type  ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1


crs_stat -v: view a specified component resource in detail

[grid@rac1 rac1]$ crs_stat -v ora.scan1.vip
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
RESTART_ATTEMPTS=0
RESTART_COUNT=0
FAILURE_THRESHOLD=0
FAILURE_COUNT=0
TARGET=ONLINE
STATE=ONLINE on rac

Registering a resource with crsctl
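In 11.2 a custom resource is registered with crsctl add resource. A hypothetical sketch (the resource name myapp and the action script path are made up for illustration):

crsctl add resource myapp -type cluster_resource -attr "ACTION_SCRIPT=/u01/scripts/myapp.scr,CHECK_INTERVAL=30"
crsctl start resource myapp
crsctl stat res myapp
crsctl delete resource myapp     # unregister it again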

OCR

ocrconfig -showbackup

[grid@rac1 rac1]$ ocrconfig -showbackup

rac2     2015/01/13 22:41:57     /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr

rac2     2015/01/13 18:41:57     /u01/app/11.2.0/grid/cdata/rac-cluster/backup01.ocr

rac2     2015/01/13 14:41:56     /u01/app/11.2.0/grid/cdata/rac-cluster/backup02.ocr

rac1     2015/01/12 15:33:05     /u01/app/11.2.0/grid/cdata/rac-cluster/day.ocr

rac1     2015/01/07 21:45:05     /u01/app/11.2.0/grid/cdata/rac-cluster/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
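The PROT-25 message just means no manual backup has been taken yet. One can be created on demand (run as root); by default it goes into the same cdata directory:

ocrconfig -manualbackup
ocrconfig -showbackup manual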

Exporting and importing the OCR

ocrconfig -export

ocrconfig -import
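A sketch of a logical export and import (run as root; the file name is illustrative, and -import normally requires the clusterware stack to be stopped):

ocrconfig -export /backup/ocr_20150114.dmp
ocrconfig -import /backup/ocr_20150114.dmp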

Restoring the OCR

ocrconfig -restore
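For example, restoring from one of the automatic backups listed above (run as root, with the clusterware stack stopped on all nodes):

ocrconfig -restore /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr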

ocrcheck

[grid@rac1 rac1]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2568
         Available space (kbytes) :     259552
         ID                       : 1018222697
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

VOTINGDISK

crsctl query css votedisk
[grid@rac1 rac1]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   69b8a9f606a74f0cbf2ba5e9bded9d9e (ORCL:VOL1) [DATA]
 2. ONLINE   715323d6e1a84f39bf492e94c7bb208b (ORCL:VOL2) [DATA]
 3. ONLINE   64ffd38b5a2d4f6dbf4a47adef5c9340 (ORCL:VOL3) [DATA]
Located 3 voting disk(s).
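In 11.2, voting disks stored in ASM are not added or deleted one by one; they are relocated as a set with crsctl replace votedisk. A sketch, assuming the +BACK disk group seen in the crs_stat -t output above is a suitable target:

crsctl replace votedisk +BACK
crsctl query css votedisk     # verify the new location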

The srvctl command

Starting and stopping instances

srvctl stop database -d db_unique_name
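A few common variants, as a sketch (the database unique name orcl and instance orcl1 match the examples below):

srvctl stop database -d orcl                # stop all instances of the database
srvctl stop database -d orcl -o immediate   # specify the shutdown mode
srvctl stop instance -d orcl -i orcl1       # stop only one instance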

While the database is shutting down, you can follow the instance shutdown in its alert log:

tail -f /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log

Check the component resources with crs_stat -t:

ora.orcl.db    ora....se.type OFFLINE   OFFLINE
Or run:

[oracle@rac1 ~]$ crs_stat -v ora.orcl.db
NAME=ora.orcl.db
TYPE=ora.database.type
RESTART_ATTEMPTS=2
RESTART_COUNT=0
[email protected](rac1)=orcl1
[email protected](rac2)=orcl2
[email protected](rac1)=orcl1
[email protected](rac2)=orcl2
FAILURE_THRESHOLD=1
FAILURE_COUNT=0
TARGET=OFFLINE
STATE=OFFLINE

srvctl start database -d db_unique_name

Startup can be monitored in the same way.
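The corresponding start variants, as a sketch:

srvctl start database -d orcl
srvctl start instance -d orcl -i orcl1 -o mount   # start one instance to MOUNT only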

Checking the database status

srvctl status database -d db_unique_name

[oracle@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is not running on node rac1
Instance orcl2 is not running on node rac2
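A few related srvctl checks that are useful alongside the database status (a sketch, run as the oracle or grid user):

srvctl config database -d orcl     # stored configuration of the database
srvctl status nodeapps             # VIP, network, ONS, GSD status per node
srvctl status scan_listener        # SCAN listener status
srvctl status asm                  # ASM instance status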


3. System Processes

RAC processes and ASM instance processes

ps -U grid -f

[root@rac1 ~]# ps -U grid -f
UID        PID  PPID  C STIME TTY          TIME CMD
grid      2800     1  0 11:02 ?        00:00:21 /u01/app/11.2.0/grid/bin/oraagent.bin
grid      2813     1  0 11:02 ?        00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
grid      2818     1  0 11:02 ?        00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid      2832     1  0 11:02 ?        00:00:14 /u01/app/11.2.0/grid/bin/gpnpd.bin
grid      2893     1  4 11:02 ?        00:07:01 /u01/app/11.2.0/grid/bin/ocssd.bin
grid      2909     1  0 11:02 ?        00:00:12 /u01/app/11.2.0/grid/bin/diskmon.bin -d -f
grid      2989     1  0 11:02 ?        00:00:26 /u01/app/11.2.0/grid/bin/evmd.bin
grid      3064     1  0 11:03 ?        00:00:01 asm_pmon_+ASM1
grid      3066     1  2 11:03 ?        00:03:30 asm_vktm_+ASM1
grid      3070     1  0 11:03 ?        00:00:00 asm_gen0_+ASM1
grid      3072     1  0 11:03 ?        00:00:05 asm_diag_+ASM1
grid      3074     1  0 11:03 ?        00:00:00 asm_ping_+ASM1
grid      3076     1  0 11:03 ?        00:00:00 asm_psp0_+ASM1
grid      3078     1  0 11:03 ?        00:00:28 asm_dia0_+ASM1
grid      3080     1  0 11:03 ?        00:00:15 asm_lmon_+ASM1
grid      3083     1  0 11:03 ?        00:00:11 asm_lmd0_+ASM1
grid      3087     1  0 11:03 ?        00:00:34 asm_lms0_+ASM1
grid      3091     1  0 11:03 ?        00:00:00 asm_lmhb_+ASM1
grid      3093     1  0 11:03 ?        00:00:00 asm_mman_+ASM1
grid      3095     1  0 11:03 ?        00:00:00 asm_dbw0_+ASM1
grid      3097     1  0 11:03 ?        00:00:00 asm_lgwr_+ASM1
grid      3099     1  0 11:03 ?        00:00:00 asm_ckpt_+ASM1
grid      3101     1  0 11:03 ?        00:00:00 asm_smon_+ASM1
grid      3103     1  0 11:03 ?        00:00:03 asm_rbal_+ASM1
grid      3105     1  0 11:03 ?        00:00:03 asm_gmon_+ASM1
grid      3107     1  0 11:03 ?        00:00:00 asm_mmon_+ASM1
grid      3109     1  0 11:03 ?        00:00:01 asm_mmnl_+ASM1
grid      3111     1  0 11:03 ?        00:00:10 /u01/app/11.2.0/grid/bin/oclskd.bin
grid      3115     1  0 11:03 ?        00:00:00 asm_lck0_+ASM1
grid      3161     1  0 11:03 ?        00:00:00 asm_asmb_+ASM1
grid      3163     1  0 11:03 ?        00:00:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=be
grid      3189     1  0 11:03 ?        00:00:00 oracle+ASM1_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
grid      3234  2989  0 11:03 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/l
grid      3280     1  0 11:03 ?        00:00:09 /u01/app/11.2.0/grid/bin/oraagent.bin
grid      3411     1  0 11:03 ?        00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
grid      3412  3411  0 11:03 ?        00:00:00 /u01/app/11.2.0/grid/opmn/bin/ons -d
grid      3427     1  0 11:03 ?        00:00:15 /u01/app/11.2.0/grid/jdk/jre//bin/java -Doracle.supercluster.cluster
grid      3512     1  0 11:03 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      3514     1  0 11:03 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
grid      7850  7846  0 13:24 pts/1    00:00:00 -bash
grid      7873  7850  2 13:24 pts/1    00:00:18 /u01/app/11.2.0/grid/jdk/jre/bin/java -DORACLE_HOME=/u01/app/11.2.0/
grid      8025  7873  0 13:24 pts/1    00:00:00 /u01/app/11.2.0/grid/bin/sqlplus -S -N
grid      8034  8025  0 13:24 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
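A quick sketch for verifying that the key ASM background processes are alive without reading the whole listing:

ps -ef | grep -E 'asm_(pmon|smon|lmon|lms0)_' | grep -v grep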

Viewing the processes owned by root and oracle

[root@rac1 ~]# ps -U root -f|grep /u01
root      2477     1  1 11:00 ?        00:02:10 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
root      2843     1  1 11:02 ?        00:01:55 /u01/app/11.2.0/grid/bin/cssdmonitor
root      2862     1  1 11:02 ?        00:01:43 /u01/app/11.2.0/grid/bin/cssdagent
root      2896     1  0 11:02 ?        00:00:04 /u01/app/11.2.0/grid/bin/orarootagent.bin
root      2974     1  0 11:02 ?        00:00:06 /u01/app/11.2.0/grid/bin/octssd.bin reboot
root      3145     1  0 11:03 ?        00:01:07 /u01/app/11.2.0/grid/bin/crsd.bin reboot
root      3186     1  0 11:03 ?        00:00:10 /u01/app/11.2.0/grid/bin/oclskd.bin
root      3284     1  0 11:03 ?        00:00:44 /u01/app/11.2.0/grid/bin/orarootagent.bin


[root@rac1 ~]# ps -U oracle -f
UID        PID  PPID  C STIME TTY          TIME CMD
oracle    3563     1  0 11:03 ?        00:00:18 /u01/app/11.2.0/grid/bin/oraagent.bin
oracle    3679     1  0 11:04 ?        00:00:01 ora_pmon_orcl1
oracle    3681     1  2 11:04 ?        00:03:33 ora_vktm_orcl1
oracle    3685     1  0 11:04 ?        00:00:00 ora_gen0_orcl1
oracle    3687     1  0 11:04 ?        00:00:06 ora_diag_orcl1
oracle    3689     1  0 11:04 ?        00:00:00 ora_dbrm_orcl1
oracle    3691     1  0 11:04 ?        00:00:00 ora_ping_orcl1
oracle    3693     1  0 11:04 ?        00:00:00 ora_psp0_orcl1
oracle    3695     1  0 11:04 ?        00:00:00 ora_acms_orcl1
oracle    3697     1  0 11:04 ?        00:00:33 ora_dia0_orcl1
oracle    3699     1  0 11:04 ?        00:00:15 ora_lmon_orcl1
oracle    3701     1  0 11:04 ?        00:00:12 ora_lmd0_orcl1
oracle    3705     1  0 11:04 ?        00:01:07 ora_lms0_orcl1
oracle    3709     1  0 11:04 ?        00:00:00 ora_rms0_orcl1
oracle    3711     1  0 11:04 ?        00:00:00 ora_lmhb_orcl1
oracle    3713     1  0 11:04 ?        00:00:00 ora_mman_orcl1
oracle    3715     1  0 11:04 ?        00:00:01 ora_dbw0_orcl1
oracle    3717     1  0 11:04 ?        00:00:01 ora_lgwr_orcl1
oracle    3719     1  0 11:04 ?        00:00:03 ora_ckpt_orcl1
oracle    3721     1  0 11:04 ?        00:00:00 ora_smon_orcl1
oracle    3723     1  0 11:04 ?        00:00:00 ora_reco_orcl1
oracle    3725     1  0 11:04 ?        00:00:00 ora_rbal_orcl1
oracle    3727     1  0 11:04 ?        00:00:00 ora_asmb_orcl1
oracle    3729     1  0 11:04 ?        00:00:02 ora_mmon_orcl1
oracle    3731     1  0 11:04 ?        00:00:01 ora_mmnl_orcl1
oracle    3733     1  0 11:04 ?        00:00:00 ora_d000_orcl1
oracle    3735     1  0 11:04 ?        00:00:00 ora_s000_orcl1
oracle    3737     1  0 11:04 ?        00:00:11 /u01/app/11.2.0/grid/bin/oclskd.bin
grid      3739     1  0 11:04 ?        00:00:00 oracle+ASM1_asmb_orcl1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle    3742     1  0 11:04 ?        00:00:04 ora_lck0_orcl1
oracle    3747     1  0 11:04 ?        00:00:00 ora_mark_orcl1
oracle    3753     1  0 11:04 ?        00:00:00 ora_rsmn_orcl1
oracle    3813     1  0 11:04 ?        00:00:00 ora_arc0_orcl1
oracle    3815     1  0 11:04 ?        00:00:00 ora_arc1_orcl1
oracle    3817     1  0 11:04 ?        00:00:00 ora_arc2_orcl1
oracle    3819     1  0 11:04 ?        00:00:00 ora_arc3_orcl1
oracle    3837     1  0 11:05 ?        00:00:00 ora_gtx0_orcl1
oracle    3839     1  0 11:05 ?        00:00:00 ora_rcbg_orcl1
oracle    3841     1  0 11:05 ?        00:00:00 ora_qmnc_orcl1
oracle    3860     1  0 11:05 ?        00:00:09 oracleorcl1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle    3874     1  0 11:05 ?        00:00:00 ora_q000_orcl1
oracle    3878     1  0 11:05 ?        00:00:00 ora_q001_orcl1
oracle    3904     1  0 11:05 ?        00:00:03 ora_cjq0_orcl1
oracle    4046     1  0 11:10 ?        00:00:00 ora_smco_orcl1
oracle    5499     1  0 12:04 ?        00:00:00 ora_pz99_orcl1
oracle    8421     1  0 13:40 ?        00:00:00 ora_w000_orcl1
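Similarly, the RAC-specific background processes of the database instance (global enqueue and cache services) can be spot-checked like this:

ps -ef | grep -E 'ora_(lmon|lmd0|lms0|lck0)_orcl1' | grep -v grep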

