
A Summary of 11g RAC Cluster Startup/Shutdown, Resource Status Checks, and Configuration Viewing

Overview:
Part 1: Starting and stopping the cluster

1. Manually start the RAC cluster
[root@node1 bin]# ./crsctl start cluster -all
2. Check the RAC cluster status
[root@node1 bin]# ./crsctl stat res -t
3. Stop the RAC cluster
[root@node1 bin]# ./crsctl stop cluster -all
————————————————————————————————
Part 2: Checking the status of cluster resources

1. Check the health of the cluster
[root@node1 bin]# ./crsctl check cluster
2. Check the status of the database instances
[root@node1 bin]# ./srvctl status database -d orcldb
3. Check the status of the ASM instances
[root@node1 bin]# ./srvctl status asm
4. Check the status of the node applications
[root@node1 bin]# ./srvctl status nodeapps
5. Check the status of the listeners
[root@node1 bin]# ./srvctl status listener
6. Check the status of the SCAN VIP
[root@node1 bin]# ./srvctl status scan
7. Check clock synchronization across all cluster nodes (run as a non-root user)
[grid@node1 ~]$ cluvfy comp clocksync -verbose

—————————————————————————————————
Part 3: Viewing cluster configuration information

1. View the database configuration
[root@node1 bin]# ./srvctl config database -d orcldb
2. View the node application configuration
[root@node1 bin]# ./srvctl config nodeapps
3. View the ASM configuration
[root@node1 bin]# ./srvctl config asm
4. View the listener configuration
[root@node1 bin]# ./srvctl config listener
5. View the SCAN configuration
[root@node1 bin]# ./srvctl config scan
6. View the OCR (cluster registry) configuration
[root@node1 bin]# ./ocrcheck
7. View the voting disk configuration
[root@node1 bin]# ./crsctl query css votedisk

————————————————————————————
Detailed command output follows.

Startup and shutdown of an 11g R2 RAC cluster must be performed as the root user (by default the clusterware starts automatically at boot).
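If you need to confirm or change that autostart behaviour, the standard crsctl switches below can be used as root. These commands are a reference sketch and were not part of the original session:

-- Show whether Oracle High Availability Services autostart is enabled
[root@node1 bin]# ./crsctl config crs
-- Disable or re-enable autostart on this node
[root@node1 bin]# ./crsctl disable crs
[root@node1 bin]# ./crsctl enable crs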

-- Path where the commands are run
[root@node1 bin]# pwd
/u01/app/11.2.0/grid/bin


Part 1: Normal startup and shutdown of the 11g RAC cluster

-- Manually start the RAC cluster
[root@node1 bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node2'
CRS-2672: Attempting to start 'ora.diskmon' on 'node2'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node2' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node2'
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node2'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node2'
CRS-2676: Start of 'ora.evmd' on 'node2' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node2'
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-2676: Start of 'ora.asm' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node2'
CRS-2676: Start of 'ora.crsd' on 'node2' succeeded
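The -all flag starts the stack on every node at once. If only one node needs to be brought up, the same command accepts a node list instead; a reference example, not part of the original output:

[root@node1 bin]# ./crsctl start cluster -n node2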

-- Check the RAC cluster status
[root@node1 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS 
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE node1 
ONLINE ONLINE node2 
ora.FLASH.dg
ONLINE ONLINE node1 
ONLINE ONLINE node2 
ora.GRIDDG.dg
ONLINE ONLINE node1 
ONLINE ONLINE node2 
ora.ORCLDB.lsnr
ONLINE ONLINE node1 
ONLINE ONLINE node2 
ora.asm
ONLINE ONLINE node1 Started 
ONLINE ONLINE node2 Started 
ora.gsd
OFFLINE OFFLINE node1 
OFFLINE OFFLINE node2 
ora.net1.network
ONLINE ONLINE node1 
ONLINE ONLINE node2 
ora.ons
ONLINE ONLINE node1 
ONLINE ONLINE node2 
ora.registry.acfs
ONLINE ONLINE node1 
ONLINE ONLINE node2 
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node1 
ora.cvu
1 ONLINE ONLINE node1 
ora.node1.vip
1 ONLINE ONLINE node1 
ora.node2.vip
1 ONLINE ONLINE node2 
ora.oc4j
1 ONLINE ONLINE node1 
ora.orcldb.db
1 ONLINE ONLINE node1 Open 
2 ONLINE ONLINE node2 Open 
ora.scan1.vip
1 ONLINE ONLINE node1
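The listing above covers only the resources managed by CRSD. To also see the lower-stack daemons (cssd, ctssd, ASM, and so on) managed by OHASD, add the -init flag; a reference example not captured in the original session:

[root@node1 bin]# ./crsctl stat res -t -init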


-- Stop the RAC cluster
[root@node1 bin]# ./crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.ORCLDB.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.orcldb.db' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node1'
CRS-2677: Stop of 'ora.ORCLDB.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.scan1.vip' on 'node1' succeeded
CRS-2677: Stop of 'ora.orcldb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.orcldb.db' on 'node2'
CRS-2673: Attempting to stop 'ora.ORCLDB.lsnr' on 'node2'
CRS-2677: Stop of 'ora.FLASH.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.ORCLDB.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded
CRS-2677: Stop of 'ora.orcldb.db' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node2'
CRS-2677: Stop of 'ora.DATA.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded
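Note that crsctl stop cluster stops the CRS stack but leaves the OHAS daemon (ohasd) running on each node. To shut down the entire stack on one node, including ohasd, run the per-node command locally as root; a reference example:

[root@node1 bin]# ./crsctl stop crs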

———————————————————————————
Part 2: Commands for checking the status of 11g RAC cluster resources

1. View crsctl help
[root@node1 bin]# ./crsctl
Usage: crsctl <command> <object> [<options>]
command: enable|disable|config|start|stop|relocate|replace|status|add|delete|modify|getperm|setperm|check|set|get|unset|debug|lsmodules|query|pin|unpin|discover|release|request
For complete usage, use:
crsctl [-h | --help]
For detailed help on each command and object and its options use:
crsctl <command> <object> -h e.g. crsctl relocate resource -h

-- Detailed help
[root@node1 bin]# ./crsctl -h
Usage: crsctl add - add a resource, type or other entity
crsctl check - check a service, resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource, type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource, type or other entity
crsctl query - query service state
crsctl pin - pin the nodes in the node list
crsctl relocate - relocate a resource, server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource, server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource, server or other entity
crsctl unpin - unpin the nodes in the node list
crsctl unset - unset an entity value, restoring its default

2. Check the health of the cluster
[root@node1 bin]# ./crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
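Without arguments, crsctl check cluster reports on the local node. Two related checks that are often useful (reference examples, not taken from the original session):

-- Check CRS, CSS and EVM on every node
[root@node1 bin]# ./crsctl check cluster -all
-- Check the full local stack, including ohasd
[root@node1 bin]# ./crsctl check crs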

3. Check the status of the database instances
[root@node1 bin]# ./srvctl status database -d orcldb
Instance orcldb1 is running on node node1
Instance orcldb2 is running on node node2
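Per the srvctl usage shown later in this post, adding -v also lists the services running on each instance; a reference example (output will vary):

[root@node1 bin]# ./srvctl status database -d orcldb -v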

4. Check the status of the node applications
[root@node1 bin]# ./srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2

5. Check the status of the ASM instances
[root@node1 bin]# ./srvctl status asm
ASM is running on node2,node1
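Per the srvctl usage listed below, -a additionally reports whether ASM is enabled; a reference example not captured in the original session:

[root@node1 bin]# ./srvctl status asm -a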

6. Check the status of the listeners
[root@node1 bin]# ./srvctl status listener
Listener ORCLDB is enabled
Listener ORCLDB is running on node(s): node2,node1

7. Check the status of the SCAN VIP
[root@node1 bin]# ./srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node1
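The SCAN listener itself is a separate srvctl object; a reference example (compare with the scan_listener configuration shown in Part 3):

[root@node1 bin]# ./srvctl status scan_listener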

8. Check clock synchronization across all cluster nodes (run as a non-root user)
[grid@node1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
Node Name Status 
------------------------------------ ------------------------
node1 passed 
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
Node Name State 
------------------------------------ ------------------------
node1 Active 
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
Node Name Time Offset Status 
------------ ------------------------ ------------------------
node1 0.0 passed

Time offset is within the specified limits on the following set of nodes: 
"[node1]" 
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
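Only node1 appears in the tables above, so the check ran against the local node. To check every node explicitly, pass a node list; a reference example assuming the standard cluvfy -n option:

[grid@node1 ~]$ cluvfy comp clocksync -n all -verbose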

——————————————————————————————————————————
Part 3: Viewing cluster configuration information

1. View srvctl help
[root@node1 bin]# ./srvctl
Usage: srvctl <command> <object> [<options>]
commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config|convert|upgrade
objects: database|instance|service|nodeapps|vip|network|asm|diskgroup|listener|srvpool|server|scan|scan_listener|oc4j|home|filesystem|gns|cvu
For detailed help on each command and object and its options use:
srvctl <command> -h or
srvctl <command> <object> -h

-- Detailed srvctl command help
[root@node1 bin]# ./srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-c {RACONENODE | RAC | SINGLE} [-e <server_list>] [-i <inst_name>] [-w <timeout>]] [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"] [-j "<acfs_path_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ] [-v]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>] [-n <node>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-e <server_list>] [-w <timeout>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z] [-j "<acfs_path_list>"] [-f]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"
Usage: srvctl convert database -d <db_unique_name> -c RAC [-n <node>]
Usage: srvctl convert database -d <db_unique_name> -c RACONENODE [-i <inst_name>] [-w <timeout>]
Usage: srvctl relocate database -d <db_unique_name> {[-n <target>] [-w <timeout>] | -a [-r]} [-v]
Usage: srvctl upgrade database -d <db_unique_name> -o <oracle_home>
Usage: srvctl downgrade database -d <db_unique_name> -o <oracle_home> -t <to_version>
Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> -i <inst_name> [-f] [-y]
Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <pool_name> [-c {UNIFORM | SINGLETON}] } [-k <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>] [-t <edition>] [-f]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"} [-f]
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-v]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-g <pool_name>] [-c {UNIFORM | SINGLETON}] [-P {BASIC|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>] [-t <edition>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]
Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-e <em-port>] [-l <ons-local-port>] [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-u {static|dhcp|mixed}] [-e <em-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-g] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-g] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-g] [-v]
Usage: srvctl disable nodeapps [-g] [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]
Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name> | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl relocate vip -i <vip_name> [-n <node_name>] [-f] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"} [-v]
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]
Usage: srvctl add network [-k <net_num>] -S <subnet>/<netmask>/[if1[|if2...]] [-w <network_type>] [-v]
Usage: srvctl config network [-k <network_number>]
Usage: srvctl modify network [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2...]]] [-w <network_type>] [-v]
Usage: srvctl remove network {-k <network_number> | -a} [-f] [-v]
Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a] [-v]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>] 
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"
Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a] [-v]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]
Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>] [-v]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"
Usage: srvctl add scan -n <scan_name> [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2|...]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>] [-v]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]] 
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>] [-v]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]} 
Usage: srvctl remove scan_listener [-f] [-y]
Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"] [-f]
Usage: srvctl remove srvpool -g <pool_name>
Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>] [-v]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v] [-f]
Usage: srvctl remove oc4j [-f] [-v]
Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device> [-v]
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]
Usage: srvctl start gns [-l <log_level>] [-n <node_name>] [-v]
Usage: srvctl stop gns [-n <node_name>] [-f] [-v]
Usage: srvctl config gns [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V] [-q <name>] [-l] [-v]
Usage: srvctl status gns [-n <node_name>] [-v]
Usage: srvctl enable gns [-n <node_name>] [-v]
Usage: srvctl disable gns [-n <node_name>] [-v]
Usage: srvctl relocate gns [-n <node_name>] [-v]
Usage: srvctl add gns -d <domain> -i <vip_name|ip> [-v]
Usage: srvctl modify gns {-l <log_level> | [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-p <parameter>:<value>[,<parameter>:<value>...]] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>] [-v]}
Usage: srvctl remove gns [-f] [-v]
Usage: srvctl add cvu [-t <check_interval_in_minutes>]
Usage: srvctl config cvu
Usage: srvctl start cvu [-n <node_name>]
Usage: srvctl stop cvu [-f]
Usage: srvctl relocate cvu [-n <node_name>]
Usage: srvctl status cvu [-n <node_name>]
Usage: srvctl enable cvu [-n <node_name>]
Usage: srvctl disable cvu [-n <node_name>]
Usage: srvctl modify cvu -t <check_interval_in_minutes>
Usage: srvctl remove cvu [-f]

2. View the database configuration

-- Help for 'config database'
[root@node1 bin]# ./srvctl config database -h
Displays the configuration for the database.
Usage: srvctl config database [-d <db_unique_name> [-a] ] [-v]
-d <db_unique_name> Unique name for the database
-a Print detailed configuration information
-v Verbose output
-h Print usage

-- View the database configuration
[root@node1 bin]# ./srvctl config database -d orcldb
Database unique name: orcldb
Database name: orcldb
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/orcldb/spfileorcldb.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcldb
Database instances: orcldb1,orcldb2
Disk Groups: DATA,FLASH
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
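Per the help text above, -a prints more detail (for example whether the database is enabled on each node), and omitting -d lists every database registered in the cluster; reference examples not taken from the original session:

[root@node1 bin]# ./srvctl config database
[root@node1 bin]# ./srvctl config database -d orcldb -a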

3. View the node application configuration
[root@node1 bin]# ./srvctl config nodeapps
Network exists: 1/10.0.0.0/255.0.0.0/eth0, type static
VIP exists: /node1-vip/10.100.25.10/10.0.0.0/255.0.0.0/eth0, hosting node node1
VIP exists: /node2-vip/10.100.25.11/10.0.0.0/255.0.0.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

4. View the ASM configuration
[root@node1 bin]# ./srvctl config asm
ASM home: /u01/app/11.2.0/grid
ASM listener: ORCLDB

5. View the listener configuration
[root@node1 bin]# ./srvctl config listener
Name: ORCLDB
Network: 1, Owner: grid
Home: <CRS home>
End points: TCP:1521

6. View the SCAN configuration
[root@node1 bin]# ./srvctl config scan
SCAN name: scan-cluster.localdomain, Network: 1/10.0.0.0/255.0.0.0/eth0
SCAN VIP name: scan1, IP: /scan-cluster.localdomain/10.100.25.100
[root@node1 bin]# ./srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

7. View the OCR (cluster registry) configuration
[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3100
Available space (kbytes) : 259020
ID : 1970085021
Device/File Name : +GRIDDG
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured

Cluster registry integrity check succeeded
Logical corruption check succeeded
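ocrcheck reports the current OCR location and its integrity. The automatic OCR backups taken by the clusterware can be listed with ocrconfig; a reference example, run as root:

[root@node1 bin]# ./ocrconfig -showbackup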

8. View the voting disk configuration
[root@node1 bin]# ./crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 97b3037ba6684f0bbf04fa53aa7efb37 (ORCL:VOL1) [GRIDDG]
Located 1 voting disk(s).
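Two related queries that are often checked alongside the OCR and voting disks are the installed and active clusterware versions; reference examples not taken from the original session:

[root@node1 bin]# ./crsctl query crs activeversion
[root@node1 bin]# ./crsctl query crs softwareversion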