
Oracle 12c R2 RAC Installation and Configuration Guide


>> from zhuhaiqing.info

Minimum ASM disk space requirements

Compared with the previous release, 12c R2 noticeably increases the disk space needed for the OCR.
For convenience, size the disk groups as follows:
External: 1 volume x 40G
Normal: 3 volumes x 30G
High: 5 volumes x 25G
Flex: 3 volumes x 30G
The OCR, voting files, and MGMT repository are usually placed in a single disk group with Normal redundancy, i.e. a minimum of 3 ASM disks and 80G of space.

Operating System Installation

During OS installation, check "Server with GUI" and "Compatibility Libraries"; nothing else needs to be selected.
Use CentOS 7, RHEL 7, or Oracle Linux 7.

Install the Oracle preinstall package

wget http://yum.oracle.com/public-yum-ol7.repo -P /etc/yum.repos.d/
yum install -y oracle-rdbms-server-12cR1-preinstall
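Note that the command above installs the 12c R1 preinstall package. On Oracle Linux 7 a 12c R2-specific package also exists and is the closer match if your repository carries it (worth checking before falling back to the R1 package):
yum install -y oracle-database-server-12cR2-preinstall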

Create Users and Groups

The oracle user and the dba and oinstall groups were already created by the preinstall package in the previous step.
The uid and gid of the oracle and grid users must be identical across all RAC nodes, so it is best to specify them explicitly when creating the accounts.

groupadd --gid 54323 asmdba
groupadd --gid 54324 asmoper
groupadd --gid 54325 asmadmin
groupadd --gid 54326 oper
groupadd --gid 54327 backupdba
groupadd --gid 54328 dgdba
groupadd --gid 54329 kmdba
usermod --uid 54321 --gid oinstall --groups dba,oper,asmdba,asmoper,backupdba,dgdba,kmdba oracle
useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper grid
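It is worth verifying on every node that the IDs really do match:
# id oracle
# id grid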

Installation Directories

mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

Environment Variables

grid user environment variables

cat <<'EOF' >>/home/grid/.bash_profile
ORACLE_SID=+ASM1
ORACLE_HOME=/u01/app/12.2.0/grid
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export ORACLE_SID CLASSPATH ORACLE_HOME LD_LIBRARY_PATH PATH

EOF

Note: the heredoc delimiter is quoted ('EOF') so that $ORACLE_HOME and $PATH are written to the profile literally instead of being expanded by the current shell; the unquoted form would bake in empty values. ORACLE_HOME is also corrected to /u01/app/12.2.0/grid, the directory created above.

On node 2, set ORACLE_SID=+ASM2.

oracle user environment variables

cat <<'EOF' >>/home/oracle/.bash_profile
ORACLE_SID=starboss1
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
ORACLE_HOSTNAME=rac01
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export ORACLE_SID ORACLE_HOME ORACLE_HOSTNAME PATH LD_LIBRARY_PATH CLASSPATH
EOF

(Again the quoted 'EOF' keeps the variables literal. ORACLE_HOME uses dbhome_1 to match the Oracle home shown in the cluster status later in this guide.)

On node 2, set ORACLE_SID=starboss2 and ORACLE_HOSTNAME=rac02.

Modify logind.conf

# vi /etc/systemd/logind.conf
RemoveIPC=no
# systemctl daemon-reload
# systemctl restart systemd-logind

Load the pam_limits.so module

echo "session required pam_limits.so" >> /etc/pam.d/login

Disable SELinux

setenforce 0
vi /etc/sysconfig/selinux    # set SELINUX=disabled (takes effect on the next boot)
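Equivalently, a non-interactive one-liner (on EL7 /etc/sysconfig/selinux is a symlink to /etc/selinux/config; adjust the path if yours differs):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux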

Disable the firewall

# systemctl stop firewalld && systemctl disable firewalld

Modify ulimit

cat <<EOF >> /etc/security/limits.d/99-grid-oracle-limits.conf
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
EOF

Create custom ulimit settings

cat <<'EOF' >> /etc/profile.d/oracle-grid.sh
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
EOF
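(The quoted 'EOF' keeps $USER and $SHELL from being expanded while writing the script.) A quick way to confirm the limits are applied to a fresh login shell — the values should match the settings above:
# su - oracle -c 'ulimit -u -n -s'
# su - grid -c 'ulimit -u -n -s'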

Resize the shared memory filesystem

Add the following entry to /etc/fstab, adjusting the size to your environment, since it depends on physical memory and MEMORY_TARGET:
echo "shm /dev/shm tmpfs size=12g 0 0" >> /etc/fstab
After the change, simply remount shm:
mount -o remount /dev/shm
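A quick check that the new size took effect:
# df -h /dev/shm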

Multipathing

# yum install device-mapper-multipath
# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/

Get the SCSI ID of each device:
# /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda
# vi /etc/multipath.conf
multipaths {
    multipath {
        wwid 36000d310012522000000000000000006
        alias vol01
    }
    multipath {
        wwid 36000d310012522000000000000000005
        alias vol02
    }
}
# systemctl start multipathd.service
# multipath -ll

Configure ASM Disks

ASMLib method

Install ASMLib:
Oracle Linux 7
yum install -y kmod-oracleasm
CentOS 7
yum install -y http://mirror.centos.org/centos/7/os/x86_64/Packages/kmod-oracleasm-2.0.8-17.el7.centos.x86_64.rpm

yum install -y http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
yum install -y http://public-yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracleasm-support-2.1.8-3.1.el7.x86_64.rpm

Downloads for other versions:
http://www.oracle.com/technetwork/server-storage/linux/asmlib/index-101839.html

ASM disk configuration

12c R2 requires more disk group space than 12c R1.

[root@rac01 ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac01 ~]# reboot

Create a primary partition on each shared disk with fdisk:
[root@rac01 ~]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x86f899a0.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-39976959, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-39976959, default 39976959):
Using default value 39976959
Partition 1 of type Linux and of size 19.1 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create the ASM disks on any one node of the cluster:
[root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR01 /dev/sdd1
Marking disk "OCR01" as an ASM disk: [ OK ]
[root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR02 /dev/sde1
Marking disk "OCR02" as an ASM disk: [ OK ]
[root@rac01 ~]# /etc/init.d/oracleasm createdisk OCR03 /dev/sdf1
Marking disk "OCR03" as an ASM disk: [ OK ]
[root@rac01 ~]# /etc/init.d/oracleasm createdisk DATA01 /dev/sdb1
Marking disk "DATA01" as an ASM disk: [ OK ]
[root@rac01 ~]# /etc/init.d/oracleasm createdisk DATA02 /dev/sdc1
Marking disk "DATA02" as an ASM disk: [ OK ]
Then run on each of the two nodes:
[root@rac01 ~]# /etc/init.d/oracleasm scandisks
[root@rac01 ~]# /etc/init.d/oracleasm listdisks

Note:
To wipe the disks and redeploy ASM, use the dd command, e.g.:
dd if=/dev/zero of=/dev/sdb1 bs=8192 count=128000

UDEV method

The procedure differs between CentOS 6 and CentOS 7, as follows:

Confirm that the required udev package is installed on all RAC nodes:
[root@rh2 ~]# rpm -qa|grep udev
udev-095-14.21.el5

CentOS 6/Oracle Linux 6/RHEL 6

1. Use scsi_id to obtain the unique identifier of each block device; this assumes LUNs sdc through sdp already exist on the system:
for i in c d e f g h i j k l m n o p ;
do
echo "sd$i" "`scsi_id -g -u -s /block/sd$i` ";
done

sdc 1IET_00010001
sdd 1IET_00010002
sde 1IET_00010003
sdf 1IET_00010004

The output above pairs each block device name with its unique identifier.

2. Create the required UDEV rules file.
First switch to the rules directory:
[root@rh2 ~]# cd /etc/udev/rules.d
Then create the rules file:
[root@rh2 rules.d]# touch 99-oracle-asmdevices.rules
[root@rh2 rules.d]# cat 99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010001", NAME="ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010002", NAME="ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010003", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010004", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"

RESULT is the output of /sbin/scsi_id -g -u -s %p; fill in the identifiers obtained above, in order.
OWNER is normally grid, GROUP is asmadmin, and MODE sets the device permissions; 0660 is sufficient.
NAME is the device name after UDEV mapping.
It is advisable to create a separate DISKGROUP for the OCR and voting disks; to make them easy to identify, name that disk group's devices ocr1..ocrn.
The remaining disks can be named after their actual purpose or their disk group.

3. Copy the rules file to the other nodes:
[root@rh2 rules.d]# scp 99-oracle-asmdevices.rules Other_node:/etc/udev/rules.d

4. Reload udev on all nodes, or simply reboot the servers:

[root@rh2 rules.d]# /sbin/udevcontrol reload_rules
[root@rh2 rules.d]# /sbin/start_udev
Starting udev: [ OK ]

5. Check that the devices are in place:

[root@rh2 rules.d]# cd /dev
[root@rh2 dev]# ls -l ocr*
brw-rw---- 1 grid asmadmin 8, 32 Jul 10 17:31 ocr1
brw-rw---- 1 grid asmadmin 8, 48 Jul 10 17:31 ocr2
[root@rh2 dev]# ls -l asm-disk*
brw-rw---- 1 grid asmadmin 8, 64 Jul 10 17:31 asm-disk1
brw-rw---- 1 grid asmadmin 8, 80 Jul 10 17:31 asm-disk2
brw-rw---- 1 grid asmadmin 8, 96 Jul 10 17:31 asm-disk3
brw-rw---- 1 grid asmadmin 8, 112 Jul 10 17:31 asm-disk4

CentOS 7/Oracle Linux 7/RHEL 7

Get the block device IDs:
# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
14f504e46494c45526a75744363422d796357662d4b436a65
# /usr/lib/udev/scsi_id -g -u -d /dev/sdc1
14f504e46494c455254535a7a414d2d62494b6f2d5a6f6a42
# /usr/lib/udev/scsi_id -g -u -d /dev/sdd1
14f504e46494c45526566324e626c2d4770654c2d6b443064
# /usr/lib/udev/scsi_id -g -u -d /dev/sde1
14f504e46494c455266326e7547552d384953442d6135576a
# /usr/lib/udev/scsi_id -g -u -d /dev/sdf1
14f504e46494c4552774263526f742d534a75392d36374f69

Create the scsi_id configuration file:
echo "options=-g" > /etc/scsi_id.config

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526a75744363422d796357662d4b436a65", SYMLINK+="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455254535a7a414d2d62494b6f2d5a6f6a42", SYMLINK+="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c45526566324e626c2d4770654c2d6b443064", SYMLINK+="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c455266326e7547552d384953442d6135576a", SYMLINK+="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="14f504e46494c4552774263526f742d534a75392d36374f69", SYMLINK+="asm-disk5", OWNER="grid", GROUP="asmadmin", MODE="0660"


Reload the block device partition tables:
# /sbin/partprobe /dev/sdb1
# /sbin/partprobe /dev/sdc1
# /sbin/partprobe /dev/sdd1
# /sbin/partprobe /dev/sde1
# /sbin/partprobe /dev/sdf1

Test the udev rules:
# /sbin/udevadm test /block/sdb/sdb1
# /sbin/udevadm test /block/sdc/sdc1
# /sbin/udevadm test /block/sdd/sdd1
# /sbin/udevadm test /block/sde/sde1
# /sbin/udevadm test /block/sdf/sdf1

Reload the udev rules:
# /sbin/udevadm control --reload-rules
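If the symlinks do not show up after the reload, re-triggering the block device events usually helps (an optional extra step, not in the original procedure):
# /sbin/udevadm trigger --type=devices --action=change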

Check that the symlinks were created:
[root@udev ~]# ls -l /dev/asm-disk*
lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk1 -> sdb1
lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk2 -> sdc1
lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk3 -> sdd1
lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk4 -> sde1
lrwxrwxrwx 1 root root 4 Aug 22 13:19 /dev/asm-disk5 -> sdf1

Disable NTP

/sbin/service ntpd stop

chkconfig ntpd off

mv /etc/ntp.conf /etc/ntp.conf.org
rm /var/run/ntpd.pid
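On EL7 the default time service is chronyd rather than ntpd, so if you want Oracle CTSS to take over time synchronization it should be disabled as well (a suggested addition to the steps above):
systemctl stop chronyd && systemctl disable chronyd
mv /etc/chrony.conf /etc/chrony.conf.org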

Stop the avahi-daemon service

systemctl stop avahi-dnsconfd
systemctl stop avahi-daemon
systemctl disable avahi-dnsconfd
systemctl disable avahi-daemon

IP Configuration

Without a DNS server, name resolution falls back to the hosts file, which can hold only one SCAN IP; clients then connect to just one RAC node and get no load balancing. DNS configuration is described later in this guide.

#public, uplinked to the production switch, bonded
192.168.245.134 rac01
192.168.245.140 rac02

#private, direct-attached interconnect (heartbeat), bonded
10.0.1.1 rac01-priv
10.0.1.2 rac02-priv

#virtual
192.168.245.136 rac01-vip
192.168.245.142 rac02-vip

#scan-ip, oracle rac service
192.168.245.135 rac-cluster-scan

Install the cvuqdisk package

The package is found in the rpm folder of the database installation media.

rpm -ivh cvuqdisk-1.0.10-1.rpm

Install GI

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), GI installation is image-based: the Grid distribution Oracle ships is essentially a pre-built ORACLE_HOME.
We therefore unzip the Grid installation file directly into the pre-created Grid ORACLE_HOME and then run gridSetup.sh to launch the GUI and configure Grid.
# su - grid
$ cd /u01/app/12.2.0/grid
$ unzip /oracle_soft/grid_12201.zip
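Before launching the installer, the Cluster Verification Utility shipped at the top of the unzipped Grid home can optionally pre-check all nodes (adjust the node list to your environment):
$ ./runcluvfy.sh stage -pre crsinst -n rac01,rac02 -verbose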
$ ./gridSetup.sh
Select "Configure Oracle Grid Infrastructure for a New Cluster" and click Next.

If no DNS or GNS service is configured in the environment, the prerequisite check reports DNS and resolv.conf errors; since we resolve names via the hosts file, these can be skipped.

Unpack the installation files on one machine and start the GI installation there; the installer copies the files to the other nodes and installs them in sync.

At the end of the installation, run the root scripts as root: they must first be run one at a time on the local node, and only after they succeed there can they be run in parallel on the remaining nodes.
[root@rac01 ~]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac01 ~]# sh /u01/app/12.2.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2017-08-16_02-48-07PM.log
2017/08/16 14:48:16 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/08/16 14:48:16 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/08/16 14:48:59 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/08/16 14:48:59 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/08/16 14:49:04 CLSRSC-363: User ignored prerequisites during installation
2017/08/16 14:49:04 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/08/16 14:49:05 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/08/16 14:49:06 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/08/16 14:49:12 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/08/16 14:49:13 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/08/16 14:49:13 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/08/16 14:49:44 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/08/16 14:49:51 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/08/16 14:49:52 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/08/16 14:49:57 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/08/16 14:50:12 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/08/16 14:50:35 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/08/16 14:50:40 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/08/16 14:51:20 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/08/16 14:51:25 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac01'
CRS-2676: Start of 'ora.mdnsd' on 'rac01' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac01'
CRS-2676: Start of 'ora.gpnpd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac01'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac01'
CRS-2676: Start of 'ora.diskmon' on 'rac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac01' succeeded

Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170816PM025203.log for details.

2017/08/16 14:52:52 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac01'
CRS-2672: Attempting to start 'ora.storage' on 'rac01'
CRS-2676: Start of 'ora.storage' on 'rac01' succeeded
CRS-2676: Start of 'ora.crf' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 252d21a926494fd5bfdcbc163b9fd646.
Successful addition of voting disk 6f00d3b3ba454f14bfc15f10a6466e3e.
Successful addition of voting disk 5aed4ef45df94ff1bf4934d8883d39a3.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 252d21a926494fd5bfdcbc163b9fd646 (/dev/oracleasm/disks/OCR03) [DATA]
2. ONLINE 6f00d3b3ba454f14bfc15f10a6466e3e (/dev/oracleasm/disks/OCR02) [DATA]
3. ONLINE 5aed4ef45df94ff1bf4934d8883d39a3 (/dev/oracleasm/disks/OCR01) [DATA]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac01'
CRS-2677: Stop of 'ora.crsd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac01'
CRS-2673: Attempting to stop 'ora.crf' on 'rac01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac01' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac01'
CRS-2677: Stop of 'ora.mdnsd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac01'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac01'
CRS-2677: Stop of 'ora.evmd' on 'rac01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac01'
CRS-2677: Stop of 'ora.cssd' on 'rac01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac01'
CRS-2677: Stop of 'ora.gipcd' on 'rac01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/08/16 14:54:18 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac01'
CRS-2672: Attempting to start 'ora.evmd' on 'rac01'
CRS-2676: Start of 'ora.mdnsd' on 'rac01' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac01'
CRS-2676: Start of 'ora.gpnpd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac01'
CRS-2676: Start of 'ora.gipcd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac01'
CRS-2676: Start of 'ora.diskmon' on 'rac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac01'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac01'
CRS-2676: Start of 'ora.ctssd' on 'rac01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac01'
CRS-2676: Start of 'ora.asm' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac01'
CRS-2676: Start of 'ora.storage' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac01'
CRS-2676: Start of 'ora.crf' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac01
CRS-6016: Resource auto-start has completed for server rac01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/08/16 14:57:01 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/08/16 14:57:01 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac01'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac01'
CRS-2676: Start of 'ora.asm' on 'rac01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac01'
CRS-2676: Start of 'ora.DATA.dg' on 'rac01' succeeded
2017/08/16 15:01:35 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/08/16 15:04:36 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Create the data disk group with asmca

After GI installation completes, use asmca to create the ASM disk group that will hold the business database, in preparation for the Oracle Database installation.
# su - grid
$ /u01/app/12.2.0/grid/bin/asmca
In the ASM configuration assistant, open the Disk Groups page; the mounted OCR disk group is visible and the ASM instances on both nodes are UP.

Click Create to build the ASM disk group for the business database: name it DATA, select the disks created earlier, and click OK.

The finished result.
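The same disk group can also be created without the GUI. A minimal sketch using asmca in silent mode, assuming the ASMLib disk names created earlier:
$ /u01/app/12.2.0/grid/bin/asmca -silent -createDiskGroup -diskGroupName DATA -disk 'ORCL:DATA01' -disk 'ORCL:DATA02' -redundancy EXTERNAL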

Install the Oracle Database software

With Grid in place, the next step is to install the Oracle Database software and create the business database instances.
Upload linuxx64_12201_database.zip to any directory on rac01, unzip it, and launch runInstaller as the oracle user.
Once the installation starts, the installer copies the software to the remaining nodes and installs it there in sync.

Step 5: the default is "Policy managed"; unless you have special requirements, choose "Admin managed".
Step 6: enter the oracle password first, then click Setup; the installer configures passwordless SSH between the nodes' oracle users automatically.

On machines with plenty of memory, it is generally better not to enable Automatic Memory Management; just adjust the memory available to Oracle.

Since no DNS is installed and GNS is not used, the resolv.conf error can be ignored.

Cluster status after installation

[grid@rac01 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.DATA.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.OCR.dg
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.chad
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.net1.network
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.ons
ONLINE ONLINE rac01 STABLE
ONLINE ONLINE rac02 STABLE
ora.proxy_advm
OFFLINE OFFLINE rac01 STABLE
OFFLINE OFFLINE rac02 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac01 169.254.107.91 10.0.0.1,STABLE
ora.asm
1 ONLINE ONLINE rac01 Started,STABLE
2 ONLINE ONLINE rac02 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE rac01 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac01 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE rac01 STABLE
ora.rac01.vip
1 ONLINE ONLINE rac01 STABLE
ora.rac02.vip
1 ONLINE ONLINE rac02 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac01 STABLE
ora.starboss.db
1 ONLINE ONLINE rac02 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
2 ONLINE ONLINE rac01 Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE

--------------- Appendix ---------------

Starting and Stopping the RAC Database Cluster

The RAC database is fully automatic: when the operating system boots, the ASM devices mount automatically and the database starts along with them.
To start or stop the database manually, refer to the following.

Start and stop Oracle database instances

Listeners:
[root@RAC01 ~]$ srvctl start listener    # start the listeners
[root@RAC01 ~]$ srvctl stop listener     # stop the listeners

Database:
[root@RAC01 ~]$ srvctl start database -d starboss    # start the database
[root@RAC01 ~]$ srvctl stop database -d starboss     # stop the database
or
[root@RAC01 ~]$ srvctl stop database -d starboss -o immediate                 # immediate shutdown
[root@RAC01 ~]$ srvctl start database -d starboss -o open/mount/'read only'   # start to open, mount, or read-only mode
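To check the current state of the instances and listeners:
[root@RAC01 ~]$ srvctl status database -d starboss
[root@RAC01 ~]$ srvctl status listener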

Start and stop the Oracle RAC cluster

This stops the database together with all other cluster services (ASM instances, VIPs, listeners, and the RAC high-availability stack):
[root@rac01 ~]$ crsctl start cluster -all    # start
[root@rac01 ~]$ crsctl stop cluster -all     # stop
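Note that crsctl stop cluster -all leaves the OHASD stack itself running; to shut down the entire Clusterware stack on one node, run the following as root on that node:
[root@rac01 ~]# crsctl stop crs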

Increase swap size

[root@rac02 grid]# free -m
total used free shared buff/cache available
Mem: 11757 136 5078 8 6542 11539
Swap: 6015 0 6015
[root@rac02 grid]# mkdir /swap
[root@rac02 grid]# dd if=/dev/zero of=/swap/swap bs=1024 count=6291456    # 1 KiB blocks; 6291456 blocks = 6 GiB
6291456+0 records in
6291456+0 records out
6442450944 bytes (6.4 GB) copied, 8.93982 s, 721 MB/s
[root@rac02 grid]# /sbin/mkswap /swap/swap
Setting up swapspace version 1, size = 6291452 KiB
no label, UUID=35c98431-eb56-4ad7-99cd-d3414cce75ca
[root@rac02 grid]# /sbin/swapon /swap/swap
swapon: /swap/swap: insecure permissions 0644, 0600 suggested.
[root@rac02 grid]# free -m
total used free shared buff/cache available
Mem: 11757 141 5074 8 6542 11534
Swap: 12159 0 12159
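The swapon warning above can be silenced by tightening permissions before enabling the file, and an fstab entry makes the swap file persist across reboots (a suggested addition):
[root@rac02 grid]# chmod 600 /swap/swap
[root@rac02 grid]# echo "/swap/swap swap swap defaults 0 0" >> /etc/fstab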

Check the voting disks

[grid@rac01 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 95b79a3ef6274fdebfe1d1323f0cc829 (/dev/oracleasm/disks/OCR03) [OCR]
2. ONLINE 404499d583f04f15bf24c89a4269bbe9 (/dev/oracleasm/disks/OCR02) [OCR]
3. ONLINE 6e010b265aee4f15bfd1d4260ab5ac9c (/dev/oracleasm/disks/OCR01) [OCR]
Located 3 voting disk(s).

RAC service check

Run the following command as the grid user on any node:
[grid@rac01 ~]$ crsctl check cluster -all
**************************************************************
rac01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Manually relocate the SCAN to another node

/u01/app/12.2.0/grid/bin/srvctl relocate scan_listener -i 1 -n rac02
After this completes, both scan_listener and scan_vip move to the specified node.
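To see which node currently hosts the SCAN listener and SCAN VIP:
$ srvctl status scan_listener
$ srvctl status scan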

Configure EM Express access

SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5501)
SQL> exec DBMS_XDB_CONFIG.SETHTTPPORT(5500)
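The assigned ports can then be verified with the matching getter functions:
SQL> select DBMS_XDB_CONFIG.GETHTTPSPORT() from dual;
SQL> select DBMS_XDB_CONFIG.GETHTTPPORT() from dual;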

DNS Configuration

The advantage of resolving the SCAN IPs via DNS rather than the hosts file: a hosts file can hold only one SCAN IP, which means external programs can reach only one node of the RAC cluster, whereas DNS can hold multiple SCAN IPs that resolve to any node, providing load balancing.

Configure the DNS server

[root@rac-dns ~]# cat /etc/named.conf
...

options {
listen-on port 53 { 192.168.32.119; };    // the DNS server's address
// listen-on-v6 port 53 { ::1; };    // IPv6; comment this out
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };

...

Add the forward and reverse zone definitions

[root@rac-dns ~]# cat /etc/named.rfc1912.zones
...

zone "32.168.192.in-addr.arpa" IN {    // reverse zone names must use this format, with the IP octets reversed
type master;
file "32.168.192.in-addr.arpa";    // the file name itself is arbitrary
allow-update { none; };
};

zone "oracle.local" IN {
type master;
file "oracle.local.zone";
allow-update { none; };
};

Forward zone

[root@rac-dns ~]# cat /var/named/oracle.local.zone
$TTL 86400
@ IN SOA dns.oracle.local. root.oracle.local.(
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
@ IN NS dns.oracle.local.
dns IN A 192.168.32.119
rac01 IN A 192.168.32.110
rac02 IN A 192.168.32.113
rac-cluster-scan IN A 192.168.32.120
rac-cluster-scan IN A 192.168.32.121
rac-cluster-scan IN A 192.168.32.122
rac01-vip IN A 192.168.32.115
rac02-vip IN A 192.168.32.116

* The DNS server's own A record must be included as well.

Reverse zone

[root@rac-dns ~]# cat /var/named/32.168.192.in-addr.arpa
$TTL 86400
@ IN SOA dns.oracle.local. root.oracle.local. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
@ IN NS dns.oracle.local.
110 IN PTR rac01.oracle.local.
113 IN PTR rac02.oracle.local.
120 IN PTR rac-cluster-scan.oracle.local.
121 IN PTR rac-cluster-scan.oracle.local.
122 IN PTR rac-cluster-scan.oracle.local.
115 IN PTR rac01-vip.oracle.local.
116 IN PTR rac02-vip.oracle.local.

Notes:
1. The first column is the host octet of the IP; the fourth column is the corresponding domain name.
2. The IP prefix in the reverse zone file name must be written in reverse octet order, otherwise clients will fail to resolve.

Start the DNS service

# systemctl start named
[root@rac-dns ~]# systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2017-08-25 15:05:32 CST; 1s ago
Process: 13679 ExecStop=/bin/sh -c /usr/sbin/rndc stop > /dev/null 2>&1 || /bin/kill -TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 13691 ExecStart=/usr/sbin/named -u named $OPTIONS (code=exited, status=0/SUCCESS)
Process: 13688 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z /etc/named.conf; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
Main PID: 13694 (named)
CGroup: /system.slice/named.service
└─13694 /usr/sbin/named -u named

Aug 25 15:05:32 rac-dns named[13694]: zone 0.in-addr.arpa/IN: loaded serial 0
Aug 25 15:05:32 rac-dns systemd[1]: Started Berkeley Internet Name Domain (DNS).
Aug 25 15:05:32 rac-dns named[13694]: zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
Aug 25 15:05:32 rac-dns named[13694]: zone localhost/IN: loaded serial 0
Aug 25 15:05:32 rac-dns named[13694]: zone 32.168.192.in-addr.arpa/IN: loaded serial 1997022700
Aug 25 15:05:32 rac-dns named[13694]: zone localhost.localdomain/IN: loaded serial 0
Aug 25 15:05:32 rac-dns named[13694]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
Aug 25 15:05:32 rac-dns named[13694]: zone oracle.local/IN: loaded serial 42
Aug 25 15:05:32 rac-dns named[13694]: all zones loaded
Aug 25 15:05:32 rac-dns named[13694]: running

Client hosts file

With DNS resolution enabled, a client's hosts configuration looks roughly like this:
cat /etc/hosts
#public
192.168.32.110 rac01.oracle.local rac01
192.168.32.113 rac02.oracle.local rac02

#private
10.0.0.1 rac01-priv
10.0.0.2 rac02-priv

#virtual
192.168.32.115 rac01-vip.oracle.local rac01-vip
192.168.32.116 rac02-vip.oracle.local rac02-vip

DNS test

To point a client at the DNS server, just append the search domain and the server address to resolv.conf:
# echo "search oracle.local" >> /etc/resolv.conf
# echo "nameserver 192.168.32.119" >> /etc/resolv.conf

Forward lookup test
[root@rac02 ~]# nslookup rac01.oracle.local
Server: 192.168.32.119
Address: 192.168.32.119#53

Name: rac01.oracle.local
Address: 192.168.32.110

Reverse lookup test
[root@rac02 ~]# nslookup 192.168.32.110
Server: 192.168.32.119
Address: 192.168.32.119#53

110.32.168.192.in-addr.arpa name = rac01.oracle.local.

GNS Configuration

With GNS, the system assigns the VIPs automatically; all it takes is installing a DHCP service on the DNS server. In my view the benefit for a real RAC cluster is modest, icing on the cake at best, and running GNS ties up an extra server.

Check the dhcp package

# rpm --query dhcp

dhcp-3.0.5-18.el5

Configure the DHCP service

# vi /etc/dhcp/dhcpd.conf

ddns-update-style interim;
ignore client-updates;

subnet 192.168.32.0 netmask 255.255.255.0 {
  option routers 192.168.32.1;                 # clients' default gateway
  option subnet-mask 255.255.255.0;            # clients' subnet mask
  option broadcast-address 192.168.32.255;     # broadcast address
  option domain-name "oracle.local";           # DNS search domain
  option domain-name-servers 192.168.32.119;   # DNS server address
  range 192.168.32.2 192.168.32.254;           # address range handed out by DHCP
  default-lease-time 21600;                    # default lease time
  max-lease-time 43200;                        # maximum lease time
}

Enable and start the DHCP service

[root@rac-dns ~]# systemctl enable dhcpd.service && systemctl start dhcpd.service

Configure GNS in the GI installer

In step 3 of the GI installation, "Grid Plug and Play":

Check "Configure GNS", "Configure nodes Virtual IPs as assigned by the Dynamic Networks", and "Create a new GNS".

GNS VIP Address: the IP address of the GNS server

GNS Sub Domain: the DNS search domain. Then comment out the VIP entries in each node's /etc/hosts file.
