Installing Oracle 12c+ RAC on the RHEL platform
For a single-instance, file-system-based deployment, simply skip the GI and ASM configuration sections.
Cluster configuration types
- Oracle Standalone Clusters: a cluster that hosts all Oracle Grid Infrastructure services and Oracle ASM locally and requires direct access to shared storage.
- Oracle Cluster Domain: a deployment option for the cluster architecture newly introduced in Oracle Clusterware 12c Release 2.
- Oracle Member Clusters: use the centralized services of an Oracle Domain Services Cluster and can be used to run databases or applications.
- Oracle Extended Clusters: a cluster whose nodes are spread across multiple geographically separated sites.
1. Checklist
1.1 Hardware requirements
1) Check memory
# grep MemTotal /proc/meminfo
2) Check swap space
grep SwapTotal /proc/meminfo
3) Check free space in /tmp
df -h /tmp
4) Check shared memory (/dev/shm)
df -h /dev/shm
cluvfy fails with error: "PRVE-0427 : Failed To Retrieve The Size Of In-memory File System Mounted As /dev/shm" (Doc ID 2243758.1)
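The checks above can be combined into one small script that compares the values against documented minimums. The swap sizing rule (swap equal to RAM when RAM is at most 16 GB, otherwise 16 GB) follows the 12cR2 installation guide; treat the thresholds as assumptions to adjust for your release.

```shell
#!/bin/bash
# Sketch: compare memory and swap against Oracle's documented minimums.
# Thresholds follow the 12cR2 guide and are assumptions -- adjust per release.
mem_kb=$(awk '/^MemTotal/{print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal/{print $2}' /proc/meminfo)
mem_gb=$(( mem_kb / 1024 / 1024 ))

# Required swap: equal to RAM when RAM <= 16 GB, otherwise 16 GB
if [ "${mem_gb}" -le 16 ]; then
    req_swap_kb=${mem_kb}
else
    req_swap_kb=$(( 16 * 1024 * 1024 ))
fi

echo "MemTotal : ${mem_kb} kB"
echo "SwapTotal: ${swap_kb} kB (required >= ${req_swap_kb} kB)"
if [ "${swap_kb}" -ge "${req_swap_kb}" ]; then
    echo "swap: OK"
else
    echo "swap: TOO SMALL"
fi
```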
5) Local disk capacity
Create the software installation file system
- Create a physical volume
[root@temp ~]# pvscan
  PV /dev/sda2   VG rootvg          lvm2 [<63.50 GiB / 48.80 GiB free]
  Total: 1 [<63.50 GiB] / in use: 1 [<63.50 GiB] / in no VG: 0 [0   ]
[root@temp ~]# lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   64G  0 disk
├─sda1              8:1    0  512M  0 part /boot
└─sda2              8:2    0 63.5G  0 part
  ├─rootvg-rootlv 253:0    0    8G  0 lvm  /
  ├─rootvg-swaplv 253:1    0    4G  0 lvm  [SWAP]
  ├─rootvg-homelv 253:2    0  512M  0 lvm  /home
  ├─rootvg-varlv  253:3    0    2G  0 lvm  /var
  └─rootvg-usrlv  253:4    0  200M  0 lvm  /usr/local
sdb                 8:16   0   64G  0 disk
sr0                11:0    1 1024M  0 rom
[root@temp ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@temp ~]# pvscan
  PV /dev/sda2   VG rootvg          lvm2 [<63.50 GiB / 48.80 GiB free]
  PV /dev/sdb                       lvm2 [64.00 GiB]
  Total: 2 [<127.50 GiB] / in use: 1 [<63.50 GiB] / in no VG: 1 [64.00 GiB]
- Create a volume group
[root@temp ~]# vgscan
  Reading volume groups from cache.
  Found volume group "rootvg" using metadata type lvm2
[root@temp ~]# vgcreate upsvg /dev/sdb
  Volume group "upsvg" successfully created
[root@temp ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  rootvg   1   5   0 wz--n- <63.50g  48.80g
  upsvg    1   0   0 wz--n- <64.00g <64.00g
- Create a logical volume
[root@temp ~]# lvscan
  ACTIVE            '/dev/rootvg/rootlv' [8.00 GiB] inherit
  ACTIVE            '/dev/rootvg/swaplv' [4.00 GiB] inherit
  ACTIVE            '/dev/rootvg/homelv' [512.00 MiB] inherit
  ACTIVE            '/dev/rootvg/varlv' [2.00 GiB] inherit
  ACTIVE            '/dev/rootvg/usrlv' [200.00 MiB] inherit
[root@temp ~]# lvs
  LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  homelv rootvg -wi-ao---- 512.00m
  rootlv rootvg -wi-ao----   8.00g
  swaplv rootvg -wi-ao----   4.00g
  usrlv  rootvg -wi-ao---- 200.00m
  varlv  rootvg -wi-ao----   2.00g
[root@temp ~]# lvcreate -L 48G --name upslv upsvg
  Logical volume "upslv" created.
[root@temp ~]# lvs
  LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  homelv rootvg -wi-ao---- 512.00m
  rootlv rootvg -wi-ao----   8.00g
  swaplv rootvg -wi-ao----   4.00g
  usrlv  rootvg -wi-ao---- 200.00m
  varlv  rootvg -wi-ao----   2.00g
  upslv  upsvg  -wi-a-----  48.00g
[root@temp ~]# lvscan
  ACTIVE            '/dev/upsvg/upslv' [48.00 GiB] inherit
  ACTIVE            '/dev/rootvg/rootlv' [8.00 GiB] inherit
  ACTIVE            '/dev/rootvg/swaplv' [4.00 GiB] inherit
  ACTIVE            '/dev/rootvg/homelv' [512.00 MiB] inherit
  ACTIVE            '/dev/rootvg/varlv' [2.00 GiB] inherit
  ACTIVE            '/dev/rootvg/usrlv' [200.00 MiB] inherit
- Format the file system
[root@temp ~]# mkfs.xfs /dev/upsvg/upslv
meta-data=/dev/upsvg/upslv       isize=512    agcount=4, agsize=3145728 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=12582912, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=6144, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
- Mount the directory
mkdir -p /ups
mount /dev/upsvg/upslv /ups
cp /etc/fstab{,_$(date +%Y%m%d)}
echo "/dev/mapper/upsvg-upslv /ups xfs defaults 0 0" >> /etc/fstab
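Before rebooting it is worth validating the edited /etc/fstab, since a duplicated mount point (for example a second /ups line from re-running the echo above) is a common cause of boot-time mount problems. A minimal sketch:

```shell
# Flag mount points that appear more than once in /etc/fstab
# (comments and blank lines are skipped).
awk '!/^[[:space:]]*(#|$)/ { count[$2]++ }
     END { for (mp in count)
               if (count[mp] > 1)
                   printf "duplicate mount point: %s (%d entries)\n", mp, count[mp] }' /etc/fstab
```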
6) Storage capacity requirements
ASM storage redundancy level:
1.2 Software requirements
2. System environment configuration
2.1 RHEL 6 configuration
Configure the hosts file
OPT_TIME=$(date +%Y%m%d)
cp /etc/hosts{,_$OPT_TIME}
cat > /etc/hosts <<-EOF
127.0.0.1 loopback localhost.localdomain localhost
::1 loopback
# Public
192.168.10.166 node1
192.168.10.167 node2
# Private
172.168.0.156 node1-priv
172.168.0.157 node2-priv
# Virtual
192.168.10.168 node1-vip
192.168.10.169 node2-vip
# SCAN
192.168.10.170 orc12c-scan
192.168.10.171 orc12c-scan
192.168.10.172 orc12c-scan
EOF
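A copy-paste slip in this file (two IPs mapped to the same node name) silently breaks VIP and interconnect resolution, so a quick sanity check helps. SCAN names legitimately repeat, so they are excluded; the /scan/ filter pattern is an assumption matching the naming used above.

```shell
# Report host names that appear on more than one active line of /etc/hosts.
# Names containing "scan" are excluded since SCAN entries repeat by design.
awk '!/^[[:space:]]*(#|$)/ { for (i = 2; i <= NF; i++) seen[$i]++ }
     END { for (h in seen)
               if (seen[h] > 1 && h !~ /scan/)
                   printf "%s appears %d times\n", h, seen[h] }' /etc/hosts
```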
Install the required OS packages
cat > /tmp/os_pkgs.txt <<-EOF
bc
binutils
compat-libcap1
compat-libstdc++-33
compat-libstdc++-33.i686
e2fsprogs
e2fsprogs-libs
glibc.i686
glibc
glibc-devel.i686
glibc-devel
ksh
libaio
libaio.i686
libaio-devel
libaio-devel.i686
libX11.i686
libX11
libXau.i686
libXau
libXi.i686
libXi
libXtst.i686
libXtst
libgcc.i686
libgcc
libstdc++.i686
libstdc++
libstdc++-devel.i686
libstdc++-devel
libxcb.i686
libxcb
libXrender.i686
libXrender
libXrender-devel.i686
libXrender-devel
make
net-tools
nfs-utils
smartmontools
sysstat
libXext
gcc
gcc-c++
# --
cloog-ppl
cpp
glibc-headers
kernel-headers
mpfr
ppl
EOF
# while read line; do
# rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" $line
# done < /tmp/os_pkgs.txt
WORK_DIR='/tmp'
OS_YUM_PACKAGES=$(cat /tmp/os_pkgs.txt)
rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" ${OS_YUM_PACKAGES}
if [[ "$?" != "0" ]]; then
# Some required packages are missing and must be installed
[[ -z "${ISO_FILE}" ]] && ISO_FILE=$(ls "${WORK_DIR}"/*.iso 2>/dev/null | head -1)
if [[ -f "${ISO_FILE}" ]]; then
# Mount the ISO if one is present, otherwise mount the cdrom device
mount -o loop ${ISO_FILE} /mnt
else
mount -o loop /dev/cdrom /mnt
fi
if [[ -f "/mnt/RPM-GPG-KEY-redhat-release" ]]; then
# Mount succeeded -- install the dependencies from a local repo
cp -rpf /etc/yum.repos.d "${WORK_DIR}"
echo -e '[Server]\nname=LocalYUM' > /etc/yum.repos.d/oracle.repo
echo 'baseurl=file:///mnt' >> /etc/yum.repos.d/oracle.repo
echo -e 'enabled=1\ngpgcheck=0' >> /etc/yum.repos.d/oracle.repo
echo 'gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release' >> /etc/yum.repos.d/oracle.repo
yum -y install ${OS_YUM_PACKAGES}
umount /mnt
rm -f "/etc/yum.repos.d/oracle.repo"
else
# Mount failed -- fall back to the system's existing yum configuration
yum -y install ${OS_YUM_PACKAGES}
fi
fi
-- Verify
for i in $(cat /tmp/os_pkgs.txt); do
rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" ${i} |grep 'is not installed'
done
Disable SELinux
if [[ "$(getenforce)" = "Enforcing" ]]; then
cp /etc/selinux/config{,$OPT_TIME}
setenforce 0
sed -i "/^SELINUX=enforcing/c\#SELINUX=enforcing\nSELINUX=disabled" /etc/selinux/config
fi
Disable the firewall
chkconfig iptables off
service iptables stop
Configure kernel parameters
# MemTotal below is reported in kB
MEMTOTAL=$(grep MemTotal /proc/meminfo|awk '{print $2}')
SHMMAX=$(( MEMTOTAL * 1024 * 8 / 10 ))
SHMMNI=4096
PAGESIZE=$(getconf PAGE_SIZE)
SHMALL=$(( SHMMAX / 4096 ))
cat > /etc/sysctl.d/97-oracle-database-sysctl.conf<<-EOF
# -- The number of asynchronous IO requests at the same time(as per Note 579108.1),for example 1048576 = 1024 * 1024
fs.aio-max-nr = 3145728
# -- 512 * processes (for example 6815744 for 13312 processes)
fs.file-max = 6815744
# is maximum of sga in bytes
kernel.shmmax = ${SHMMAX}
kernel.shmall = ${SHMALL}
kernel.shmmni = ${SHMMNI}
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
kernel.panic_on_oops = 1
vm.min_free_kbytes = 524288
EOF
/sbin/sysctl -p /etc/sysctl.d/97-oracle-database-sysctl.conf
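Since shmall is expressed in pages while shmmax is in bytes, a useful cross-check after loading is that shmall * PAGE_SIZE covers shmmax. (The file above divides by a hard-coded 4096, which equals PAGE_SIZE on x86_64; the sketch below uses getconf so the check also holds for other page sizes.)

```shell
# Cross-check the loaded shmmax/shmall values: shmall (pages) * PAGE_SIZE
# should be at least shmmax (bytes), or large SGA allocations will fail.
PAGESIZE=$(getconf PAGE_SIZE)
SHMMAX=$(cat /proc/sys/kernel/shmmax)
SHMALL=$(cat /proc/sys/kernel/shmall)
if [ $(( SHMALL * PAGESIZE )) -ge "${SHMMAX}" ]; then
    echo "shmall covers shmmax: OK"
else
    echo "shmall may be too small for shmmax=${SHMMAX}"
fi
```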
Configure user resource limits
After saving the changes, log out and log back in (no server reboot needed) for them to take effect.
MEMTOTAL=$(grep MemTotal /proc/meminfo|awk '{print $2}')
MEMLOCK=$(( MEMTOTAL * 9/10 ))
cat > /etc/security/limits.d/99-grid-oracle-limits.conf << EOF
oracle soft nproc 16384 #Ora bug 15971421
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
# setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle soft memlock ${MEMLOCK}
oracle hard memlock ${MEMLOCK}
grid soft nproc 16384 # Ora bug 15971421
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock ${MEMLOCK}
grid hard memlock ${MEMLOCK}
EOF
Parameter notes
# 1. How the default nproc value is derived
# The formula is:
default_nproc = max_threads / 2;
# where max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
# mempages is the number of physical pages on the machine and THREAD_SIZE = 8K, so:
default_nproc = max_threads / 2
= (mempages * PAGE_SIZE) / ( 2 * 8 * THREAD_SIZE )
= total_memory / 128K;
# Compute this host's default nproc:
cat /proc/meminfo | grep MemTotal
MemTotal: 115571480 kB
echo "115571480 / 128" | bc
902902
ulimit -u
902682
# On RHEL 6.8, default_nproc == (total_memory / 128K / 2)
[root@appServer config]# echo $(($(awk '/MemTotal/{print $2}' /proc/meminfo ) /256))
15298
[root@appServer config]# ulimit -u
15187
[root@appServer config]#
# Annotated output of ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited # maximum size of a process's data segment
scheduling priority (-e) 0
file size (blocks, -f) unlimited # maximum size of files the shell may create, 1 block = 512 bytes
pending signals (-i) 1031426 # maximum number of pending signals
max locked memory (kbytes, -l) 64 # maximum physical memory each process may lock
max memory size (kbytes, -m) unlimited # maximum resident set size per process
open files (-n) 65535 # maximum number of files a process may hold open at once; must not be unlimited
pipe size (512 bytes, -p) 8 # maximum pipe size, 1 block = 512 bytes
POSIX message queues (bytes, -q) 819200 # maximum size of POSIX message queues
real-time priority (-r) 0
stack size (kbytes, -s) 10240 # maximum stack size per process
cpu time (seconds, -t) unlimited # maximum CPU seconds per process; at the hard limit the process is killed immediately, at the soft limit it receives SIGXCPU once per second
max user processes (-u) 131072 # maximum number of processes a single user may run concurrently; must not be unlimited
virtual memory (kbytes, -v) unlimited # maximum virtual memory per process
file locks (-x) unlimited # maximum number of file locks per process
-H sets the hard limit for a resource. Only a user with root privileges may raise a hard limit; any user may lower one.
-S sets the soft limit for a resource; a soft limit may be raised, but only up to the hard limit.
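The -S/-H distinction is easy to demonstrate in a throwaway shell: a process may lower its own soft limit, but only root can raise the hard limit.

```shell
# Soft vs. hard limits for open files, demonstrated in the current shell.
echo "hard nofile: $(ulimit -H -n)"
echo "soft nofile: $(ulimit -S -n)"
# Lowering the soft limit affects only this shell and its children.
ulimit -S -n 512
echo "soft nofile after lowering: $(ulimit -S -n)"
```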
Create users
# Create groups
groupadd --gid 54321 oinstall
groupadd --gid 54322 dba
groupadd --gid 54323 asmdba
groupadd --gid 54324 asmoper
groupadd --gid 54325 asmadmin
groupadd --gid 54326 oper
groupadd --gid 54327 backupdba
groupadd --gid 54328 dgdba
groupadd --gid 54329 kmdba
groupadd --gid 54330 racdba
# Create users
useradd --uid 54321 --gid oinstall --groups dba,oper,asmdba,asmoper,backupdba,dgdba,kmdba,racdba --comment "Oracle Software Owner" oracle
useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper --comment "Grid Infrastructure Owner" grid
# Set the user passwords
OS_PASS=123456
echo "${OS_PASS}"|passwd --stdin oracle
echo "${OS_PASS}"|passwd --stdin grid
Create the software directories
mkdir -p /oracle/app/grid
mkdir -p /oracle/app/12.2/grid
mkdir -p /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app
mkdir -p /oracle/app/database
chown oracle:oinstall /oracle/app/database
chmod -R 775 /oracle/app
Edit the user environment variables
Note: adjust the ORACLE_SID value for each node as appropriate.
su - grid
vi ~/.bash_profile
export ORACLE_SID=+ASM1
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/12.2/grid
export PATH=${ORACLE_HOME}/bin:${ORACLE_HOME}/OPatch:${PATH}
umask 022
su - oracle
vi ~/.bash_profile
export ORACLE_SID=orcl1
export ORACLE_BASE=/oracle/app/database
export ORACLE_HOME=${ORACLE_BASE}/product/db_1
export GRID_HOME=/oracle/app/12.2/grid
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib
export PATH=${PATH}:${ORACLE_HOME}/bin:${ORACLE_HOME}/OPatch:${ORACLE_HOME}/suptools/oratop:${GRID_HOME}/bin
export TNS_ADMIN=${GRID_HOME}/network/admin
umask 022
For Oracle 12cR2 and later, oratop ships in the ${ORACLE_HOME}/suptools/oratop directory; it is also bundled with Oracle Trace File Analyzer (TFA).
Configure profile
cat > /etc/profile.d/oracle-grid.sh << EOF
#Setting the appropriate ulimits for oracle and grid user
if [ \$USER = "oracle" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
if [ \$USER = "grid" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
EOF
Disable transparent_hugepage
# RHEL 6: append transparent_hugepage=never to the kernel line in /etc/grub.conf
# RHEL 7: add it to GRUB_CMDLINE_LINUX in /etc/default/grub instead (see section 2.2)
TH_CONF='/etc/grub.conf'
if [[ -f "${TH_CONF}" ]] ; then
cp -p ${TH_CONF}{,$OPT_TIME}
sed -i '/^[[:space:]]*kernel /s/$/ transparent_hugepage=never/' ${TH_CONF}
fi
Configure the PAM module
cp /etc/pam.d/login{,$OPT_TIME}
cat >> /etc/pam.d/login <<-EOF
session required pam_limits.so
EOF
Configure NTP
# Check the current time zone
cat /etc/sysconfig/clock
# Add the NTP server; adjust the address for your environment
vi /etc/ntp.conf
server 127.127.0.1
# Run ntpd in slew mode by adding the -x option to OPTIONS
vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Start ntpd at boot
chkconfig ntpd on
# Restart the service to pick up the new configuration
service ntpd restart
# Verify
ntpq -p
ps -ef |grep ntp|grep -v grep
# After the installation completes, confirm with:
$ crsctl check ctss
# If no NTP server is available, remove or rename /etc/ntp.conf so that the Oracle CTSS service synchronizes time across the cluster.
Configure /dev/shm (for AMM memory management)
On Linux this setting affects the database's Automatic Memory Management. In an AMM environment, confirm that /dev/shm is available and larger than MEMORY_TARGET, since all SGA memory is allocated under /dev/shm.
cat /etc/fstab
tmpfs /dev/shm tmpfs defaults,size=337920M 0 0
mount -o remount /dev/shm
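Before starting an AMM instance, this comparison can be scripted. TARGET_MB below is a placeholder for your planned MEMORY_TARGET; when /dev/shm is smaller than MEMORY_TARGET the instance fails to start with ORA-00845.

```shell
# Compare /dev/shm capacity with the intended MEMORY_TARGET.
# TARGET_MB is a placeholder -- substitute your planned value (in MB).
TARGET_MB=4096
shm_mb=$(df -k /dev/shm | awk 'NR==2 {print int($2 / 1024)}')
echo "/dev/shm: ${shm_mb} MB, MEMORY_TARGET: ${TARGET_MB} MB"
if [ "${shm_mb}" -gt "${TARGET_MB}" ]; then
    echo "OK: /dev/shm is larger than MEMORY_TARGET"
else
    echo "WARNING: enlarge /dev/shm or expect ORA-00845 at startup"
fi
```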
Configure SSH user equivalence
export SSH='ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no'
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
ssh -o stricthostkeychecking=no node2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
scp -r ~/.ssh /home/grid/
scp -r ~/.ssh /home/oracle/
chown grid:oinstall /home/grid/.ssh -R
chown oracle:oinstall /home/oracle/.ssh -R
# Verify
export SSH='ssh -o ConnectTimeout=3 -o ConnectionAttempts=5 -o PasswordAuthentication=no -o StrictHostKeyChecking=no'
for hosts in $(grep -Ev "^#|\-vip|scan" /etc/hosts|awk '{print $NF}'); do
${SSH} ${hosts} date
done
2.2 RHEL 7 configuration
Configure the tmpfs virtual file system
vi /etc/fstab
tmpfs /dev/shm tmpfs defaults,rw,exec,size=2G 0 0
-- Remount
# mount -o remount /dev/shm
disable Transparent HugePages
# Check
cat /sys/kernel/mm/transparent_hugepage/enabled
1. For Oracle Linux 7 and Red Hat Enterprise Linux 7, add or modify the transparent_hugepage=never parameter
in the /etc/default/grub file:transparent_hugepage=never
For example:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto quiet numa=off transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
# fdisk -l /dev/sda -- confirm the disk partition type
# Back up the configuration files
## For an MBR (BIOS-based) system:
cp /etc/default/grub /etc/default/grub-backup
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg-backup
## For a GPT (UEFI-based) system
cp /etc/default/grub /etc/default/grub-backup
cp /boot/efi/EFI/redhat/grub.cfg /boot/efi/EFI/redhat/grub.cfg-backup
2. Run the grub2-mkconfig command to regenerate the grub.cfg file.
## Please ensure to take a backup of the existing /boot/grub2/grub.cfg before rebuilding.
On BIOS-based machines: ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines: ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
3. Restart the system to make the changes permanent.
systemctl reboot
# 4. Verify
cat /proc/cmdline
cat /sys/kernel/mm/transparent_hugepage/enabled
If Transparent Huge Pages (THP) is still not disabled, continue with one of the options below.
Option 1: (Recommended) create a customized tuned profile with disabled THP
With this resolution we will create a customized version of the currently running profile. The customized version will disable THP.
Find out which profile is active, create a copy. In the following example we currently use the throughput-performance profile:
# tuned-adm active
Current active profile: throughput-performance
To create customized profile, create a new directory in /etc/tuned directory with desired profile name.
# mkdir /etc/tuned/myprofile-nothp
Then create a new tuned.conf file for myprofile-nothp, and insert the new tuning info:
# cat /etc/tuned/myprofile-nothp/tuned.conf
[main]
include=throughput-performance
[vm]
transparent_hugepages=never
Make the script executable:
# chmod +x /etc/tuned/myprofile-nothp/tuned.conf
Enable myprofile like so:
# tuned-adm profile myprofile-nothp
This change takes effect immediately and persists across reboots.
To verify if THP are disabled or not, run below command:
# cat /sys/kernel/mm/transparent_hugepage/enabled
Option 2: (Alternative) Disable tuned services
This resolution will disable the tuned services.
# systemctl stop tuned
# systemctl disable tuned
OR
# tuned-adm off
Now add "transparent_hugepage=never" kernel parameter in grub2 configuration file as explained in steps 1-3 above.
Reboot the server for changes to take effect.
Install the OS rpm packages
-- Packages for Red Hat Enterprise Linux 7:
gcc
gcc-c++
bc
binutils
compat-libcap1
compat-libstdc++-33
dtrace-modules
dtrace-modules-headers
dtrace-modules-provider-headers
dtrace-utils
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libdtrace-ctf-devel
libX11
libXau
libXi
libXtst
libXrender
libXrender-devel
libgcc
librdmacm-devel
libstdc++
libstdc++-devel
libxcb
make
net-tools (for Oracle RAC and Oracle Clusterware)
nfs-utils (for Oracle ACFS)
python (for Oracle ACFS Remote)
python-configshell (for Oracle ACFS Remote)
python-rtslib (for Oracle ACFS Remote)
python-six (for Oracle ACFS Remote)
targetcli (for Oracle ACFS Remote)
smartmontools
sysstat
unixODBC
unixODBC-devel
rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" gcc gcc-c++ bc binutils compat-libcap1 compat-libstdc++-33 dtrace-modules dtrace-modules-headers dtrace-modules-provider-headers dtrace-utils elfutils-libelf elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel libdtrace-ctf-devel libX11 libXau libXi libXtst libXrender libXrender-devel libgcc librdmacm-devel libstdc++ libstdc++-devel libxcb make net-tools smartmontools sysstat unixODBC unixODBC-devel
yum -y install compat-libstdc++-33.i686 glibc.i686 glibc-devel.i686 libaio.i686 libaio-devel.i686 libgcc.i686 libstdc++.i686 libstdc++-devel.i686 libXi.i686 libXtst.i686 unixODBC.i686 unixODBC-devel.i686
yum -y install gcc gcc-c++ bc binutils compat-libcap1 compat-libstdc++-33 dtrace-modules dtrace-modules-headers dtrace-modules-provider-headers dtrace-utils elfutils-libelf elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel libdtrace-ctf-devel libX11 libXau libXi libXtst libXrender libXrender-devel libgcc librdmacm-devel libstdc++ libstdc++-devel libxcb make net-tools smartmontools sysstat unixODBC unixODBC-devel
Configure the /etc/hosts file
cp /etc/hosts /etc/hosts_$(date +%Y%m%d)
cat > /etc/hosts << EOF
127.0.0.1 loopback localhost.localdomain localhost localhost4
::1 loopback localhost.localdomain localhost localhost6
# Public
192.168.10.173 o19c1 o19c1.example.com
192.168.10.174 o19c2 o19c2.example.com
# Private
172.168.0.158 o19c1-priv o19c1.example.com-priv
172.168.0.159 o19c2-priv o19c2.example.com-priv
# Virtual
192.168.10.175 o19c1-vip o19c1.example.com-vip
192.168.10.176 o19c2-vip o19c2.example.com-vip
# SCAN
#192.168.10.177 o19c-scan o19c-scan.example.com
#192.168.10.178 o19c-scan o19c-scan.example.com
#192.168.10.179 o19c-scan o19c-scan.example.com
EOF
Configure SSH (set LoginGraceTime 0)
cp /etc/ssh/sshd_config /etc/ssh/sshd_config_$(date +%Y%m%d)
or
cp /etc/ssh/sshd_config{,_$(date +%Y%m%d)}
grep '^LoginGraceTime' /etc/ssh/sshd_config
echo "LoginGraceTime 0" >> /etc/ssh/sshd_config
Disable SELinux
if [[ "$(getenforce)" = "Enforcing" ]]; then
cp /etc/selinux/config /etc/selinux/config_$(date +%Y%m%d)
setenforce 0
sed -i "/^SELINUX=enforcing/c\#SELINUX=enforcing\nSELINUX=disabled" /etc/selinux/config
fi
Disable the firewall
if [[ "$(ps -ef |grep -v grep|grep -ci firewalld)" = "1" ]]; then
systemctl stop firewalld
systemctl disable firewalld
fi
Configure PAM
cp /etc/pam.d/login{,_$(date +%Y%m%d)}
echo "session required pam_limits.so" >> /etc/pam.d/login
Disable the avahi-daemon service
-- check the service status
systemctl status avahi-daemon
-- stop the service
systemctl stop avahi-daemon
-- disable autostart
systemctl disable avahi-daemon
if [[ -f "/etc/systemd/system/dbus-org.freedesktop.Avahi.service" ]]; then
systemctl stop avahi-dnsconfd
systemctl stop avahi-daemon
systemctl disable avahi-dnsconfd
systemctl disable avahi-daemon
fi
Disable the RemoveIPC parameter [RHEL 7.2 bug, Doc ID 2081410.1]
if [[ -f "/etc/systemd/logind.conf" ]]; then
cp /etc/systemd/logind.conf /etc/systemd/logind.conf_$(date +%Y%m%d)
sed -i "/#RemoveIPC=/a\RemoveIPC=no" /etc/systemd/logind.conf
fi
grep '^RemoveIPC' /etc/systemd/logind.conf
[[ "$?" -eq "1" ]] && cp /etc/systemd/logind.conf{,_$(date +%Y%m%d)} && sed -i "/#RemoveIPC=/a\RemoveIPC=no" /etc/systemd/logind.conf
# Reboot the server or restart systemd-logind
systemctl daemon-reload
systemctl restart systemd-logind
Configure user resource limits
# MemTotal below is reported in kB
MEMTOTAL=$(grep MemTotal /proc/meminfo|awk '{print $2}')
MEMLOCK=$(( MEMTOTAL * 9/10 ))
cat > /etc/security/limits.d/99-grid-oracle-limits.conf << EOF
oracle soft nproc 16384 #Ora bug 15971421
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
# setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle soft memlock ${MEMLOCK}
oracle hard memlock ${MEMLOCK}
grid soft nproc 16384 #Ora bug 15971421
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock ${MEMLOCK}
grid hard memlock ${MEMLOCK}
EOF
Configure kernel parameters
# MemTotal below is reported in kB
MEMTOTAL=$(grep MemTotal /proc/meminfo|awk '{print $2}')
SHMMAX=$(( MEMTOTAL * 1024 * 8 / 10 ))
SHMMNI=4096
PAGESIZE=$(getconf PAGE_SIZE)
SHMALL=$(( SHMMAX / 4096 ))
cat > /etc/sysctl.d/97-oracle-database-sysctl.conf << EOF
# -- The number of asynchronous IO requests at the same time(as per Note 579108.1),for example 1048576 = 1024 * 1024
fs.aio-max-nr = 3145728
# -- 512 * processes (for example 6815744 for 13312 processes)
fs.file-max = 6815744
# is maximum of sga in bytes
kernel.shmmax = ${SHMMAX}
kernel.shmall = ${SHMALL}
kernel.shmmni = ${SHMMNI}
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
kernel.panic_on_oops = 1
vm.min_free_kbytes = 524288
EOF
# /sbin/sysctl --system
-- Verify
/sbin/sysctl -a | grep -E 'net.core.[wr]mem_max'
Table A-1 Minimum Operating System Resource Parameter Settings
Parameter | Value | File
---|---|---
semmsl semmns semopm semmni | 250 32000 100 128 | /proc/sys/kernel/sem
shmall | Greater than or equal to the value of shmmax, in pages | /proc/sys/kernel/shmall
shmmax | Half the size of physical memory, in bytes. See My Oracle Support Note 567506.1 for additional information about configuring shmmax. | /proc/sys/kernel/shmmax
shmmni | 4096 | /proc/sys/kernel/shmmni
panic_on_oops | 1 | /proc/sys/kernel/panic_on_oops
file-max | 6815744 | /proc/sys/fs/file-max
aio-max-nr | 1048576. Note: this value limits concurrent outstanding requests and should be set to avoid I/O subsystem failures. | /proc/sys/fs/aio-max-nr
ip_local_port_range | Minimum: 9000; Maximum: 65500 | /proc/sys/net/ipv4/ip_local_port_range
rmem_default | 262144 | /proc/sys/net/core/rmem_default
rmem_max | 4194304 | /proc/sys/net/core/rmem_max
wmem_default | 262144 | /proc/sys/net/core/wmem_default
wmem_max | 1048576 | /proc/sys/net/core/wmem_max
UDP and TCP kernel parameters (temporary setting)
# Check
cat /proc/sys/net/ipv4/ip_local_port_range
# Configure
echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range
#
/etc/rc.d/init.d/network restart
Configure the nozeroconf network parameter
cp /etc/sysconfig/network /etc/sysconfig/network_$(date +%Y%m%d)
cat >> /etc/sysconfig/network << EOF
NOZEROCONF=yes
EOF
cat >> /etc/sysconfig/network-scripts/ifcfg-lo << EOF
MTU=16436
EOF
System time synchronization
RHEL recommends NTP; see [Tips on Troubleshooting NTP / chrony Issues] (Doc ID 2068875.1)
# Confirm the date and time zone are correct
date
# Check the current time zone
timedatectl status
# Change the time zone (if needed)
timedatectl set-timezone Asia/Shanghai
# Stop chrony and remove its configuration (VMs here skip a time-sync service and rely on CTSS)
systemctl list-unit-files|grep chronyd
systemctl status chronyd
systemctl disable chronyd
systemctl stop chronyd
# Back up the chrony configuration by moving it aside
mv /etc/chrony.conf /etc/chrony.conf_bak
# Option A: time synchronization via chronyd
# Configure the time server (if needed)
vi /etc/chrony.conf
server 192.168.10.166 iburst
# Verify
chronyc sources -v
chronyc sourcestats -v
# Option B: time synchronization via the NTP service
# 1. Stop and disable the chrony service
systemctl list-unit-files|grep chronyd
systemctl status chronyd
systemctl disable chronyd
systemctl stop chronyd
# 2. Remove its configuration file
mv /etc/chrony.conf /etc/chrony.conf_bak
# 3. Install the NTP service
yum install -y ntp ntpdate
systemctl enable ntpd
systemctl start ntpd
# Sync the time manually
ntpdate -u <NTP Server>
# 4. Configure NTP in slew mode
# Edit /etc/sysconfig/ntpd and add the -x and -p options after -g
# Command line options for ntpd
OPTIONS="-g -x -p /var/run/ntpd.pid"
# 5. Verify
ntpq -p
ntpstat # confirm this server is connected to an NTP server
Create groups and users
# 1. Create groups
/usr/sbin/groupadd -g 54321 oinstall
/usr/sbin/groupadd -g 54322 dba
/usr/sbin/groupadd -g 54323 oper
/usr/sbin/groupadd -g 54324 backupdba
/usr/sbin/groupadd -g 54325 dgdba
/usr/sbin/groupadd -g 54326 kmdba
/usr/sbin/groupadd -g 54327 asmdba
/usr/sbin/groupadd -g 54328 asmoper
/usr/sbin/groupadd -g 54329 asmadmin
/usr/sbin/groupadd -g 54330 racdba
# 2. Create users
/usr/sbin/useradd -u 54321 -g oinstall -G dba,oper,asmdba,backupdba,dgdba,kmdba,racdba --comment "Oracle Software Owner" oracle
/usr/sbin/useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba --comment "Grid Infrastructure Owner" grid
$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54330(racdba)
$ id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54327(asmdba),54328(asmoper),54329(asmadmin)
# 3. Set the passwords
echo "oracle"|passwd --stdin oracle
echo "grid"|passwd --stdin grid
Create the software installation directories
# 1. Software installation directories
mkdir -p /ups/oracle/grid
mkdir -p /ups/oracle/19c/grid
mkdir -p /ups/oracle/oraInventory
chown -R grid:oinstall /ups/oracle
mkdir -p /ups/oracle/database/db_1
chown -R oracle:oinstall /ups/oracle/database
chmod -R 775 /ups/oracle
# Staging directory for the installation media (temporary)
mkdir -p /ups/soft
chown -R grid:oinstall /ups/soft
chmod 775 /ups/soft
#
cat >> /etc/oraInst.loc <<EOF
inventory_loc=/ups/oracle/oraInventory
inst_group=oinstall
EOF
chown grid:oinstall /etc/oraInst.loc
chmod 644 /etc/oraInst.loc
Configure the user environment variables
su - grid
vi ~/.bash_profile
export ORACLE_SID=+ASM1
export ORACLE_BASE=/ups/oracle/grid
export ORACLE_HOME=/ups/oracle/19c/grid
export PATH=${ORACLE_HOME}/bin:${ORACLE_HOME}/OPatch:${PATH}
export INVENTORY_LOCATION=/ups/oracle/oraInventory
umask 022
if [ -t 0 ]; then
stty intr ^C
fi
su - oracle
vi ~/.bash_profile
export ORACLE_SID=o19cdb1
export ORACLE_BASE=/ups/oracle/database
export ORACLE_HOME=${ORACLE_BASE}/db_1
export GRID_HOME=/ups/oracle/19c/grid
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib
export PATH=${ORACLE_HOME}/bin:${ORACLE_HOME}/OPatch:${ORACLE_HOME}/suptools/oratop:${ORACLE_HOME}/perl/bin:${GRID_HOME}/bin:${HOME}/scripts/bin:${PATH}
export TNS_ADMIN=${GRID_HOME}/network/admin
export SQLPATH=${ORACLE_HOME}/rdbms/admin:${HOME}/scripts/sql
umask 022
if [ -t 0 ]; then
stty intr ^C
fi
Configure profile
cat > /etc/profile.d/oracle-grid.sh << EOF
#Setting the appropriate ulimits for oracle and grid user
if [ \$USER = "oracle" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
if [ \$USER = "grid" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
EOF
Configure SSH user equivalence
export SSH='ssh -o PasswordAuthentication=no -o StrictHostKeyChecking=no'
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
ssh -o stricthostkeychecking=no 192.168.10.174 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys 192.168.10.174:~/.ssh/authorized_keys
scp -r ~/.ssh /home/grid/
scp -r ~/.ssh /home/oracle/
chown grid:oinstall /home/grid/.ssh -R
chown oracle:oinstall /home/oracle/.ssh -R
# Verify
export SSH='ssh -o ConnectTimeout=3 -o ConnectionAttempts=5 -o PasswordAuthentication=no -o StrictHostKeyChecking=no'
for ip in $(grep -Ev "^#|localhost|vip|scan" /etc/hosts | awk '{print $1}'); do
${SSH} ${ip} date
done
# Alternatively, the equivalence can be set up with the script shipped with Oracle
Configure DNS
# Install via yum
yum -y install unbound
yum install -y bind-utils
# Edit the configuration file /etc/unbound/unbound.conf
vi /etc/unbound/unbound.conf
……
38 # interface: 0.0.0.0
39 interface: 0.0.0.0
……
//At line 38, duplicate the commented line and remove the comment marker to listen on all interfaces.
177 # access-control: 0.0.0.0/0 refuse
178 access-control: 192.168.10.0/24 allow
179 # access-control: 127.0.0.0/8 allow
// Line 177 of /etc/unbound/unbound.conf is commented out by default and refuses access. Copy it to the line below, remove the "#", and change refuse to allow for your subnet; then save and restart the service.
155 # do-ip6: yes
156 do-ip6: no
//At line 155, duplicate the line below it without the comment and change yes to no; restarting the service then stops the IPv6 listener.
# Create the zone data file
cat > /etc/unbound/local.d/example.conf << EOF
local-zone: "example.com." static
local-data: "example.com. 86400 IN SOA ns.example.com. root 1 1D 1H 1W 1H"
local-data: "ns.example.com. IN A 192.168.10.173"
local-data: "o19c1.example.com. IN A 192.168.10.173"
local-data: "o19c-scan.example.com. IN A 192.168.10.177"
local-data: "o19c-scan.example.com. IN A 192.168.10.178"
local-data: "o19c-scan.example.com. IN A 192.168.10.179"
local-data-ptr: "192.168.10.177 o19c-scan.example.com."
local-data-ptr: "192.168.10.178 o19c-scan.example.com."
local-data-ptr: "192.168.10.179 o19c-scan.example.com."
EOF
# Start and check the service
systemctl start unbound
systemctl restart unbound
systemctl status unbound
netstat -tunlp |grep unbound
ss -tunlp|grep unbound
cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 192.168.10.173
-- Verify
nslookup o19c-scan
[root@o19c2 network-scripts]# nslookup o19c-scan
Server: 192.168.10.173
Address: 192.168.10.173#53
Name: o19c-scan.example.com
Address: 192.168.10.178
Name: o19c-scan.example.com
Address: 192.168.10.179
Name: o19c-scan.example.com
Address: 192.168.10.177
# If DNS is not enabled, run the following instead
mv /etc/resolv.conf /etc/resolv.conf_orig
Disable automatic NUMA balancing
# Check automatic NUMA balancing (0: disabled, 1: enabled)
sysctl -e kernel.numa_balancing
# Disable the automatic NUMA memory balancing feature
cat >> /etc/sysctl.d/97-oracle-database-sysctl.conf <<-EOF
kernel.numa_balancing = 0
EOF
2.3 Configure shared storage
Create shared disks for the virtual machines
# VMware configuration
set path=%path%;C:\Upgs\VMware\VMware Workstation
vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 4 "o19c-crs1.vmdk"
vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 4 "o19c-crs2.vmdk"
vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 4 "o19c-crs3.vmdk"
vmware-vdiskmanager.exe -c -s 32g -a lsilogic -t 4 "o19c-mgmt.vmdk"
vmware-vdiskmanager.exe -c -s 24g -a lsilogic -t 4 "o19c-data.vmdk"
vmware-vdiskmanager.exe -c -s 8g -a lsilogic -t 4 "o19c-data1.vmdk"
-- Edit the o19c1.vmx and o19c2.vmx files; disk UUIDs may also be used
#
# ----------------------------------------------------------------
# SHARED DISK SECTION - (BEGIN)
# ----------------------------------------------------------------
# - The goal in meeting the hardware requirements is to have a
# shared storage for the two nodes. The way to achieve this in
# VMware is the creation of a NEW SCSI BUS. It has to be of
# type "virtual" and we must have the disk.locking = "false"
# option.
# - Just dataCacheMaxSize = "0" should be sufficient with the
# diskLib.* parameters, although I include all parameters for
# documentation purposes.
# - maxUnsyncedWrites should matter for sparse disks only, and
# I certainly do not recommend using sparse disks for
# clustering.
# - dataCacheMaxSize=0 should disable cache size completely, so
# other three dataCache options should do nothing (no harm,
# but nothing good either).
# ----------------------------------------------------------------
#
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
disk.locking = "false"
# ----------------------------------------------------------------
# Create one HBA
# ----------------------------------------------------------------
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "..\asmdisk\o19cdb-crs1.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.fileName = "..\asmdisk\o19cdb-crs2.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.fileName = "..\asmdisk\o19cdb-crs3.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:2.redo = ""
scsi1:3.present = "TRUE"
scsi1:3.fileName = "..\asmdisk\o19cdb-data1.vmdk"
scsi1:3.mode = "independent-persistent"
scsi1:3.redo = ""
scsi1:4.present = "TRUE"
scsi1:4.fileName = "..\asmdisk\o19cdb-data2.vmdk"
scsi1:4.mode = "independent-persistent"
scsi1:4.redo = ""
scsi1:5.present = "TRUE"
scsi1:5.fileName = "..\asmdisk\o19cdb-data3.vmdk"
scsi1:5.mode = "independent-persistent"
scsi1:5.redo = ""
#
# ----------------------------------------------------------------
# SHARED DISK SECTION - (END)
# ----------------------------------------------------------------
#
Configure UDEV
cd /dev
for i in $(lsblk |grep disk|awk '{print $1}'|grep -v sda); do
echo "KERNEL==\"sd*\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\",RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$i`\", SYMLINK+=\"asm-$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >>/etc/udev/rules.d/99-oracle-asmdevices.rules
done
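The rule string built inside the loop above can be sanity-checked in isolation before anything is written to /etc/udev. Below is a minimal dry-run sketch: it only formats the rule line for a given device-name/WWID pair, without calling scsi_id or touching any system file. The device name and WWID are placeholder values, not taken from a real system.

```shell
#!/bin/sh
# Dry-run sketch: build the udev rule line for one device from a given
# name/WWID pair. No scsi_id call, no write to /etc/udev -- output only.
make_asm_rule() {
  dev="$1"; wwid="$2"
  printf 'KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", SYMLINK+="asm-%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$wwid" "$dev"
}
# Placeholder inputs for illustration:
make_asm_rule sdb 36000c291d17055c53a62f7abcf88455a
```

Reviewing the printed line before appending it to 99-oracle-asmdevices.rules avoids loading a malformed rule set.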
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c291d17055c53a62f7abcf88455a", SYMLINK+="asm-crs1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29d08a0342cdca087c34777f9f7", SYMLINK+="asm-crs2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c299b5061370eef6a9d6edcfb4ee", SYMLINK+="asm-crs3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c299792d931c3c73d16c6d25d7ac", SYMLINK+="asm-mgmt", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c295f9d043986fc840f111d477d8", SYMLINK+="asm-data", OWNER="grid", GROUP="asmadmin", MODE="0660"
# Reload the rules to take effect
/sbin/partprobe /dev/sdd
/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change
Verifying the Disk I/O Scheduler on Linux
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
cd /dev
for i in $(lsblk |grep disk|awk '{print $1}'|grep -v sda); do
echo $i; cat /sys/block/${i}/queue/scheduler
done
cat >> /etc/udev/rules.d/60-oracle-schedulers.rules <<EOF
ACTION=="add|change", KERNEL=="sd[b-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
EOF
/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change
Zero out the shared disks (optional)
for dsk in $(ls -tr /dev/mapper/asm_*);do
dd if=/dev/zero of=${dsk} bs=1024k count=100
done
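The dd-zeroing logic above can be rehearsed safely on a scratch file before pointing it at real ASM devices. This sketch uses a placeholder file under /tmp (an assumption, not a path from the text) to confirm that the overwrite really leaves only zero bytes behind:

```shell
#!/bin/sh
# Sketch: verify the dd-zeroing step on a scratch file instead of a real
# ASM disk. /tmp/fake_asm_disk is a placeholder path.
dsk=/tmp/fake_asm_disk
dd if=/dev/urandom of="$dsk" bs=1024 count=16 2>/dev/null            # simulate a dirty header
dd if=/dev/zero of="$dsk" bs=1024 count=16 conv=notrunc 2>/dev/null  # zero it, as in the loop above
nonzero=$(tr -d '\0' < "$dsk" | wc -c)                               # count non-zero bytes left
[ "$nonzero" -eq 0 ] && echo "header zeroed"
rm -f "$dsk"
```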
3. Software Installation
3.1 Install the GI Software
Changes to the installation method
- In 12.2, extra space to unzip the downloaded file is no longer needed: the directory the GI gold image is extracted into becomes the GRID HOME. Make sure the grid user performs the extraction with a umask of 022.
- Start the installation by running $GRID_HOME/gridSetup.sh instead of runInstaller as in earlier releases.
Preparation and verification
# 1. Extract the software into the Grid home
su - grid
unzip -qo /ups/soft/linuxx64_12201_grid_home.zip -d /oracle/app/12.2/grid
# 2. Install the cvuqdisk package
cd /oracle/app/12.2/grid/cv/rpm
rpm -ivh cvuqdisk-1.0.10-1.rpm
scp /oracle/app/12.2/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm node2:/tmp/
ssh -o stricthostkeychecking=no node2
rpm -ivh /tmp/cvuqdisk-1.0.10-1.rpm
# 3. Check the environment
cd /oracle/app/12.2/grid/
./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose > ~/grid_env_check.txt
Install and configure without first applying PSU patches to the binaries
GUI installation
# 4. Install the GI software
su - grid
cd /oracle/app/12.2/grid/
./gridSetup.sh
Selecting Private & ASM gives a Flex Cluster; selecting Private only gives a traditional standard cluster
Choose YES
Place the GIMR in its own disk group
Select the appropriate disks and redundancy level
Note: when creating the OCR/voting-disk group during RAC setup, do not define failure groups for the disk group; accept the default (one failure group per disk). A NORMAL-redundancy disk group normally needs only two failure groups, but when it holds the OCR/voting files it must contain at least three. A node must be able to access more than half of the voting disks at all times, or it is evicted from the cluster (split-brain protection)
Run the root scripts on each node, one node at a time, in order
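The strict-majority rule for voting disks can be sketched as simple arithmetic: for an odd number of voting disks, a surviving node needs floor(n/2)+1 of them.

```shell
#!/bin/sh
# Sketch: minimum voting disks a node must access to stay in the cluster
# (strict majority), for the usual odd voting-disk counts.
for total in 1 3 5; do
  needed=$(( total / 2 + 1 ))
  echo "voting disks: $total -> must access at least $needed"
done
```

This is why a NORMAL-redundancy disk group holding voting files needs at least three failure groups: with only two, losing one would leave exactly half, not a majority.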
su - root
sh /u01/app/oraInventory/orainstRoot.sh && sh /u01/app/12.2.0.1/grid/root.sh
Silent installation
Here -silent means silent mode, -ignorePrereq ignores the prerequisite check results, and -showProgress displays progress
Command-line parameter method
# Install the GI software
su - grid
cd /oracle/app/12.2/grid/
./gridSetup.sh -ignorePrereq -skipPrereqs -waitforcompletion -silent \
-responseFile ${SOFT_HOME_DIR}/install/response/gridsetup.rsp \
INVENTORY_LOCATION=${ORA_INVENTORY} \
SELECTED_LANGUAGES=en,en_US,zh_CN \
oracle.install.option=${INSTALL_OPTION} \
ORACLE_BASE=${SOFT_BASE_DIR} \
oracle.install.asm.OSDBA=${OSASMDBA_GROUP} \
oracle.install.asm.OSOPER=${OSASMOPER_GROUP} \
oracle.install.asm.OSASM=${OSASM_GROUP} \
oracle.install.crs.config.scanType=LOCAL_SCAN \
oracle.install.crs.config.gpnp.scanName=${SCAN_NAME} \
oracle.install.crs.config.gpnp.scanPort=${SCAN_PORT} \
oracle.install.crs.config.ClusterConfiguration=${CLUSTER_CONFIGURATION} \
oracle.install.crs.config.configureAsExtendedCluster=false \
oracle.install.crs.config.clusterName=${CLUSTER_NAME} \
oracle.install.crs.config.gpnp.configureGNS=false \
oracle.install.crs.config.autoConfigureClusterNodeVIP=false \
oracle.install.crs.config.clusterNodes=${CLUSTER_NODE_LIST} \
oracle.install.crs.config.networkInterfaceList=${INTERFACE_LIST} \
oracle.install.asm.configureGIMRDataDG=${GIMRDATADG_FLAG} \
oracle.install.crs.config.storageOption=${CRS_STORAGE_TYPE} \
oracle.install.crs.config.useIPMI=false \
oracle.install.asm.storageOption=${ASM_STORAGE_TYPE} \
oracle.install.asm.SYSASMPassword=${SYS_PWD} \
oracle.install.asm.diskGroup.name=${CRS_DG_NAME} \
oracle.install.asm.diskGroup.redundancy=${CRS_DG_REDUNDANCY} \
oracle.install.asm.diskGroup.AUSize=${CRS_AU_SIZE} \
oracle.install.asm.diskGroup.disksWithFailureGroupNames= \
oracle.install.asm.diskGroup.disks=${CRS_DISK_LIST} \
oracle.install.asm.diskGroup.diskDiscoveryString=${DISK_DISCOVERY_STRING} \
oracle.install.asm.monitorPassword=${SYS_PWD} \
oracle.install.asm.gimrDG.name=${GIRM_DG_NAME} \
oracle.install.asm.gimrDG.redundancy=${GIRM_DG_REDUNDANCY} \
oracle.install.asm.gimrDG.AUSize=${GIRM_AU_SIZE} \
oracle.install.asm.gimrDG.disks=${GIRM_DISK_LIST} \
oracle.install.asm.configureAFD=false \
oracle.install.crs.configureRHPS=false \
oracle.install.crs.config.ignoreDownNodes=false \
oracle.install.config.managementOption=NONE \
oracle.install.crs.rootconfig.executeRootScript=false
# 2) Run the root scripts on each node, one node at a time, in order
su - root
sh ${ORA_INVENTORY}/orainstRoot.sh && sh ${GRID_HOME}/root.sh
# 3) Create the MGMT management database (not mandatory in 19c)
su - grid
cd /oracle/app/12.2/grid/
GI_SETUP_RSP=$(ls -tr ${SOFT_HOME_DIR}/install/response/db_*.rsp|tail -1)
./gridSetup.sh -silent -executeConfigTools -responseFile ${GI_SETUP_RSP}
Response file method
Prepare the response file
grep -Ev "^#|^$|[ \t]+#" ${ORACLE_HOME}/install/response/gridsetup.rsp > ~/grid.rsp
# Edit the response file contents
cat > ~/gridsetup.rsp<<-EOF
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
oracle.install.option=CRS_CONFIG
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.SCANClientDataFile=
oracle.install.crs.config.gpnp.scanName=o19c-scan
oracle.install.crs.config.gpnp.scanPort=1533
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.memberClusterManifestFile=
oracle.install.crs.config.clusterName=o19c-cluster
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.sites=
oracle.install.crs.config.clusterNodes=o19c1:o19c1-vip:HUB,o19c2:o19c2-vip:HUB
oracle.install.crs.config.networkInterfaceList=ens32:192.168.10.0:1,ens33:172.168.0.0:5
oracle.install.crs.configureGIMR=false
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=oracle
oracle.install.asm.diskGroup.name=CRSDG
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.FailureGroups=
oracle.install.asm.diskGroup.disksWithFailureGroupNames=
oracle.install.asm.diskGroup.disks=/dev/asm-crs1,/dev/asm-crs2,/dev/asm-crs3
oracle.install.asm.diskGroup.quorumFailureGroupNames=
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm-*
oracle.install.asm.monitorPassword=oracle
oracle.install.asm.gimrDG.name=MGMT
oracle.install.asm.gimrDG.redundancy=EXTERNAL
oracle.install.asm.gimrDG.AUSize=4
oracle.install.asm.gimrDG.FailureGroups=
oracle.install.asm.gimrDG.disksWithFailureGroupNames=
oracle.install.asm.gimrDG.disks=/dev/asm-mgmt
oracle.install.asm.gimrDG.quorumFailureGroupNames=
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=
oracle.install.crs.rootconfig.executeRootScript=false
oracle.install.crs.rootconfig.configMethod=
oracle.install.crs.rootconfig.sudoPath=
oracle.install.crs.rootconfig.sudoUserName=
oracle.install.crs.config.batchinfo=
oracle.install.crs.app.applicationAddress=
oracle.install.crs.deleteNode.nodes=
EOF
# Run the installer
${ORACLE_HOME}/gridSetup.sh -silent -ignorePrereq -skipPrereqs -waitforcompletion -responseFile ~/gridsetup.rsp
Applying a PSU patch during GI software configuration
[Reference](How to Apply a Grid Infrastructure Patch Before Grid Infrastructure Configuration (before root.sh or rootupgrade.sh or gridsetup.bat) is Executed (Doc ID 1410202.1))
This must be done before root.sh, rootupgrade.sh, or gridsetup.bat has been executed
This approach applies the GI PSU to the binaries first and then performs the GI cluster configuration
The 19.7 Release Update (RU) contains many fixes related to running Oracle 19c RAC on OL8/RHEL8; for new deployments, installing with the -applyRU option is recommended
Extract the base software into the Grid home and update OPatch
# 1. Extract the software into the Grid home (as the grid user)
su - grid
unzip -qo /ups/soft/linuxx64_12201_grid_home.zip -d /oracle/app/12.2/grid
# 2. Update OPatch (as the grid user)
unzip -d /oracle/app/12.2/grid p6880880_<platform version info>.zip
Worked example
/ups/oracle/o19c/grid/gridSetup.sh -silent -ignorePrereq -skipPrereqs -waitforcompletion -responseFile /ups/soft/gridsetup.rsp -applyRU /ups/soft/31750108
Patch installation/update usage
Usage
Usage: gridSetup.sh [<flag>] [<option>]
Following are the possible flags:
-help - display help.
-silent - run in silent mode. The inputs can be a response file or a list of command line variable value pairs.
[-ignorePrereqFailure - ignore all prerequisite checks failures.]
[-lenientInstallMode - perform the best effort installation by automatically ignoring invalid data in input parameters.]
-responseFile - specify the complete path of the response file to use.
-logLevel - enable the log of messages up to the priority level provided in this argument. Valid options are: severe, warning, info, config, fine, finer, finest.
-executePrereqs | -executeConfigTools | -createGoldImage | -switchGridHome | -downgrade | -dryRunForUpgrade
-executePrereqs - execute the prerequisite checks only.
-executeConfigTools - execute the config tools for an installed home.
[-skipStackCheck - skip the stack status check.]
-createGoldImage - create a gold image from the current Oracle home.
-destinationLocation - specify the complete path to where the created gold image will be located.
[-exclFiles - specify the complete paths to the files to be excluded from the new gold image.]
-switchGridHome - change the Oracle Grid Infrastructure home path.
-downgrade - To downgrade Grid Infrastructure back to old home (to be used only in the case of incomplete upgrade).
-silent - run in silent mode. The inputs can be a response file or a list of command line variable value pairs.
[-ignorePrereqFailure - ignore all prerequisite checks failures.]
[-lenientInstallMode - perform the best effort installation by automatically ignoring invalid data in input parameters.]
[-configmethod - Specify the method to execute scripts as privileged user. If not specified then user will be instructed to run the scripts by logging in as privileged user. Valid options are: root,sudo.]
[-sudopath - Specify the complete path to the sudo program. This is an optional argument. This is needed if 'sudo' is specified for the configmethod and 'sudo' program is not present in the default path.]
[-sudousername - Specify the name of sudoer.]
-dryRunForUpgrade - To perform a dry run of the Grid Infrastructure Upgrade process.
-debug - run in debug mode.
-printdiskusage - log the debug information for the disk usage.
-printmemory - log the debug information for the memory usage.
-printtime - log the debug information for the time usage.
-waitForCompletion - wait for the completion of the installation, instead of spawning the installer and returning the console prompt.
-noconfig - do not execute the config tools.
-noconsole - suppress the display of messages in the console. The console is not allocated.
-ignoreInternalDriverError - ignore any internal driver errors.
-noCopy - perform the configuration without copying the software on to the remote nodes.
-applyRU - apply release update to the Oracle home.
-applyOneOffs - apply one-off patch to the Oracle home. Multiple one-off patches can be passed as a comma separated list of locations.
12.2.0.1
# 3. Apply patches (as the grid user)
# Extract the patch file
unzip -qo p27468969*.zip -d /oracle/app/12.2/grid/
# 3.1 Apply a PSU patch
$GI_HOME/gridSetup.sh -applyPSU <downloaded patch location>
# 3.2 Apply non-PSU (one-off) patches
$GI_HOME/gridSetup.sh -applyOneOffs <downloaded patch location>
# 3.3 Apply PSU and one-off patches together
$GI_HOME/gridSetup.sh -applyPSU <downloaded patch location> -applyOneOffs <downloaded patch location>
18.1 and above
To apply only Release Updates:
$GI_HOME/gridSetup.sh -silent -ignorePrereq -skipPrereqs -waitforcompletion -applyRU <downloaded patch location> -responseFile /ups/soft/gridsetup.rsp
For Windows: $GI_HOME/gridSetup.bat -silent -applyRU <downloaded patch location>
To apply only Non-RU patches:
$GI_HOME/gridSetup.sh -silent -ignorePrereq -skipPrereqs -waitforcompletion -applyOneOffs <downloaded patch location> -responseFile /ups/soft/gridsetup.rsp
For Windows: $GI_HOME/gridSetup.bat -silent -applyOneOffs <downloaded patch location>
To apply Release Updates and Non-RU patches in one command:
$GI_HOME/gridSetup.sh -silent -ignorePrereq -skipPrereqs -waitforcompletion -applyRU <downloaded patch location> -applyOneOffs <downloaded patch location> -responseFile /ups/soft/gridsetup.rsp
Check the cluster service status
su - grid
crsctl status res -t -init
crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.CRS.dg
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.MGMT.dg
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.chad
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.net1.network
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.ons
ONLINE ONLINE ol6-122-rac1 STABLE
ONLINE ONLINE ol6-122-rac2 STABLE
ora.proxy_advm
OFFLINE OFFLINE ol6-122-rac1 STABLE
OFFLINE OFFLINE ol6-122-rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ol6-122-rac2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE ol6-122-rac1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE ol6-122-rac1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE ol6-122-rac1 169.254.15.226 192.1
68.1.201,STABLE
ora.asm
1 ONLINE ONLINE ol6-122-rac1 Started,STABLE
2 ONLINE ONLINE ol6-122-rac2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE ol6-122-rac1 STABLE
ora.mgmtdb
1 ONLINE ONLINE ol6-122-rac1 Open,STABLE
ora.ol6-122-rac1.vip
1 ONLINE ONLINE ol6-122-rac1 STABLE
ora.ol6-122-rac2.vip
1 ONLINE ONLINE ol6-122-rac2 STABLE
ora.qosmserver
1 ONLINE ONLINE ol6-122-rac1 STABLE
ora.scan1.vip
1 ONLINE ONLINE ol6-122-rac2 STABLE
ora.scan2.vip
1 ONLINE ONLINE ol6-122-rac1 STABLE
ora.scan3.vip
1 ONLINE ONLINE ol6-122-rac1 STABLE
--------------------------------------------------------------------------------
$
3.2 Create ASM Disk Groups
Create disk groups via the GUI
su - grid
asmca
Create the required ASM disk groups one by one
Exit when finished.
asmca -silent
Create disk groups from the command line
# Create EXTERNAL-redundancy disk groups
asmca -silent -createDiskGroup -diskString '/dev/asm-*' -diskGroupName DATA2 -diskList '/dev/asm-data1' -redundancy EXTERNAL -au_size 4
asmca -silent -createDiskGroup -diskString '/dev/asm-*' -diskGroupName DATA3 -diskList '/dev/asm-data2','/dev/asm-data3' -redundancy EXTERNAL -au_size 4 # -compatible.asm 12.2 -compatible.rdbms 12.2 -compatible.advm 12.2
# Create a NORMAL-redundancy disk group
asmca -silent -createDiskGroup -diskString '/dev/asm-*' -diskGroupName DATA4 -diskList '/dev/asm-data4','/dev/asm-data5' -redundancy NORMAL -au_size 4
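The redundancy choice above directly determines usable capacity. As a rough rule of thumb (ignoring ASM metadata and the free space that should be reserved for rebalance), EXTERNAL gives about the raw size, NORMAL about half, HIGH about a third. A minimal sketch with an assumed 64 GB of raw member-disk capacity (an example value, not from the text):

```shell
#!/bin/sh
# Sketch: rough usable capacity per ASM redundancy level, ignoring metadata
# and rebalance free-space reserves. raw_gb is an assumed example total.
raw_gb=64
echo "EXTERNAL ~ $raw_gb GB usable"
echo "NORMAL   ~ $(( raw_gb / 2 )) GB usable"
echo "HIGH     ~ $(( raw_gb / 3 )) GB usable"
```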
3.3 Install the DB Software
Extract the software package
su - oracle
unzip -qo /ups/soft/linuxx64_12201_database.zip -d ${ORACLE_HOME}
It is recommended to apply patches during installation when running runInstaller,
using the -applyRU or -applyOneOffs options
For 19.7 RU or later
===========================================
$ export CV_ASSUME_DISTID=OL7
$ ./runInstaller -applyRU <19.7DBRU patch 30869156 unzip location>
For example:
$ ./runInstaller -applyRU /u01/app/30869156
Where:
30869156 - DATABASE RELEASE UPDATE 19.7.0.0.0
For 19.6 RU
===========================================
$ cd /u01/app/oracle/product/19c/dbhome_1
$ export CV_ASSUME_DISTID=OL7
$ ./runInstaller -applyRU <19.6 DBRU patch 30557433 unzip location> -applyOneOffs <19.6 OJVM patch 30484981 unzip location>,<19.6 Oneoff patch 30150710 unzip location>
For example:
$ ./runInstaller -applyRU /u01/app/30557433 -applyOneOffs /u01/app/30484981,/u01/app/30150710
Where:
30557433 - DATABASE RELEASE UPDATE 19.6.0.0.0
30484981 - OJVM RELEASE UPDATE 19.6.0.0.0
30150710 - EM express OneOff patch 19.6.0.0.0
Notes:
1. Download the latest OPatch before running the installer
2. In both cases, after installing the RU the OCW RU remains at version 19.3.0.0.0. End users can choose to update the OCW version after installation or during installation (see steps a and b below).
a. For 19.7 RU or later
===========================================
$ export CV_ASSUME_DISTID=OL7
$ ./runInstaller -applyRU <19.7 DBRU patch 30869156 unzip location> -applyOneOffs <19.7 OCWRU patch 30894985 unzip location>
For example:
$ ./runInstaller -applyRU /u01/app/30869156 -applyOneOffs /u01/app/30899722/30894985
Where:
30869156 - DATABASE RELEASE UPDATE 19.7.0.0.0
30894985 - OCWRU Patch 19.7.0.0.0 (Not separately available for download.Download GI RU 19.7.0.0.0 Patch 30899722 & then unzip to locate the OCWRU Patch)
b. For 19.6 RU
===========================================
$ export CV_ASSUME_DISTID=OL7
$ ./runInstaller -applyRU <19.6DBRU patch 30557433 unzip location> -applyOneOffs <19.6 OJVM patch 30484981 unzip location>,<19.6 Oneoff patch 30150710 unzip location>,<19.6 OCWRU patch 30489227 unzip location>
For example:
$ ./runInstaller -applyRU /u01/app/30557433 -applyOneOffs /u01/app/30484981,/u01/app/30150710,/u01/app/30501910/30489227
Where:
30557433 - DATABASE RELEASE UPDATE 19.6.0.0.0
30484981 - OJVM RELEASE UPDATE 19.6.0.0.0
30150710 - EM express OneOff patch 19.6.0.0.0
30489227 - OCWRU Patch 19.6.0.0.0 (Not separately available for download.Download GI RU 19.6.0.0.0 Patch 30501910 & then unzip to locate the OCWRU Patch)
3. EM Express one-off patch 30150710 is already included in 19.7 DBRU patch 30869156
GUI installation
su - oracle
cd ${ORACLE_HOME}
./runInstaller
Run the root script on each node in turn
su - root
sh /u01/app/oracle/product/12.2.0.1/db_1/root.sh
Silent installation
Usage
# 1. Usage
${ORACLE_HOME}/runInstaller -silent -h
Usage: runInstaller [<flag>] [<option>]
Following are the possible flags:
-help - display help.
-silent - run in silent mode. The inputs can be a response file or a list of command line variable value pairs.
[-ignorePrereqFailure - ignore all prerequisite checks failures.]
-responseFile - specify the complete path of the response file to use.
-logLevel - enable the log of messages up to the priority level provided in this argument. Valid options are: severe, warning, info, config, fine, finer, finest.
-executePrereqs | -executeConfigTools | -createGoldImage
-executePrereqs - execute the prerequisite checks only.
-executeConfigTools - execute the config tools for an installed home.
-createGoldImage - create a gold image from the current Oracle home.
-destinationLocation - specify the complete path to where the created gold image will be located.
[-exclFiles - specify the complete paths to the files to be excluded from the new gold image.]
-debug - run in debug mode.
-printdiskusage - log the debug information for the disk usage.
-printmemory - log the debug information for the memory usage.
-printtime - log the debug information for the time usage.
-waitForCompletion - wait for the completion of the installation, instead of spawning the installer and returning the console prompt.
-noconfig - do not execute the config tools.
-noconsole - suppress the display of messages in the console. The console is not allocated.
-ignoreInternalDriverError - ignore any internal driver errors.
-noCopy - perform the configuration without copying the software on to the remote nodes.
-applyRU - apply release update to the Oracle home.
-applyOneOffs - apply one-off patch to the Oracle home. Multiple one-off patches can be passed as a comma separated list of locations.
Software installation and configuration
Response file method
# Response-file configuration
cat > /ups/soft/db_install.rsp <<-EOF
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0
oracle.install.option=INSTALL_DB_SWONLY
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=racdba
oracle.install.db.rootconfig.executeRootScript=false
oracle.install.db.rootconfig.configMethod=
oracle.install.db.rootconfig.sudoPath=
oracle.install.db.rootconfig.sudoUserName=
oracle.install.db.CLUSTER_NODES=o19c1,o19c2
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=o19cdb
oracle.install.db.config.starterdb.SID=o19cdb
oracle.install.db.ConfigureAsContainerDB=true
oracle.install.db.config.PDBName=pdb_orcl
oracle.install.db.config.starterdb.characterSet=AL32UTF8
oracle.install.db.config.starterdb.memoryOption=false
oracle.install.db.config.starterdb.memoryLimit=2048
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.password.ALL=oracle
oracle.install.db.config.starterdb.password.SYS=
oracle.install.db.config.starterdb.password.SYSTEM=
oracle.install.db.config.starterdb.password.DBSNMP=
oracle.install.db.config.starterdb.password.PDBADMIN=oracle
oracle.install.db.config.starterdb.managementOption=
oracle.install.db.config.starterdb.omsHost=
oracle.install.db.config.starterdb.omsPort=
oracle.install.db.config.starterdb.emAdminUser=
oracle.install.db.config.starterdb.emAdminPassword=
oracle.install.db.config.starterdb.enableRecovery=false
oracle.install.db.config.starterdb.storageType=ASM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
oracle.install.db.config.asm.diskGroup=DATA
oracle.install.db.config.asm.ASMSNMPPassword=oracle
EOF
${ORACLE_HOME}/runInstaller -silent -waitForCompletion -ignorePrereqFailure -responseFile /ups/soft/db_install.rsp -ignoreSysPrereqs
Command-line method
su - oracle
cd /ups/soft/database
# Run ./runInstaller with command-line parameters
./runInstaller -ignorePrereq -waitforcompletion -silent \
-responseFile ${DB_HOME_DIR}/install/response/db_install.rsp \
oracle.install.option=${INSTALL_OPTION} \
ORACLE_HOSTNAME=${ORA_HOST_NAME} \
UNIX_GROUP_NAME=${UNIX_GROUP_NAME} \
INVENTORY_LOCATION=${ORA_INVENTORY} \
SELECTED_LANGUAGES=en,en_US,zh_CN \
ORACLE_HOME=${DB_HOME_DIR} \
ORACLE_BASE=${DB_BASE_DIR} \
oracle.install.db.InstallEdition=${INSTALL_EDITION} \
oracle.install.db.OSDBA_GROUP=${OSDBA_GROUP} \
oracle.install.db.OSOPER_GROUP=${OSOPER_GROUP} \
oracle.install.db.OSBACKUPDBA_GROUP=${OSBACKUPDBA_GROUP} \
oracle.install.db.OSDGDBA_GROUP=${OSDGDBA_GROUP} \
oracle.install.db.OSKMDBA_GROUP=${OSKMDBA_GROUP} \
oracle.install.db.OSRACDBA_GROUP=${OSRACDBA_GROUP} \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true ${GI_PARAMS}
# 2) Run the root scripts
su - root
sh ${ORA_INVENTORY}/orainstRoot.sh && sh ${DB_HOME_DIR}/root.sh
3.4 Apply Software Patches Before Creating the Database (Optional)
Run the patch command as root: to patch the GI home use the GI $ORACLE_HOME/OPatch/opatchauto,
and to patch the DB home use the DB $ORACLE_HOME/OPatch/opatchauto
su - root
/ups/app/19c/grid/OPatch/opatchauto apply /oracle/soft/<patchid> -oh /ups/app/19c/grid
# Since no instance has been created yet, OCR holds no DB home information, so opatchauto cannot discover the DB home; patch it by specifying the home directory explicitly
su - root
/ups/app/oracle/database/19c/db_1/OPatch/opatchauto apply /oracle/soft/<patchid> -oh /ups/app/oracle/database/19c/db_1
3.5 Configure the Database with DBCA
GUI database creation
su - oracle
dbca
Decide whether to create a PDB as needed
Select all components here, including for the PDB, to avoid having to install components separately in the PDB later when the application needs them.
Configure SGA + PGA according to the actual environment
Configure the database block size and process count
Recommendation: on 12c and later, keep the default character set and national character set (AL32UTF8 and AL16UTF16), and configure PDB character sets separately as the application requires.
Silent database creation
# Data_Warehouse.dbc|General_Purpose.dbc|New_Database.dbt
TEMPLATE_NAME='General_Purpose.dbc'
# SINGLE | RAC | RACONENODE
DB_CONING_TYPE='RAC'
DB_NAME='orcl'
# AL32UTF8 | ZHS16GBK
CHARACTERSET='AL32UTF8'
SYS_PWD='oracle'
# MULTIPURPOSE | OLTP
DB_TYPE='OLTP'
# ASM |FS
STORAGE_TYPE='ASM'
MEMORY_PCT='40'
# Datafile destination (disk group)
DATA_DIR='DATA'
USE_OMF='true'
ISCDB='true'
INITPARAMS='processes=1000'
NODE_LIST='node1,node2'
# su - oracle
dbca -silent -ignorePreReqs -createDatabase \
-databaseConfigType ${DB_CONING_TYPE} \
-datafileJarLocation ${ORACLE_HOME}/assistants/dbca/templates/ \
-templateName ${TEMPLATE_NAME} \
-gdbname ${DB_NAME} -sid ${DB_NAME} -responseFile NO_VALUE \
-characterSet ${CHARACTERSET} \
-sysPassword "${SYS_PWD}" \
-systemPassword "${SYS_PWD}" \
-databaseType ${DB_TYPE} \
-automaticMemoryManagement false \
-memoryPercentage ${MEMORY_PCT} \
-storageType ${STORAGE_TYPE} \
-datafileDestination "+${DATA_DIR}" \
-recoveryAreaDestination NONE \
-emConfiguration NONE \
-initParams db_create_file_dest="+${DATA_DIR}" \
-initParams ${INITPARAMS} \
-createAsContainerDatabase ${ISCDB} \
-nodelist ${NODE_LIST} -asmsnmpPassword "${SYS_PWD}" \
-useOMF ${USE_OMF} ${LISTENER_PARAMS}
# In DBCA silent mode, the control_files initialization parameter can be passed to change the control file locations:
-initparams control_files='+DATA'
For example:
dbca -silent -ignorePreReqs -createDatabase -databaseConfigType RAC -datafileJarLocation /ups/oracle/database/product/12.2/db_1/assistants/dbca/templates/ -templateName General_Purpose.dbc -gdbname o12cdb -sid o12cdb -responseFile NO_VALUE -characterSet AL32UTF8 -sysPassword oracle -systemPassword oracle -databaseType OLTP -automaticMemoryManagement false -memoryPercentage 40 -storageType ASM -datafileDestination '+DATA' -recoveryAreaDestination NONE -emConfiguration NONE -initParams db_create_file_dest='+DATA' -initParams processes=1000 -initparams control_files='+CTL1','+CTL2' -createAsContainerDatabase true -nodelist orc1,orc2 -asmsnmpPassword oracle -useOMF true
Response file
dbca -createDatabase -silent -ignorePreReqs -ignorePrereqFailure -responseFile /ups/soft/dbca.rsp
Check service status
srvctl config database -d orcl
srvctl status database -d orcl
. /home/grid/.bash_profile
crsctl status res -t
Delete a database
${ORACLE_HOME}/bin/dbca -silent -ignorePreReqs -ignorePrereqFailure -deleteDatabase -sourceDB o19c -sysDBAUserName sys -sysDBAPassword oracle
Add an instance
# Add an instance
${ORACLE_HOME}/bin/dbca -silent -addInstance -gdbName o19c -nodeName o19c2 -instanceName o19c2 -sysDBAUserName sys -sysDBAPassword oracle
5. Database Configuration Adjustments
5.1 Adjust the Control Files
With ASM storage, only one control file exists by default after database creation, so two more must be added manually. Set control_files to three files stored in different VG/ASM disk groups, i.e. create three VG/ASM disk groups.
-- 1. Query the current control file names
sqlplus "/ as sysdba"
SQL> select value from v$parameter where name='control_files';
-- 2. Back up the parameter file
SQL> set verify off
SQL> column timecol new_value timestamp noprint
SQL> SELECT to_char(sysdate,'yyyymmddhh24') timecol FROM dual;
SQL> create pfile='/tmp/pfile_&timestamp..ora' from spfile;
-- 3. Add the control files in the spfile
-- Syntax: alter system set control_files='[existing control file path from step 1]','[new file path]'... scope=spfile sid='*';
-- Example:
SQL> alter system set control_files='+DATA/ORCL/CONTROLFILE/current.279.1033659949','+CTL1','+CTL2','+CTL3' scope=spfile sid='*';
SQL> quit;
# 4. Restart the database to the NOMOUNT state
su - oracle
$ srvctl stop database -d orcl
# Perform the change on one of the instances
$ srvctl start instance -d orcl -i orcl1 -o nomount
# 5. Restore the control file with RMAN
$ rman target /
RMAN> restore controlfile from '+DATA/ORCL/CONTROLFILE/current.279.1033659949';
Starting restore at 19-MAY-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1153 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+CTL1/ORCL/CONTROLFILE/current.267.880113159
output file name=+CTL2/ORCL/CONTROLFILE/current.269.880122971
output file name=+CTL3/ORCL/CONTROLFILE/current.270.880122971
Finished restore at 19-MAY-15
RMAN> sql 'alter database mount';
sql statement: alter database mount
released channel: ORA_DISK_1
RMAN> sql 'alter database open';
sql statement: alter database open
RMAN> exit
# 6. Update the spfile with the control files created in step 5
sqlplus "/ as sysdba"
SQL> alter system set control_files='+CTL1/ORCL/CONTROLFILE/current.267.880113159' ,'+CTL2/ORCL/CONTROLFILE/current.269.880122971', '+CTL3/ORCL/CONTROLFILE/current.270.880122971' scope=spfile sid='*';
-- Back up the spfile again (optional)
SQL> create pfile='/tmp/pfile1.ora' from spfile;
SQL> quit;
# 7. Stop DB instance 1
$ srvctl stop instance -d orcl -i orcl1
# 8. Start the database instances
$ srvctl start instance -d orcl -i orcl1
$ srvctl start instance -d orcl -i orcl2
# 9. Verify
sqlplus "/ as sysdba"
SQL> select value from v$parameter where name='control_files';
SQL> show parameter control_file
su - oracle
$ . /home/grid/.bash_profile
$ crsctl status res -t
5.2 Adjust the Redo Log Configuration
Unless there are special requirements, create 6 redo groups of 1 GB each, with 2 members per group placed in different VG/ASM disk groups, i.e. create 2 VG/ASM disk groups. Avoid the .log suffix for member names to prevent accidental deletion.
sqlplus "/ as sysdba"
-- 1. Query the current redo log files
set lines 168
col member for a61
col INSTANCE_NAME for a13
col status for a10
col archived for a8
SELECT
i.instance_name
, i.thread#
, f.group#
, f.member
, f.type
, l.status
, l.bytes/1048576 size_mb
, l.archived
FROM
gv$logfile f
, gv$log l
, gv$instance i
WHERE
f.group# = l.group#
AND l.thread# = i.thread#
AND i.inst_id = f.inst_id
AND f.inst_id = l.inst_id
ORDER BY
i.instance_name
, f.group#
, f.member;
-- 2. Add log groups: 2 members per group, 1 GB per log
-- Instance 1 (thread 1)
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 12 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 13 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 14 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 15 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 16 ('+REDO1','+REDO2') size 1024m;
-- Instance 2 (thread 2)
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 21 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 22 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 23 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 24 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 25 ('+REDO1','+REDO2') size 1024m;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 26 ('+REDO1','+REDO2') size 1024m;
-- 3. Switch logs until the old groups show status 'INACTIVE'; they can then be dropped
alter system switch logfile;
alter system checkpoint;
-- 4. Drop the obsolete log groups one by one (optional)
ALTER DATABASE drop LOGFILE GROUP 1;
ALTER DATABASE drop LOGFILE GROUP 2;
......
-- 5. Verify again
set lines 168
col member for a61
col INSTANCE_NAME for a13
col status for a10
col archived for a8
SELECT
i.instance_name
, i.thread#
, f.group#
, f.member
, f.type
, l.status
, l.bytes/1048576 size_mb
, l.archived
FROM
gv$logfile f
, gv$log l
, gv$instance i
WHERE
f.group# = l.group#
AND l.thread# = i.thread#
AND i.inst_id = f.inst_id
AND f.inst_id = l.inst_id
ORDER BY
i.instance_name
, f.group#
, f.member;
5.3 Adjust the Process Limit (Optional)
-- Back up the parameter file
SQL> set verify off
SQL> column timecol new_value timestamp noprint
SQL> SELECT to_char(sysdate,'yyyymmddhh24') timecol FROM dual;
SQL> create pfile='/tmp/pfile_&timestamp..ora' from spfile;
-- Adjust the parameters; processes requires an instance restart to take effect
SQL> alter system set processes=3000 scope=spfile sid='*';
SQL> alter system set open_cursors=3000 scope=both sid='*';
-- Confirm after the change
select name,
value,
p.ISDEFAULT,
p.ISSES_MODIFIABLE,
p.ISSYS_MODIFIABLE,
p.ISINSTANCE_MODIFIABLE,
p.DESCRIPTION
from v$parameter p
where name in ('processes', 'open_cursors');
------------------------------------------------------------
-- Parameter descriptions:
processes    - user processes
open_cursors - max # cursors per session
5.4 Cluster Database Memory Configuration
The OS HugePages feature conflicts with the database's AMM:
- With OS HugePages disabled, the database can use Automatic Memory Management (AMM)
- With OS HugePages enabled, the database must use Automatic Shared Memory Management (ASMM)
Automatic Memory Management (AMM) on 11g & 12c (Doc ID 443746.1)
5.4.1 DB Instance Memory Configuration (Optional)
[hugepages_settings.sh calculation script](Doc ID 401749.1)
Configure HugePages (disable AMM)
- Set memlock (add a memlock limit; the value should be slightly smaller than physical memory)
vi /etc/security/limits.conf
* soft memlock 60397977
* hard memlock 60397977
- Disable AMM in the DB instance
-- Set the initialization parameters MEMORY_TARGET and MEMORY_MAX_TARGET to 0
- Compute the recommended value of vm.nr_hugepages
sh ./hugepages_settings.sh
- Set the vm.nr_hugepages parameter
vi /etc/sysctl.d/97-oracle.conf
vm.nr_hugepages = 1496
- Stop all instances and reboot the server
- Verify the configuration
# After all database instances are up, check the HugePages status
grep HugePages /proc/meminfo
# To confirm the configuration is effective: HugePages_Free should be less than HugePages_Total, and HugePages_Free and HugePages_Rsvd should both be less than the number of pages allocated for the SGAs.
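A small sketch automating the check above; the sample values stand in for /proc/meminfo on a configured host (on a real system, parse /proc/meminfo directly):

```shell
# Parse the HugePages counters and apply the sanity rules described above.
meminfo="HugePages_Total: 1496
HugePages_Free: 485
HugePages_Rsvd: 446"
total=$(echo "$meminfo" | awk '/HugePages_Total/{print $2}')
free=$(echo "$meminfo"  | awk '/HugePages_Free/{print $2}')
rsvd=$(echo "$meminfo"  | awk '/HugePages_Rsvd/{print $2}')
# Pages actually backing SGAs = Total - Free + Rsvd
used=$(( total - free + rsvd ))
[ "$free" -lt "$total" ] && echo "OK: HugePages in use (${used} pages backing SGAs)"
```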
5.4.2 ASM Instance Memory Configuration (Optional)
By default the ASM instance also uses AMM, but because an ASM instance does not need a large SGA, HugePages brings little benefit for it.
The shared pool an ASM instance needs grows with the capacity of its ASM disk groups; to avoid ORA-4031 errors, adjust the ASM instance memory as needed.
-- 1. AMM: recommended for ASM versions below 12.1; configure as follows
SQL> alter system set memory_max_target=4096m scope=spfile;
SQL> alter system set memory_target=1536m scope=spfile;
-- Back up the parameter file
su - grid
sqlplus / as sysdba
create pfile='/home/grid/pfile.ora' from spfile;
SQL> alter system set memory_max_target=4096m scope=spfile;
SQL> alter system set memory_target=1536m scope=spfile;
-- Check and confirm
sqlplus / as sysdba
show parameter target
quit;
-- 2. ASMM: query the ASM storage capacity and redundancy, then compute SHARED_POOL_SIZE
SELECT SUM(bytes)/(1024*1024*1024) FROM V$DATAFILE;
SELECT SUM(bytes)/(1024*1024*1024) FROM V$LOGFILE a, V$LOG b
WHERE a.group#=b.group#;
SELECT SUM(bytes)/(1024*1024*1024) FROM V$TEMPFILE
WHERE status='ONLINE';
- For disk groups using external redundancy, every 100 GB of space needs 1 MB of extra shared pool plus 2 MB.
- For disk groups using normal redundancy, every 50 GB of space needs 1 MB of extra shared pool plus 4 MB.
- For disk groups using high redundancy, every 33 GB of space needs 1 MB of extra shared pool plus 6 MB.
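The three sizing rules above can be captured in a small helper (a sketch; the function name is illustrative, and db_space_gb is the total of the three queries above):

```shell
# Extra SHARED_POOL_SIZE (MB) needed by the ASM instance, per the rules above.
shared_pool_extra_mb() {
  local db_space_gb=$1 redundancy=$2
  case "$redundancy" in
    external) echo $(( db_space_gb / 100 + 2 )) ;;   # 1 MB per 100 GB, plus 2 MB
    normal)   echo $(( db_space_gb / 50  + 4 )) ;;   # 1 MB per 50 GB, plus 4 MB
    high)     echo $(( db_space_gb / 33  + 6 )) ;;   # 1 MB per 33 GB, plus 6 MB
    *)        echo 0 ;;
  esac
}
shared_pool_extra_mb 1000 normal   # 1 TB of DB space, normal redundancy -> 24
```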
-- 3. On 11.2.0.3/11.2.0.4
1) If PROCESSES parameter is explicitly set:
The MEMORY_TARGET should be set to no less than:
256M + PROCESSES * 132K (64bit)
256M + PROCESSES * 120K (32bit)
2) If PROCESSES parameter is not set:
The MEMORY_TARGET should be set to no less than:
256M + (available_cpu_cores * 80 + 40) * 132K (64bit)
256M + (available_cpu_cores * 80 + 40) * 120K (32bit)
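The two formulas above can be evaluated with a small helper (a sketch for the 64-bit case only; function names are illustrative):

```shell
# Minimum MEMORY_TARGET (MB) on 11.2.0.3/11.2.0.4, 64-bit, per the formulas above.
min_mt_with_processes() {    # PROCESSES explicitly set; $1 = PROCESSES
  echo $(( (256 * 1024 + $1 * 132) / 1024 ))
}
min_mt_without_processes() { # PROCESSES not set; $1 = available CPU cores
  echo $(( (256 * 1024 + ($1 * 80 + 40) * 132) / 1024 ))
}
min_mt_with_processes 1024   # -> 388
min_mt_without_processes 16  # 16 cores
```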
5.5 Configure Archive Log Mode (Optional)
-- 1. Back up the parameter file
create pfile='/home/oracle/pfile2.ora' from spfile;
-- 2. Configure the archive destination
alter system set log_archive_dest_1='location=+ARCH' scope=spfile sid='*';
-- 3. Stop the database
$ srvctl stop instance -d orcl -i orcl1
$ srvctl stop instance -d orcl -i orcl2
-- 4. Start one instance in mount state
$ srvctl start instance -d orcl -i orcl1 -o mount
-- 5. Change the database setting
sqlplus / as sysdba
alter database archivelog;
-- 6. Restart the database
$ srvctl stop instance -d orcl -i orcl1
$ srvctl start instance -d orcl -i orcl1
$ srvctl start instance -d orcl -i orcl2
-- 7. Check and confirm
archive log list
Watch the space growth of the archive destination.
# Periodic cleanup script
#!/bin/sh
export LANG=C
ECHO="echo"
export OSTYPE=$(uname -s)
case ${OSTYPE} in
"HP-UX")
;;
"AIX")
;;
"SunOS")
;;
*) ECHO="echo -e"
esac
# Load the profile
PROFILE="$1"
[[ -f ${PROFILE} ]] && . ${PROFILE}
# Instance name
INST_NAME="$2"
[[ ! -z "${INST_NAME}" ]] && export ORACLE_SID=${INST_NAME}
[[ -z "${LOG_DIR}" ]] && LOG_DIR='/tmp'
[[ -z "${SQLPLUS_CONN}" ]] && SQLPLUS_CONN='/ as sysdba'
[[ -z "${RMAN_CONN}" ]] && RMAN_CONN='/'
# Archive log retention (hours)
KEEP_TIME_HOURS="$3"
[[ -z "${KEEP_TIME_HOURS}" ]] && KEEP_TIME_HOURS='48'
# Storage usage threshold (%)
STORE_PCT="$4"
[[ -z "${STORE_PCT}" ]] && STORE_PCT='80'
# Whether to enable force
[[ ! -z "$5" ]] && FORCE_FLAG="force"
# Log file for this run
LOG_FILE="${LOG_DIR}/rman_clean_arch_$(date +%Y%m%d%H).log"
# Collect archive destination usage
ARCH_DEST=$(${ECHO} "set heading off feedback off timing off\nselect trim(substr(value,10)) from v\$parameter where name='log_archive_dest_1';"|sqlplus -S ${SQLPLUS_CONN}|perl -lane 'print if !/@|:|^$/'|tail -1)
# ASM environment: get the usage of the ASM archive disk group
ARCH_DEST_PCT=$(${ECHO} "set heading off feedback off \nselect trim(round((total_mb-usable_file_mb)/total_mb*100,0)) pct from v\$asm_diskgroup_stat where name = (select nvl(substr(substr(value,1,instr(value,'/',1,1)-1),11),substr(value,11)) from v\$parameter where name ='log_archive_dest_1');"|sqlplus -S ${SQLPLUS_CONN}|perl -lane 'print if !/@|:|^$/'|tail -1)
# If empty, fall back to the usage of a regular file system
if [[ -z "${ARCH_DEST_PCT}" ]]; then
# Handle the case where the archive destination is not a dedicated file system
while true;do
df -P |grep -iq "${ARCH_DEST}$"
[[ "$?" = "0" ]] && break || ARCH_DEST=$(dirname ${ARCH_DEST})
done
ARCH_DEST_PCT=$(df -P |grep -i "${ARCH_DEST}$" |awk '{print $5*1}')
fi
if [[ "${ARCH_DEST_PCT}" -ge "${STORE_PCT}" ]]; then
${ORACLE_HOME}/bin/rman log=${LOG_FILE} append <<-EOF
export NLS_DATE_FORMAT='yyyy-mm-dd HH24:MI:SS';
connect target ${RMAN_CONN}
run{
sql "alter session set optimizer_mode=RULE";
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt ${FORCE_FLAG} archivelog until time 'sysdate-${KEEP_TIME_HOURS}/24';
}
EOF
fi
# select name,sequence#,status,first_time from v$archived_log order by sequence# desc;
# STATUS=A Available: the archived log file is valid
# STATUS=U Unavailable: the archived log file is invalid
# STATUS=D Deleted: the archived log has been deleted
# STATUS=X eXpired: the archived log was removed with OS commands
5.6 Enable Block Change Tracking (Optional)
su - oracle
sqlplus / as sysdba
SQL> alter database enable block change tracking using file '+DATA';
Database altered.
-- Check and confirm
SQL> select status, filename from v$block_change_tracking;
STATUS
----------
FILENAME
--------------------------------------------------------------------------------
ENABLED
+DATA/drmdb/changetracking/ctf.258.990434395
5.7 Create a PDB (Optional)
-- 1. Create the PDB
sqlplus / as sysdba
alter session set container=CDB$ROOT;
-- alter session set container=PDBORCL;
create pluggable database PDBORCL admin user pdbadmin identified by oracle create_file_dest='+DATA';
-- 2. Open the PDB
alter pluggable database PDBORCL open instances=all;
alter pluggable database PDBORCL save state;
-- 3. Check
set lines 168 pages 99
col NAME for a12
select inst_id,con_id,name,open_mode from gv$pdbs where name='PDBORCL';
set lines 168 pages 168
col con_name for a18
col instance_name for a18
col restricted for a12
select con_id,con_name, instance_name,state,restricted from dba_pdb_saved_states where con_name='PDBORCL' order by 1;
-- 4. Change the database character set
alter pluggable database PDBORCL close immediate instances=all;
alter pluggable database PDBORCL open read write restricted;
alter session set container=PDBORCL; -- switch to the target PDB
select userenv('language') from dual;
show con_id con_name
alter database character set internal_use zhs16gbk;
alter pluggable database PDBORCL close;
alter pluggable database PDBORCL open instances=all;
-- alter pluggable database PDBORCL save state;
-- 5. Verify the change
alter session set container=PDBORCL;
set lines 168 pages 168
col parameter for a30
col value for a30
select * from nls_database_parameters;
-- 6. Drop the PDB
DROP PLUGGABLE DATABASE PDBORCL
INCLUDING DATAFILES;
5.8 Resource Manager Plan (Optional)
-- Set PDB resource limits
su - oracle
alter session set container=PDBYYY;
-- ALTER SYSTEM SET db_performance_profile='MEDIUM' SCOPE=SPFILE PDB='PDBYYY';
alter system set db_performance_profile=medium scope=spfile;
alter pluggable database PDBYYY close immediate instances=all;
alter pluggable database PDBYYY open instances=all;
-- Check
col name for a28
select inst_id, name, con_id, value, ispdb_modifiable
from gv$system_parameter2 where name = 'db_performance_profile';
5.9 Create Tablespaces
-- Connect to the PDB
sqlplus / as sysdba
alter session set container=PDBYYY;
or
sqlplus pdbadmin/[email protected]/pdbyyy
-- Create the tablespace
create tablespace main datafile '+DATA' size 32764M autoextend off;
alter tablespace main add datafile '+DATA' size 32764M autoextend off;
-- Check
col file_name for a82
select FILE_ID,FILE_NAME,TABLESPACE_NAME,BYTES/1024/1024 Mb_size,AUTOEXTENSIBLE,MAXBYTES/1024/1024 max_size from dba_data_files;
5.10 Create Users and Grant Privileges
-- Connect to the target PDB
sqlplus / as sysdba
alter session set container=PDBYYY;
or
sqlplus pdbadmin/[email protected]/pdbyyy
-- Create the user
create user YYY identified by "yy" default tablespace main quota unlimited on main;
-- Grant privileges
grant connect to YYY;
grant resource to YYY;
grant create view to YYY;
-- Check
col username for a18
col ACCOUNT_STATUS for a18
col PROFILE for a12
select USERNAME,ACCOUNT_STATUS,DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE,LOCAL_TEMP_TABLESPACE,PROFILE from dba_users where username='YYY';
5.11 Save PDB Open State (Optional)
By default a PDB must be opened manually after the CDB starts. Once the PDB state is saved, the PDB is brought up automatically whenever the CDB restarts.
1) Configure the PDB to start with the CDB
-- Version 12.1.0.1
# As SYS, create the following trigger to open the PDBs automatically:
conn / as sysdba
CREATE TRIGGER open_all_pdbs
AFTER STARTUP
ON DATABASE
BEGIN
EXECUTE IMMEDIATE 'alter pluggable database all open';
END open_all_pdbs;
/
-- Since 12.1.0.2 a command saves the PDB state so it starts with the CDB
alter pluggable database [pdbname] save state;
alter pluggable database ORCLPDB save state instances=all;
or
alter pluggable database all save state instances=all;
-- Check the result
set lines 168 pages 999
col CON_NAME for a18
col INSTANCE_NAME for a18
col RESTRICTED for a12
select con_id,CON_NAME,INSTANCE_NAME,STATE,RESTRICTED from dba_pdb_saved_states;
CON_ID CON_NAME INSTANCE_NAME STATE RESTRICTED
---------- ------------------ ------------------ -------------- ------------
3 ORCLPDB orcl2 OPEN NO
3 ORCLPDB orcl1 OPEN NO
SQL>
# Disable starting the PDB with the CDB
alter pluggable database ORCLPDB discard state;
SQL> alter pluggable database ORCLPDB discard state;
Pluggable database altered.
SQL> select con_id,CON_NAME,INSTANCE_NAME,STATE,RESTRICTED from dba_pdb_saved_states;
CON_ID CON_NAME INSTANCE_NAME STATE RESTRICTED
---------- ------------------ ------------------ -------------- ------------
3 ORCLPDB orcl2 OPEN NO
SQL> alter pluggable database ORCLPDB save state;
Pluggable database altered.
SQL> select con_id,CON_NAME,INSTANCE_NAME,STATE,RESTRICTED from dba_pdb_saved_states;
CON_ID CON_NAME INSTANCE_NAME STATE RESTRICTED
---------- ------------------ ------------------ -------------- ------------
3 ORCLPDB orcl2 OPEN NO
3 ORCLPDB orcl1 OPEN NO
SQL>
5.12 PDB Startup and Shutdown
alter pluggable database [pdbname] [open|close] [instances=('instance_name'|'instance_name1')]
alter pluggable database ALBINPDB close INSTANCES=('o12c');
alter pluggable database ALBINPDB open INSTANCES=('o12c');
5.13 Create AWR Snapshots at the PDB Level
- Set awr_pdb_autoflush_enabled=true at the PDB level
alter session set container=PDB1;
alter system set awr_pdb_autoflush_enabled=true;
- Configure the AWR snapshot interval correctly
select * from cdb_hist_wr_control;
DBID        SNAP_INTERVAL        RETENTION            TOPNSQL  CON_ID
2580889417  +40150 00:01:00.0    +00008 00:00:00.0    DEFAULT  3
execute dbms_workload_repository.modify_snapshot_settings(interval => 60);
select * from cdb_hist_wr_control;
DBID        SNAP_INTERVAL        RETENTION            TOPNSQL  CON_ID
2580889417  +00000 01:00:00.0    +00008 00:00:00.0    DEFAULT  3
- Set AWR_SNAPSHOT_TIME_OFFSET to 1000000 to avoid performance problems when many PDBs take snapshots at the same time
alter system set AWR_SNAPSHOT_TIME_OFFSET=1000000 scope=both;
- List the generated snapshots
select * from awr_pdb_snapshot;
- Create the AWR report
@?/rdbms/admin/awrrpt
5.14 For 12.2 non-CDB environments, adjust _cursor_obsolete_threshold
- Starting with database release 12.2, the default of _cursor_obsolete_threshold was raised to 8192.
- As a result parent cursors are not obsoleted, so the child cursors under a parent can grow well beyond 1024 (the 12.1 default), causing cursor mutex contention.
Default values of the parameter:
11.2.0.3: _cursor_obsolete_threshold=100
11.2.0.4: _cursor_obsolete_threshold=1024
12.1: _cursor_obsolete_threshold=1024
12.2: _cursor_obsolete_threshold=8192
From 12.2 onward the default of _cursor_obsolete_threshold was raised sharply (from 1024 to 8192) to support 4096 PDBs (12.1 supported only 252). The value is the upper limit for obsoleting parent cursors in a multitenant environment and cannot exceed 8192.
This setting is not appropriate for non-CDB environments, so for those databases the parameter should be set manually to the 12.1 default of 1024. The default of 1024 works for non-CDB environments; if problems still occur, the parameter can be tuned further on a case-by-case basis.
Recommendation
For 12.2 non-CDB environments, set the parameter as follows to avoid mutex contention caused by high version counts:
alter system set "_cursor_obsolete_threshold"=1024 scope=spfile;
6. Common Problems
6.1 SSH
Passwordless SSH (user equivalence) issues:
1. In /etc/hosts, use lowercase letters for all host names; the SCAN name should not contain digits and should be shorter than 15 characters.
2. Check that the firewall and SELinux are disabled.
3. Check that the paths of the ssh, scp and related command-line tools are correct.
4. Check the permissions on the $HOME/.ssh directory and the files under it.
5. OpenSSH version 8.x
Symptom
When configuring 19c grid infrastructure by running gridSetup.sh, the SSH connectivity step fails with the following error:
[INS-06006] Passwordless SSH connectivity not set up between the following node(s): [
The error cannot be ignored, so the CRS installation fails.
However, the SSH setup itself is successful, and manual ssh between the nodes works.
Run gridSetup.sh in debug mode:
$ gridSetup.sh -debug | tee /tmp/gridsetup.log
In the debug trace file "/tmp/gridsetup.log", the following scp call can be seen to fail:
[Worker 0] [ 2019-05-31 14:40:49.921 CST ] [UnixSystem.remoteCopyFile:848] UnixSystem: **/usr/local/bin/scp -p :'/tmp/GridSetupActions2019-05-31_02-39-46PM/CVU_19.0.0.0.0_grid/scratch/getFileInfo12906.out' /tmp/GridSetupActions2019-05-31_02-39-46PM/.getFileInfo12906.out**
[Thread-440] [ 2019-05-31 14:40:49.921 CST ] [StreamReader.run:62] In StreamReader.run
[Worker 0] [ 2019-05-31 14:40:49.921 CST ] [RuntimeExec.runCommand:294] runCommand: Waiting for the process
[Thread-439] [ 2019-05-31 14:40:49.921 CST ] [StreamReader.run:62] In StreamReader.run
[Thread-440] [ 2019-05-31 14:40:50.109 CST ] [StreamReader.run:66] **ERROR>protocol error: filename does not match request**
[Worker 0] [ 2019-05-31 14:40:50.109 CST ] [RuntimeExec.runCommand:296] runCommand: process returns 1
[Worker 0] [ 2019-05-31 14:40:50.109 CST ] [RuntimeExec.runCommand:323] RunTimeExec: error>
[Worker 0] [ 2019-05-31 14:40:50.109 CST ] [RuntimeExec.runCommand:326] **protocol error: filename does not match request**
Cause
OpenSSH has been upgraded to 8.x. Note that OpenSSH may behave differently on different platforms/OSes: for example, on AIX the problem appears with OpenSSH 7.5, and on SLES Linux 12 SP4 with OpenSSH_7.2p2.
# ssh -V
OpenSSH_8.0p1, OpenSSL 1.0.2r 26 Feb 2019
The following command can produce the same error on OpenSSH 8.0:
# scp -p <racnode2>:"'/tmp/test.txt'" /tmp/test.txt
protocol error: filename does not match request
The error can be avoided by adding the "-T" option to the command:
# scp -T -p <racnode2>:"'/tmp/test.txt'" /tmp/test.txt
test.txt 100% 2 0.1KB/s 00:00
To mitigate CVE-2019-6111, OpenSSH 8.0 added a client-side check that the file names sent by the server match what was requested on the command line; if client and server wildcard expansion differ, the client may reject files from the server. For this reason OpenSSH 8.0 introduced a new -T option for scp that disables these client-side checks.
Fix
Workaround:
Before the installation, as root (adjust the path if your scp command is located elsewhere):
# Rename the original scp command.
mv /usr/bin/scp /usr/bin/scp.orig
# Create a new file </usr/bin/scp>.
vi /usr/bin/scp
# Add the following line to the newly created </usr/bin/scp>.
/usr/bin/scp.orig -T $*
# Change the file permissions.
chmod 555 /usr/bin/scp
After the installation:
mv /usr/bin/scp.orig /usr/bin/scp
Reference
GI installation fails with INS-06006 after OpenSSH is upgraded to 8.x (Doc ID 2639907.1)
6.2 On 19c, copy oui-patch.xml from node 1 to the other nodes
To prevent patch apply failures on the other nodes caused by a missing oui-patch.xml file, copy it to all other nodes and make sure the file permissions are correct.
# 1. Copy oui-patch.xml to the other nodes to avoid later patching problems
-- "During 19.x GI installation, the file 'oui-patch.xml' will be created under the central inventory directory on the OUI node (node where gridSetup.sh was invoked) but not on the other nodes."
# node1 (the node where gridSetup.sh was run)
export INV_DIR="$(grep 'inventory_loc' /etc/oraInst.loc|awk -F= '{print $NF}')/ContentsXML"
scp ${INV_DIR}/oui-patch.xml o19c2:${INV_DIR}/oui-patch.xml
ssh o19c2 "chmod 660 ${INV_DIR}/oui-patch.xml && chown grid:oinstall ${INV_DIR}/oui-patch.xml"
6.3 Older clients fail to connect with ORA-28040
Symptom
ORA-28040: No matching authentication protocol
Cause
The password version (dba_users.password_versions) of the client does not match that of the server.
Fix
Method 1
Add settings to the sqlnet.ora file
On the database server, add the following parameters to the $ORACLE_HOME/network/admin/sqlnet.ora file of the DB home:
# Versions below 12c
# SQLNET.ALLOWED_LOGON_VERSION=8
# 12c and above
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
- SQLNET.ALLOWED_LOGON_VERSION_SERVER: controls which client versions may connect to the 12c database (client --> 12c server)
- SQLNET.ALLOWED_LOGON_VERSION_CLIENT: controls which database versions the 12c database may connect to (12c server --> other db servers), e.g. which Oracle versions are reachable over a DB LINK.
Note: for both single instance and RAC, it is the sqlnet.ora in this directory.
Then change the user passwords again.
Method 2
Reference
不同 Oracle 版本的客戶端/伺服器互操作性支援矩陣 (Doc ID 1945782.1)
12c Database Alert.log File Shows The Message: Using Deprecated SQLNET.ALLOWED_LOGON_VERSION Parameter (Doc ID 2111876.1)
6.4 On 19c, node reboots when cssd accesses a voting disk on AFD
Symptom
- During a 2-node 19c GI installation, root.sh fails with the errors below, the first node is evicted from the cluster, and the installer window disappears. The node can be restarted normally.
- The Oracle ASM Filter Driver (AFD) feature is in use.
Cause
Bug 30006262
Occurs while upgrading to 19c on systems that use AFD.
Fix
- Following the installation instructions for patch 30006262, apply the patch on all nodes
- Rerun the root.sh script
- If an environment below 19c is to be upgraded to 19c and uses AFD, apply this patch to the 19c Oracle Home first, and only then run rootupgrade.sh, to avoid this problem.
Reference
19C: node reboots when cssd accesses a voting disk that uses AFD (Doc ID 2639902.1)
6.5 ASM mmon process trace file keeps growing
Symptom
The MMON trace file grows and is flooded with the message "dbkrfssm: mmon not open", repeated every 5 minutes.
[grid@node1 ~]$ ps -ef|grep 'asm_mmon' |grep -v grep |awk '{print $2}' |xargs lsof -p |grep ".trc$" |awk '{print $NF}'
/ups/oracle/grid/diag/asm/+asm/+ASM1/trace/+ASM1_mmon_14560.trc
dbkrfssm: mmon not open
*** 2020-11-12T09:12:19.222584+08:00
dbkrfssm: mmon not open
*** 2020-11-12T09:17:19.259359+08:00
dbkrfssm: mmon not open
dbkrfssm: mmon not open
*** 2020-11-12T09:27:19.341368+08:00
dbkrfssm: mmon not open
*** 2020-11-12T09:32:19.420488+08:00
dbkrfssm: mmon not open
dbkrfssm: mmon not open
Cause
Bug 28370061 : ASM MMON EVERY 5 MIN WRITES TRC MESSAGE 'DBKRFSSM: MMON NOT OPEN'
Fix
- Apply patch 28370061 to the GI_HOME to resolve the issue
- As a temporary workaround, the trace file can be deleted manually.
Reference
ASM mmon writes the trc message 'dbkrfssm: mmon not open' every 5 minutes (Doc ID 2610739.1)
6.6 diagsnap stack trace collection causes process hangs and node evictions
Diagsnap was introduced in 12.1.0.2 GI and is managed by CHM's osysmond. It collects additional OS statistics that CHM does not collect.
Diagsnap automatically collects baseline metrics every 15 minutes. In addition, the following events trigger a diagsnap metric collection:
- cssd reports missing network heartbeats (NHBs).
- gipcd detects that one or more NICs went up or down.
- gipcd rank events (NIC health status: GIPC marks the startup phase as -1, no packet transfer as 0, and any value below 90 means packet loss).
Trigger conditions
- Oracle Clusterware version is 12.1.0.2.160419 or a later 12.1.0.2, or 12.2.0.1, or 18
- diagsnap is enabled. Note: even in 12.1.0.2 and 12.2.0.1 environments where diagsnap was explicitly disabled, if the running release is older than the October 2017 (171017) quarterly release, diagsnap is re-enabled when upgrading to the October 2017 release.
- Starting with Oracle Clusterware 12.1.0.2.171017, 12.2.0.1.171017 and release 18, components other than diagsnap also generate stack traces.
Fix
# 1. Disable diagsnap
<GRID_HOME>/bin/oclumon manage -disable diagsnap
# 2. Disable pstack
<GRID_HOME>/bin/oclumon manage -disable pstack
Or edit the crf configuration file under $GI_HOME/crf/admin:
# 1. Stop the resource
# crsctl stop res ora.crf -init
# 2. vi $GI_HOME/crf/admin/crf$(hostname -s).ora
DIAGSNAP=DISABLE
PSTACK=DISABLE
# 3. Start the resource
# crsctl start res ora.crf -init
Reference
diagsnap and other components collecting stack traces of clusterware processes cause process hangs and node evictions (Doc ID 2440134.1)
Document 2469642.1 ALERT: pstack in diagsnap must be disabled on 12.1.0.2 and 12.2 GI Homes with PSUs/RUs from October 2017 through July 2018
See <Document 27068526.8> and <Document 2251437.1> for more details
6.7 Installer GUI window becomes unusable after running the root scripts
If the installer GUI window becomes unusable after the root scripts (root.sh or rootupgrade.sh) have completed, the remaining plugins can be run with:
gridSetup.sh -executeConfigTools
6.8 Autostart/stop services with systemd on OL7/RHEL7
# vi /usr/lib/systemd/system/dbora.service
[Unit]
Description=Oracle Database Start/Stop Service
After=syslog.target network.target local-fs.target remote-fs.target
[Service]
# systemd, by design does not honor PAM limits
# See: https://bugzilla.redhat.com/show_bug.cgi?id=754285
LimitNOFILE=65536
LimitNPROC=16384
LimitSTACK=32M
LimitMEMLOCK=infinity
LimitCORE=infinity
Type=simple
User=oracle
Group=oinstall
Restart=no
ExecStartPre=/bin/rm -rf /u01/app/oracle/product/12.2.0/dbhome_1/listener.log
ExecStartPre=/bin/rm -rf /u01/app/oracle/product/12.2.0/dbhome_1/startup.log
ExecStart=/u01/app/oracle/product/12.2.0/dbhome_1/bin/dbstart /u01/app/oracle/product/12.2.0/dbhome_1
RemainAfterExit=yes
ExecStop=/bin/rm -rf /u01/app/oracle/product/12.2.0/dbhome_1/shutdown.log
ExecStop=/u01/app/oracle/product/12.2.0/dbhome_1/bin/dbshut /u01/app/oracle/product/12.2.0/dbhome_1
TimeoutStopSec=5min
[Install]
WantedBy=multi-user.target
6.9 DB alert log reports ORA-27300, ORA-27301 and ORA-27302
Symptom
The DB instance alert log reports the following errors:
2020-11-18T13:42:58.448277+08:00
Errors in file /ups/oracle/database/diag/rdbms/orcl/orcl1/trace/orcl1_asmb_5574.trc (incident=317146) (PDBNAME=CDB$ROOT):
ORA-00603: ORACLE server session terminated by fatal error
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:sendmsg failed with status: 105
ORA-27301: OS failure message: No buffer space available
ORA-27302: failure occurred at: sskgxpsnd2
Incident details in: /ups/oracle/database/diag/rdbms/orcl/orcl1/incident/incdir_317146/orcl1_asmb_5574_i317146.trc
2020-11-18T13:43:10.752227+08:00
Cause
This happens because too little space is available for network buffer reservations.
Fix
- On servers with High Physical Memory, the parameter vm.min_free_kbytes should be set in the order of 0.4% of total Physical Memory. This helps in keeping a larger range of defragmented memory pages available for network buffers reducing the probability of a low-buffer-space conditions.
**For example, on a server with 256 GB RAM, the parameter vm.min_free_kbytes should be set to 1073742.**
/sbin/sysctl -a | grep min_free_kbytes
On NUMA Enabled Systems, the value of vm.min_free_kbytes should be multiplied by the number of NUMA nodes since the value is to be split across all the nodes.
On NUMA Enabled Systems, the value of vm.min_free_kbytes = n * 0.4% of total Physical Memory. Here 'n' is the number of NUMA nodes.
- Additionally, the MTU value should be modified as below
Linux : #ifconfig lo mtu 16384
AIX : #chdev -Pl lo0 -a mtu=16436
To make the change persistent over reboot add the following line in the file /etc/sysconfig/network-scripts/ifcfg-lo :
MTU=16436
Save the file and restart the network service to load the changes
#service network restart
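The 0.4%-of-RAM rule above, including the NUMA multiplier, can be sketched as follows (the function name is illustrative; the division rounds up, matching the 1073742 figure quoted above for 256 GB):

```shell
# vm.min_free_kbytes = 0.4% of physical RAM (in KB), times the number of
# NUMA nodes, per the guidance above. RAM is given in GB.
min_free_kbytes() {
  local ram_gb=$1 numa_nodes=${2:-1}
  echo $(( (ram_gb * 1024 * 1024 * 4 + 999) / 1000 * numa_nodes ))
}
min_free_kbytes 256     # 256 GB, single node -> 1073742
min_free_kbytes 256 2   # 256 GB, 2 NUMA nodes
```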
Check and confirm
netstat -in
Note: when making these changes on CRS nodes, restarting the network while CRS is up can hang CRS, so the cluster services should be stopped before the network restart.
Reference
ODA Upgrade to 12.1.2.11 or .12 Causes ORA-27301 ORA-27302: failure occurred at: sskgxpsnd2 "The OS has most likely run out of buffers (rval: 4)" with Multiple asmcmd daemon (Doc ID 2484025.1)
"ORA-27301: OS failure message: No buffer space available" occurs on OPC RAC (Doc ID 2397062.1)
Oracle Linux: ORA-27301:OS Failure Message: No Buffer Space Available (Doc ID 2041723.1)
Appendix
https://www.cnblogs.com/plluoye/p/10963194.html
http://blog.itpub.net/31439444/viewspace-2679289/
How to Complete Grid Infrastructure Configuration Assistant(Plug-in) if OUI is not Available (Doc ID 1360798.1)
https://www.cndba.cn/dave/article/310
HugePages configuration
Enable HugePages
- Calculate the amount of hugepages that will need to be allocated for your Oracle instance.
NUMBER_OF_HUGEPAGES = ( SGA + PGA + (20KB * # of Oracle_processes)) / 2MB
- SGA and PGA are values obtained from the Oracle instance. The correct number here is crucial: if you do not allocate enough hugepages for Oracle, zero hugepages will be used, causing "lost" memory from your available pool.
- Enable the oracle user's group to utilize huge pages in the /etc/sysctl.conf file. The value must be the numeric group ID, i.e. the output of `id -g oracle` (sysctl.conf does not evaluate backticks):
vm.hugetlb_shm_group=<oracle_gid>
- Set the amount of huge pages to allocate in the /etc/sysctl.conf file.
vm.nr_hugepages=NUMBER_OF_HUGEPAGES
Note: NUMBER_OF_HUGEPAGES should be replaced with the number calculated in step 1.
- Calculate the amount of pages to put into the 'memlock' parameter.
AMOUNT_OF_MEMORY = NUMBER_OF_HUGEPAGES * 1024 * 2
- Set the 'memlock' parameter in the /etc/security/limits.conf file.
oracle - memlock AMOUNT_OF_MEMORY
Note: AMOUNT_OF_MEMORY should be replaced with the value calculated in the previous step.
- Restart the system.
- These settings can be changed on the fly in a live environment; however, memory defragmentation will begin to occur which will have a negative impact on the system and could result in the system being unresponsive for an extended period of time while a large block of free memory is allocated. It is recommended to restart the machine completely to avoid this from happening.
- If you're having trouble calculating your correct amount of huge pages, you may be able to use the following script to automatically determine the correct value
#!/bin/bash
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`
# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`
# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
NUM_PG=1
# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
if [ $MIN_PG -gt 0 ]; then
NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
fi
done
# Finish with results
case $KERN in
'2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
'2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
'3.10')echo "Recommended setting: vm.nr_hugepages = $NUM_PG";;
*) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac
Check hugepages usage
- First, check the current hugepages usage:
[root@server ~]# grep HugePages_ /proc/meminfo
HugePages_Total:   16299
HugePages_Free:     7764
HugePages_Rsvd:     5330
HugePages_Surp:        0
- Now deduct the free pages to find the used pages, and add the reserved pages. The allocated pages are therefore:
Total - Free + Rsvd
16299 - 7764 + 5330 = 13865
- Multiply the allocated pages by 2048 (the number of kbytes in a single page):
13865 x 2048 = 28,395,520 kbytes
- To express the kbytes as a byte count, multiply by 1024 (1 kbyte = 1024 bytes):
28395520 x 1024 = 29,077,012,480 bytes
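The accounting steps above can be reproduced with shell arithmetic (sample counters taken from the /proc/meminfo output above):

```shell
# allocated pages = Total - Free + Rsvd, then convert to kbytes and bytes
# (one hugepage = 2048 kB on this system).
total=16299 free=7764 rsvd=5330
alloc=$(( total - free + rsvd ))
kbytes=$(( alloc * 2048 ))
bytes=$(( kbytes * 1024 ))
echo "pages=${alloc} kbytes=${kbytes} bytes=${bytes}"
# -> pages=13865 kbytes=28395520 bytes=29077012480
```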
Where is the memory
- You can easily quantify the shared hugepages memory. To do so:
- List the shared memory in use:
[root@rfreire sys]# ipcs -m
------ Shared Memory Segments --------
key        shmid    owner   perms  bytes       nattch  status
0x00000000 163840   oracle  640    14680064    50
0x00000000 196609   oracle  640    2499805184  50
0x27126a4c 229378   oracle  640    2097152     50
0x00000000 5636099  oracle  640    33554432    58
0x00000000 5668868  oracle  640    4160749568  58
[...]
- Sum the 5th column (quick shell: ipcs -m | awk '{ print $5 }' | awk '{a+=$0}END{print a}') and match it against the hugepages information in /proc/meminfo (see the procedure above). If the values match, you have hugepages-only shared memory. If you get a larger value, regular 4 kB pages are in use as well: deduct the /proc/meminfo hugepages value in bytes from this sum to find how much memory is on regular 4 kB pages.
- Check whether a hugepages filesystem is in use by grepping for huge in /proc/mounts:
[root@server ~]# grep -i huge /proc/mounts
none /hugepages hugetlbfs rw,relatime 0 0
- Unfortunately, at this time there is no way to quantify private hugetlbfs pages, which are used by qemu-kvm, for example.
How to calculate the size of hugepages used by a specific process
- The following command calculates the hugepage usage of a given process, assuming a HugePage size of 2048 kB; the output unit is MiB:
grep -B 11 'KernelPageSize: 2048 kB' /proc/[PID]/smaps | grep "^Size:" | awk 'BEGIN{sum=0}{sum+=$2}END{print sum/1024}'
- Note: avoid double counting of the same address in /proc/[PID]/smaps.