Kubernetes 1.15.1 High-Availability Deployment -- From Scratch
This is a book: a record of what I have learned in the container ecosystem.
Highlights:
1. CentOS 7.6 installation and optimization
2. Kubernetes 1.15.1 high-availability deployment
3. Calico network plugin
4. dashboard add-on
5. metrics-server add-on
6. kube-state-metrics add-on
Original article: http://note.youdao.com/noteshare?id=c9f647765493d11099a939d7e5e102c9&sub=A837AA253CA54660AABADEF435A40714
Chapter 1  From Scratch
1.1 Preface
I had long wanted to write something to record my journey through IT, and on July 21, 2019 I finally started.
I am writing down what I have learned, done, and heard along the way, as the sights, sounds, and feelings of the trip.
IT is a road of no return: beyond every expert there is a greater one, and I simply hope to spar with ever-stronger seniors.
I have set my IT direction to container development: "container" here is short for the container ecosystem, and "development" is short for Go development.
In my view the trend in operations is containerized operations, and the trend in development is containerized development, so container development is the road I am taking.
This is a relatively quiet year for me, so I can settle down and work on two big things: 1. my journey through the container ecosystem; 2. my journey from Go beginner to Go expert.
I hope to reach an entry level in container development within six months; with the foundation I already have, that should be achievable.
I hope to have both finished, at least in a first pass, by May 2020.
I can do it because I'm young!
Pen down, resolve firm. Let's see.
1.2 Contents
- Container engine: docker
- Container orchestration: kubernetes
- Container storage: ceph
- Container monitoring: prometheus
- Log analysis: elk
- Service mesh: istio
1.3 Resources
Download link for the required software: https://pan.baidu.com/s/1IvUG_hdqDvReDJS9O1k9OA  extraction code: 7wfh
Content sources: official documentation, blog posts, and others.
1.3.1 Physical machine
Hardware specs
As the (omitted) screenshot showed, the host has 24.0 GB of RAM, enough to run a large number of VMs and simulate a real production environment more faithfully.
1.3.2 Virtualization tool
VMware Workstation Pro 14
VMware Workstation is a powerful desktop virtualization product that lets you run different operating systems on a single desktop and develop, test, and deploy new applications. It can emulate a complete network environment on one physical machine, and its portable VMs, flexibility, and mature technology set it apart from other desktop hypervisors. Features such as virtual networking, live snapshots, drag-and-drop shared folders, and PXE support make it an essential tool for enterprise IT developers and system administrators.
VMware Workstation runs an operating system (OS) and its applications (Applications) inside a virtual machine: a discrete environment independent of the host OS. Each VM loads in its own window with its own OS and applications; you can switch between several VMs on the desktop, share them over a network (for example a company LAN), and suspend, resume, or shut them down, all without affecting the host or any other running application.
1.3.3 遠端連結工具
Xshell是一個強大的安全終端模擬軟體,它支援SSH1, SSH2, 以及Microsoft Windows 平臺的TELNET 協議。Xshell 通過網際網路到遠端主機的安全連線以及它創新性的設計和特色幫助使用者在複雜的網路環境中享受他們的工作。
Xshell可以在Windows介面下用來訪問遠端不同系統下的伺服器,從而比較好的達到遠端控制終端的目的。除此之外,其還有豐富的外觀配色方案以及樣式選擇。
1.4 Virtual machines
1.4.1 Installing CentOS 7.6
1.4.2 Template-VM optimization
Check the OS release and kernel version
[root@mobanji ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@mobanji ~]# uname -r
3.10.0-957.el7.x86_64
Set an alias
# Shortcut for editing the NIC configuration file
[root@mobanji ~]# yum install -y vim
[root@mobanji ~]# alias vimn="vim /etc/sysconfig/network-scripts/ifcfg-eth0"
[root@mobanji ~]# vim ~/.bashrc
alias vimn="vim /etc/sysconfig/network-scripts/ifcfg-eth0"
Network configuration
[root@mobanji ~]# vimn
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=20.0.0.5
PREFIX=24
GATEWAY=20.0.0.2
DNS1=223.5.5.5
DNS2=8.8.8.8
DNS3=119.29.29.29
DNS4=114.114.114.114
Update the yum repos and install required packages
[root@mobanji ~]# yum install -y wget
[root@mobanji ~]# cp -r /etc/yum.repos.d /etc/yum.repos.d.bak
[root@mobanji ~]# rm -f /etc/yum.repos.d/*.repo
[root@mobanji ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo \
  && wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
[root@mobanji ~]# yum clean all && yum makecache
[root@mobanji ~]# yum install -y bash-completion lrzsz nmap nc tree htop iftop net-tools ntpdate lsof screen tcpdump conntrack ntp ipvsadm ipset jq sysstat libseccomp nmon iptraf mlocate strace nethogs bridge-utils bind-utils nfs-utils rpcbind dnsmasq python python-devel telnet git sshpass
Configure time synchronization
# Sync the clock now
[root@mobanji ~]# ntpdate -u pool.ntp.org
# Re-sync every 15 minutes via cron
[root@mobanji ~]# crontab -e
*/15 * * * * /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1
# Set the system time zone
[root@mobanji ~]# timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
[root@mobanji ~]# timedatectl set-local-rtc 0
# Restart services that depend on the system time
[root@mobanji ~]# systemctl restart rsyslog
[root@mobanji ~]# systemctl restart crond
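The `*/15` minute field in the crontab entry fires four times an hour. A small self-contained sketch (`cron_minutes` is a hypothetical helper, not part of cron) expands such a field into the minutes it matches:

```shell
# cron_minutes: hypothetical helper that expands a "*/N" cron minute field
# into the list of minutes (0-59) at which the job fires.
cron_minutes() {
  step=${1#*/}               # "*/15" -> "15"
  echo $(seq 0 "$step" 59)   # word-split into one space-separated line
}

cron_minutes '*/15'   # the ntpdate job runs at minutes 0 15 30 45
```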
SSH optimization
# Disable GSSAPI authentication and reverse DNS lookups to speed up logins
[root@mobanji ~]# sed -i '79s@GSSAPIAuthentication yes@GSSAPIAuthentication no@;115s@#UseDNS yes@UseDNS no@' /etc/ssh/sshd_config
[root@mobanji ~]# systemctl restart sshd
Disable the firewall and SELinux
# Stop the firewall, flush its rules, and set the default forward policy
[root@mobanji ~]# systemctl stop firewalld
[root@mobanji ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@mobanji ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
[root@mobanji ~]# iptables -P FORWARD ACCEPT
[root@mobanji ~]# firewall-cmd --state
not running
# Disable SELinux, otherwise mounting directories for Kubernetes later may fail with "Permission denied"
[root@mobanji ~]# setenforce 0
[root@mobanji ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Disable unneeded services
[root@mobanji ~]# systemctl list-unit-files | grep "enabled"
[root@mobanji ~]# systemctl status postfix && systemctl stop postfix && systemctl disable postfix
Configure limits.conf
[root@mobanji ~]# cat >> /etc/security/limits.conf <<EOF
# End of file
* soft nofile 65525
* hard nofile 65525
* soft nproc 65525
* hard nproc 65525
EOF
Upgrade the kernel
The stock 3.10.x kernel in CentOS 7.x has bugs that make Docker and Kubernetes unstable, for example:
-> Recent Docker versions (1.13+) enable the kernel memory accounting feature that the 3.10 kernel supports only experimentally (and it cannot be turned off); under load, e.g. frequent container starts and stops, this causes cgroup memory leaks;
-> A network-device reference-count leak produces errors such as: "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1";
Possible fixes:
-> upgrade the kernel to 4.4.x or newer;
-> or build the kernel by hand with the CONFIG_MEMCG_KMEM feature disabled;
-> or install Docker 18.09.1 or later, which fixes the issue. But kubelet also sets kmem (it vendors runc), so kubelet would need to be rebuilt with GOFLAGS="-tags=nokmem";
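Before and after the upgrade you can check whether the running kernel meets the 4.4 minimum recommended above. `kernel_ok` is a hypothetical helper for illustration; it compares version strings with `sort -V`:

```shell
# kernel_ok: hypothetical check that a kernel release string (as printed by
# `uname -r`) is at least the 4.4 minimum recommended above.
kernel_ok() {
  required="4.4"
  # sort -V orders version strings; the kernel is new enough when the
  # required version sorts first (or is equal to the current one).
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

kernel_ok "3.10.0-957.el7.x86_64"       && echo ok || echo "too old"   # too old
kernel_ok "4.4.185-1.el7.elrepo.x86_64" && echo ok || echo "too old"   # ok
```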
[root@mobanji ~]# uname -r
3.10.0-957.el7.x86_64
[root@mobanji ~]# yum update -y
[root@mobanji ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@mobanji ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
[root@mobanji ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
kernel-lt.x86_64    4.4.185-1.el7.elrepo    elrepo-kernel    <--- long-term support release
......
kernel-ml.x86_64    5.2.1-1.el7.elrepo      elrepo-kernel    <--- latest mainline stable release
......
# Install the new kernel and its devel package
[root@mobanji ~]# yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y
To make the newly installed kernel the default boot option,
edit /etc/default/grub and set GRUB_DEFAULT=0,
which makes the first kernel on the GRUB menu the default.
# Check the boot entries
[root@mobanji ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.185-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-957.21.3.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-b4c601a613824f9f827cb9787b605efb) 7 (Core)
The new kernel (4.4.185) is at position 0 and the old one (3.10.0) at position 1, so to boot the new kernel we set the default boot entry to 0.
# Edit /etc/default/grub
[root@mobanji ~]# vim /etc/default/grub
GRUB_DEFAULT=0    <--- change "saved" to 0
# Regenerate the kernel boot configuration
[root@mobanji ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
# Reboot
[root@mobanji ~]# reboot
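The awk one-liner above can be tried safely against a sample grub.cfg. This self-contained sketch writes a two-entry file to a temp path and extracts the entry that `GRUB_DEFAULT=0` would boot:

```shell
# Write a sample grub.cfg and extract the menu entries with the same awk
# command used above; the entry at index 0 is what GRUB_DEFAULT=0 boots.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
menuentry 'CentOS Linux (4.4.185-1.el7.elrepo.x86_64) 7 (Core)' --class centos {
}
menuentry 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)' --class centos {
}
EOF
# -F\' splits on single quotes, so $2 is the entry title
default_entry=$(awk -F\' '$1=="menuentry " {print $2}' "$cfg" | head -n1)
echo "$default_entry"
rm -f "$cfg"
```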
Disable NUMA
[root@mobanji ~]# cp /etc/default/grub{,.bak}
[root@mobanji ~]# vim /etc/default/grub
.........
GRUB_CMDLINE_LINUX="...... numa=off"    # i.e. append "numa=off"
# Regenerate the grub2 configuration:
[root@mobanji ~]# cp /boot/grub2/grub.cfg{,.bak}
[root@mobanji ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Configure rsyslogd and systemd journald
systemd's journald is the default logging facility on CentOS 7; it records all system, kernel, and service-unit logs. Compared to rsyslogd, journald has the following advantages:
-> it can log to memory or to the filesystem (by default it logs to memory, under /run/log/journal);
-> it can cap its disk usage and guarantee free disk space;
-> it can limit log file size and retention time;
-> by default journald also forwards logs to rsyslog, so everything is written twice: /var/log/messages fills with irrelevant entries, which makes later inspection harder and costs performance; we therefore turn the forwarding off.
[root@mobanji ~]# mkdir /var/log/journal          <--- directory for persistent logs
[root@mobanji ~]# mkdir /etc/systemd/journald.conf.d
[root@mobanji ~]# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress old logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Cap total disk usage at 10G
SystemMaxUse=10G

# Cap a single log file at 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF
[root@mobanji ~]# systemctl restart systemd-journald
[root@mobanji ~]# systemctl status systemd-journald
Load kernel modules
[root@mobanji ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF
[root@mobanji ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
[root@mobanji ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
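Before executing the modules file with bash on a real host, you can sanity-check which modules it will load. This sketch renders the same content to a temp file and lists the module names without calling modprobe:

```shell
# Render the ipvs.modules content to a temp file and list the modules each
# `modprobe` line would load, instead of loading them for real.
f=$(mktemp)
cat > "$f" <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF
mods=$(awk '/^modprobe/ {print $3}' "$f")   # 3rd field: the module name
count=$(echo "$mods" | wc -l)
echo "$mods"
rm -f "$f"
```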
Tune kernel parameters
# Note: sysctl files do not support trailing comments, so the annotations go on their own lines
[root@mobanji ~]# cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# tcp_tw_recycle conflicts with Kubernetes NAT and must be off, or services become unreachable
net.ipv4.tcp_tw_recycle=0
# never use swap except when the system is OOM
vm.swappiness=0
# do not check whether physical memory is sufficient
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer work
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
# disable the unused IPv6 stack to avoid triggering a docker bug
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
[root@mobanji ~]# sysctl -p /etc/sysctl.d/k8s.conf
Notes:
tcp_tw_recycle must be disabled, otherwise it conflicts with NAT and services become unreachable;
IPv6 is disabled to avoid triggering a docker bug.
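The two critical settings in the notes can be double-checked mechanically. `sysctl_get` is a hypothetical reader for "key=value" sysctl files, shown here against a sample written to a temp path:

```shell
# sysctl_get: hypothetical reader for "key=value" sysctl files, used here to
# confirm the two critical settings called out in the notes above.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.tcp_tw_recycle=0
net.ipv6.conf.all.disable_ipv6=1
net.bridge.bridge-nf-call-iptables=1
EOF
sysctl_get() { awk -F= -v k="$1" '$1==k {print $2}' "$conf"; }

recycle=$(sysctl_get net.ipv4.tcp_tw_recycle)         # must be 0 (NAT conflict)
ipv6_off=$(sysctl_get net.ipv6.conf.all.disable_ipv6) # must be 1 (docker bug)
echo "tcp_tw_recycle=$recycle disable_ipv6=$ipv6_off"
rm -f "$conf"
```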
Personalized vim configuration
https://blog.csdn.net/zisefeizhu/article/details/89407487
[root@mobanji ~]# cat ~/.vimrc
set nocompatible
filetype on
set paste
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
" Configure the plugins you need here; below is my setup
"
" YouCompleteMe: code-completion plugin
set runtimepath+=~/.vim/bundle/YouCompleteMe
autocmd InsertLeave * if pumvisible() == 0|pclose|endif    " close the preview window when leaving insert mode
let g:ycm_collect_identifiers_from_tags_files = 1          " enable YCM's tag engine
let g:ycm_collect_identifiers_from_comments_and_strings = 1    " also complete from comments and strings
let g:syntastic_ignore_files=[".*\.py$"]
let g:ycm_seed_identifiers_with_syntax = 1                 " complete language keywords
let g:ycm_complete_in_comments = 1
let g:ycm_confirm_extra_conf = 0                           " no prompt when loading .ycm_extra_conf.py
let g:ycm_key_list_select_completion = ['<c-n>', '<Down>'] " remap the keys; without this YCM grabs Tab and breaks other plugins
let g:ycm_key_list_previous_completion = ['<c-p>', '<Up>']
let g:ycm_complete_in_comments = 1                         " completion inside comments
let g:ycm_complete_in_strings = 1                          " completion inside strings
let g:ycm_collect_identifiers_from_comments_and_strings = 1    " collect identifiers from comments and strings
let g:ycm_global_ycm_extra_conf='~/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp/ycm/.ycm_extra_conf.py'
let g:ycm_show_diagnostics_ui = 0                          " disable diagnostics
inoremap <expr> <CR> pumvisible() ? "\<C-y>" : "\<CR>"     " Enter accepts the current match
nnoremap <c-j> :YcmCompleter GoToDefinitionElseDeclaration<CR>    " jump to definition
let g:ycm_min_num_of_chars_for_completion=2                " start matching from the 2nd typed character
"
" Plugins from GitHub repositories
"
Plugin 'VundleVim/Vundle.vim'
Plugin 'vim-airline/vim-airline'
" vim-airline: a nicer vim statusline
"let g:airline#extensions#tabline#enabled = 1
" airline settings
set t_Co=256                                     " use 256 colors
set laststatus=2
let g:airline_powerline_fonts = 1                " use a powerline-patched font
let g:airline#extensions#tabline#enabled = 1     " enable the tabline
let g:airline#extensions#tabline#left_sep = ' '      " separator after the active buffer
let g:airline#extensions#tabline#left_alt_sep = ' '  " separator after inactive buffers
let g:airline#extensions#tabline#buffer_nr_show = 1  " show buffer numbers in the tabline
" buffer-switching mappings
nnoremap [b :bp<CR>
nnoremap ]b :bn<CR>
" map <leader>num to buffer num
map <leader>1 :b 1<CR>
map <leader>2 :b 2<CR>
map <leader>3 :b 3<CR>
map <leader>4 :b 4<CR>
map <leader>5 :b 5<CR>
map <leader>6 :b 6<CR>
map <leader>7 :b 7<CR>
map <leader>8 :b 8<CR>
map <leader>9 :b 9<CR>
" Plugins from vim-scripts
Plugin 'taglist.vim'
" ctags: F3 toggles the tag list (variables, functions, etc.)
map <F3> :TlistToggle<CR>
let Tlist_Use_Right_Window=1
let Tlist_Show_One_File=1
let Tlist_Exit_OnlyWindow=1
let Tlist_WinWidt=25
Plugin 'The-NERD-tree'
" NERDTree: F2 toggles the directory tree
map <F2> :NERDTreeToggle<CR>
let NERDTreeWinSize=25
Plugin 'indentLine.vim'
Plugin 'delimitMate.vim'
" Plugins from non-GitHub repositories
" Plugin 'git://git.wincent.com/command-t.git'
" Plugins from local repositories
call vundle#end()
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
""""" Headers for new files
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
" auto-insert a file header into new .sh and .yaml files
autocmd BufNewFile *.sh,*.yaml exec ":call SetTitle()"
" SetTitle(): insert the file header
func SetTitle()
    if &filetype == 'sh'
        call setline(1, "##########################################################################")
        call setline(2, "#Author: zisefeizhu")
        call setline(3, "#QQ: 2********0")
        call setline(4, "#Date: ".strftime("%Y-%m-%d"))
        call setline(5, "#FileName: ".expand("%"))
        call setline(6, "#URL: https://www.cnblogs.com/zisefeizhu/")
        call setline(7, "#Description: The test script")
        call setline(8, "#Copyright (C): ".strftime("%Y")." All rights reserved")
        call setline(9, "##########################################################################")
        call setline(10, "#!/bin/bash")
        call setline(11, "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin")
        call setline(12, "export $PATH")
        call setline(13, "")
    endif
    if &filetype == 'yaml'
        call setline(1, "##########################################################################")
        call setline(2, "#Author: zisefeizhu")
        call setline(3, "#QQ: 2********0")
        call setline(4, "#Date: ".strftime("%Y-%m-%d"))
        call setline(5, "#FileName: ".expand("%"))
        call setline(6, "#URL: https://www.cnblogs.com/zisefeizhu/")
        call setline(7, "#Description: The test script")
        call setline(8, "#Copyright (C): ".strftime("%Y")." All rights reserved")
        call setline(9, "###########################################################################")
        call setline(10, "")
    endif
    " after creating the file, jump to its end
    autocmd BufNewFile * normal G
endfunc
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
" Key mappings
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
nmap <leader>w :w!<cr>
nmap <leader>f :find<cr>
" Ctrl+A: select all and yank
map <C-A> ggVGY
map! <C-A> <Esc>ggVGY
map <F12> gg=G
" Ctrl+C copies the visual selection to the clipboard
vmap <C-c> "+y
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
"" Practical settings
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
" reload a file automatically when it changes on disk
set autoread
" quickfix mode
autocmd FileType c,cpp map <buffer> <leader><space> :w<cr>:make<cr>
" code completion
set completeopt=preview,menu
" enable plugins
filetype plugin on
" share the clipboard
set clipboard=unnamed
" never keep backups
set nobackup
" :make command
:set makeprg=g++\ -Wall\ \ %
" auto-save
set autowrite
set ruler            " show the ruler in the status line
set cursorline       " highlight the current line
set magic
set guioptions-=T    " hide the toolbar
set guioptions-=m    " hide the menu bar
"set statusline=\ %<%F[%1*%M%*%n%R%H]%=\ %y\ %0(%{&fileformat}\ %{&encoding}\ %c:%l/%L%)\    " status line contents
set foldcolumn=0
set foldmethod=indent
set foldlevel=3
set foldenable       " enable folding
" use vim's own key model, not vi's
set nocompatible
" syntax highlighting
set syntax=on
" no error bells
set noeb
" ask for confirmation when handling unsaved or read-only files
set confirm
" auto-indent
set autoindent
set cindent
" Tab width
set tabstop=2
" uniform indentation of 2
set softtabstop=2
set shiftwidth=2
" do not expand tabs to spaces
set noexpandtab
" use tabs at the start of lines and paragraphs
set smarttab
" show line numbers
" set number
" command history size
set history=1000
" no temporary files
set nobackup
set noswapfile
" case-insensitive search
set ignorecase
" incremental search with match highlighting
set hlsearch
set incsearch
" substitute all matches in a line by default
set gdefault
" encodings
set enc=utf-8
set fencs=utf-8,ucs-bom,shift-jis,gb18030,gbk,gb2312,cp936
" language
set langmenu=zh_CN.UTF-8
set helplang=cn
" my status line (file type, encoding, position, time)
set statusline=%F%m%r%h%w\ [FORMAT=%{&ff}]\ [TYPE=%Y]\ [POS=%l,%v][%p%%]\ %{strftime(\"%d/%m/%y\ -\ %H:%M\")}
set statusline=[%F]%y%r%m%*%=[Line:%l/%L,Column:%c][%p%%]
" always show the status line
set laststatus=2
" command-line height (below the status line); default 1, here 2
set cmdheight=2
" detect file types
filetype on
" load file-type plugins
filetype plugin on
" load indentation files per file type
filetype indent on
" save global variables
set viminfo+=!
" do not break words containing these characters
set iskeyword+=_,$,@,%,#,-
" pixel lines between rows
set linespace=0
" enhanced command-line completion
set wildmenu
" make backspace handle indent, eol, start
set backspace=2
" allow backspace and cursor keys to wrap across line boundaries
set whichwrap+=<,>,h,l
" allow the mouse anywhere in the buffer (like double-clicking in an office app)
set mouse=a
set selection=exclusive
set selectmode=mouse,key
" report all changes (:commands tell us which lines changed)
set report=0
" show whitespace between split windows, for readability
set fillchars=vert:\ ,stl:\ ,stlnc:\
" highlight matching brackets
set showmatch
" bracket-match highlight duration (tenths of a second)
set matchtime=1
" keep 3 lines of context above and below the cursor
set scrolloff=3
" auto-indent for C programs
set smartindent
" highlight plain txt files (needs the txt.vim script)
au BufRead,BufNewFile * setfiletype txt
" auto-close brackets and quotes
:inoremap ( ()<ESC>i
:inoremap ) <c-r>=ClosePair(')')<CR>
":inoremap { {<CR>}<ESC>O
":inoremap } <c-r>=ClosePair('}')<CR>
:inoremap [ []<ESC>i
:inoremap ] <c-r>=ClosePair(']')<CR>
:inoremap " ""<ESC>i
:inoremap ' ''<ESC>i
function! ClosePair(char)
    if getline('.')[col('.') - 1] == a:char
        return "\<Right>"
    else
        return a:char
    endif
endfunction
" enable file-type detection; needed for smart completion
filetype plugin indent on
set completeopt=longest,menu
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Configure sysctl.conf
[root@mobanji ~]# [ ! -e "/etc/sysctl.conf_bk" ] && /bin/mv /etc/sysctl.conf{,_bk} \
  && cat > /etc/sysctl.conf << EOF
fs.file-max=1000000
fs.nr_open=20480000
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_syncookies = 1
#net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65000
#net.nf_conntrack_max = 6553500
#net.netfilter.nf_conntrack_max = 6553500
#net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
#net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
#net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
#net.netfilter.nf_conntrack_tcp_timeout_established = 3600
EOF
[root@mobanji ~]# sysctl -p
Script directory
[root@mobanji ~]# mkdir -p /service/scripts
This completes the template-VM optimization.
1.4.3 Preparing the VMs
Node name | IP | Installed software | Role
jumpserver | 20.0.0.200 | jumpserver | bastion host
k8s-master01 | 20.0.0.201 | kubeadm, kubelet, kubectl, docker, etcd, ceph | master node
k8s-master02 | 20.0.0.202 | kubeadm, kubelet, kubectl, docker, etcd, ceph | master node
k8s-master03 | 20.0.0.203 | kubeadm, kubelet, kubectl, docker, etcd, ceph | master node
k8s-node01 | 20.0.0.204 | kubeadm, kubelet, kubectl, docker | worker node
k8s-node02 | 20.0.0.205 | kubeadm, kubelet, kubectl, docker | worker node
k8s-node03 | 20.0.0.206 | kubeadm, kubelet, kubectl, docker | worker node
k8s-ha01 | 20.0.0.207 | haproxy, keepalived, ceph | load balancer (VIP: 20.0.0.250)
k8s-ha02 | 20.0.0.208 | haproxy, keepalived, ceph | load balancer (VIP: 20.0.0.250)
k8s-ceph | 20.0.0.209 | ceph | storage node
Using k8s-master01 as an example:
# Set the hostname
[root@mobanji ~]# hostnamectl set-hostname k8s-master01
[root@mobanji ~]# bash
[root@k8s-master01 ~]#
# Set the IP address
[root@k8s-master01 ~]# vimn
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=20.0.0.201
PREFIX=24
GATEWAY=20.0.0.2
DNS1=223.5.5.5
[root@k8s-master01 ~]# systemctl restart network
[root@k8s-master01 ~]# ping www.baidu.com
PING www.baidu.com (61.135.169.121) 56(84) bytes of data.
64 bytes from 61.135.169.121 (61.135.169.121): icmp_seq=1 ttl=128 time=43.3 ms
^C
--- www.baidu.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 43.348/43.348/43.348/0.000 ms
[root@k8s-master01 ~]# hostname -I
20.0.0.201
Note: shut down (init 0) and take a snapshot.
This completes the VM preparation.
1.5 The cluster
1.5.1 Deploying the highly available load balancer
Using k8s-ha01 as an example.
1.5.1.1 Installing the software
# On both k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# yum install -y keepalived haproxy
1.5.1.2 Deploying keepalived
# On both k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# cp /etc/keepalived/keepalived.conf{,.bak}
[root@k8s-ha01 ~]# > /etc/keepalived/keepalived.conf
# k8s-ha01
[root@k8s-ha01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id master-node
}
vrrp_script chk_haproxy_port {
    script "/service/scripts/chk_hapro.sh"
    interval 2
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    unicast_src_ip 20.0.0.207
    unicast_peer {
        20.0.0.208
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.250 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy_port
    }
}
[root@k8s-ha01 ~]# scp /etc/keepalived/keepalived.conf 20.0.0.208:/etc/keepalived/keepalived.conf
# k8s-ha02: lower priority; note that track_script must reference the vrrp_script name defined above
[root@k8s-ha02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id master-node
}
vrrp_script chk_haproxy_port {
    script "/service/scripts/chk_hapro.sh"
    interval 3
    weight -2
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    unicast_src_ip 20.0.0.208
    unicast_peer {
        20.0.0.207
    }
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.250 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy_port
    }
}
1.5.1.3 Deploying haproxy
# On both k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# cp /etc/haproxy/haproxy.cfg{,.bak}
[root@k8s-ha01 ~]# > /etc/haproxy/haproxy.cfg
# k8s-ha01
[root@k8s-ha01 ~]# vim /etc/haproxy/haproxy.cfg
[root@k8s-ha01 ~]# cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
  maxconn 100000
  #chroot /var/haproxy/lib/haproxy
  #stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
  uid 99
  gid 99
  daemon
  nbproc 2
  cpu-map 1 0
  cpu-map 2 1
  #pidfile /var/haproxy/run/haproxy.pid
  log 127.0.0.1 local3 info

defaults
  option http-keep-alive
  option forwardfor
  maxconn 100000
  mode http
  timeout connect 300000ms
  timeout client 300000ms
  timeout server 300000ms

listen stats
  mode http
  bind 0.0.0.0:9999
  stats enable
  log global
  stats uri /haproxy-status
  stats auth admin:zisefeizhu

#K8S-API-Server
frontend K8S_API
  bind *:8443
  mode tcp
  default_backend k8s_api_nodes_6443

backend k8s_api_nodes_6443
  mode tcp
  balance leastconn
  server 20.0.0.201 20.0.0.201:6443 check inter 2000 fall 3 rise 5
  server 20.0.0.202 20.0.0.202:6443 check inter 2000 fall 3 rise 5
  server 20.0.0.203 20.0.0.203:6443 check inter 2000 fall 3 rise 5
# k8s-ha02
[root@k8s-ha01 ~]# scp /etc/haproxy/haproxy.cfg 20.0.0.208:/etc/haproxy/haproxy.cfg
1.5.1.4 Ordering service startup and dependencies
# On both k8s-ha01 and k8s-ha02
[root@k8s-ha01 ~]# vim /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target haproxy.service
Requires=haproxy.service
......
1.5.1.5 The health-check script
[root@k8s-ha01 ~]# vim /service/scripts/chk_hapro.sh
[root@k8s-ha01 ~]# cat /service/scripts/chk_hapro.sh
#!/bin/bash
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2019-07-26
#FileName: /service/scripts/chk_hapro.sh
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2019 All rights reserved
##########################################################################
# If haproxy has died, try restarting keepalived (which restarts haproxy
# through the unit dependency); if haproxy is still down afterwards, stop
# keepalived so the VIP fails over to the peer.
counts=$(ps -ef|grep -w "haproxy"|grep -v grep|wc -l)
if [ "${counts}" = "0" ]; then
    systemctl restart keepalived.service
    sleep 2
    counts=$(ps -ef|grep -w "haproxy"|grep -v grep|wc -l)
    if [ "${counts}" = "0" ]; then
        systemctl stop keepalived.service
    fi
fi
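The script's decision logic can be factored into a pure function and exercised without systemd or a live haproxy. `decide_action` is a hypothetical refactor for illustration; its inputs are the haproxy process counts before and after the restart attempt:

```shell
# decide_action: the chk_hapro.sh logic as a pure function. Inputs are the
# haproxy process counts before and after the keepalived restart attempt.
decide_action() {
  before=$1
  after=$2
  if [ "$before" != "0" ]; then
    echo "noop"                 # haproxy is alive: keep the VIP here
  elif [ "$after" != "0" ]; then
    echo "restart-keepalived"   # restarting keepalived revived haproxy
  else
    echo "stop-keepalived"      # still dead: release the VIP to the peer
  fi
}

decide_action 2 2   # noop
decide_action 0 1   # restart-keepalived
decide_action 0 0   # stop-keepalived
```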
1.5.1.6 Starting the services
[root@k8s-ha01 ~]# systemctl enable keepalived && systemctl start keepalived \
  && systemctl enable haproxy && systemctl start haproxy \
  && systemctl status keepalived && systemctl status haproxy
1.5.1.7 Testing failover
[root@k8s-ha01 ~]# systemctl stop keepalived
# refresh the browser
[root@k8s-ha01 ~]# systemctl start keepalived
[root@k8s-ha01 ~]# systemctl stop haproxy
# refresh the browser
1.5.2 Deploying the Kubernetes cluster
1.5.2.1 VM initialization
Using k8s-master01 as an example.
Add host entries on every VM:
[root@k8s-master01 ~]# cat >> /etc/hosts << EOF
20.0.0.201 k8s-master01
20.0.0.202 k8s-master02
20.0.0.203 k8s-master03
20.0.0.204 k8s-node01
20.0.0.205 k8s-node02
20.0.0.206 k8s-node03
EOF
Password-less SSH login
[root@k8s-master01 ~]# vim /service/scripts/ssh-copy.sh
#!/bin/bash
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2019-07-27
#FileName: /service/scripts/ssh-copy.sh
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2019 All rights reserved
##########################################################################
# Target host list
IP="
20.0.0.201 k8s-master01
20.0.0.202 k8s-master02
20.0.0.203 k8s-master03
20.0.0.204 k8s-node01
20.0.0.205 k8s-node02
20.0.0.206 k8s-node03
"
for node in ${IP}; do
    sshpass -p 1 ssh-copy-id ${node} -o StrictHostKeyChecking=no
    if [ $? -eq 0 ]; then
        echo "${node} key copied"
    else
        echo "${node} key copy failed"
    fi
done
[root@k8s-master01 ~]# ssh-keygen -t rsa
[root@k8s-master01 ~]# sh /service/scripts/ssh-copy.sh
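Note that the IP variable mixes addresses and hostnames, so after word-splitting the loop runs ssh-copy-id twice per machine (once per name). A dry count of the loop targets makes this explicit:

```shell
# Count the targets ssh-copy.sh iterates over. Because each host is listed
# both by IP and by hostname, the key is copied twice per machine.
IP="
20.0.0.201 k8s-master01
20.0.0.202 k8s-master02
20.0.0.203 k8s-master03
20.0.0.204 k8s-node01
20.0.0.205 k8s-node02
20.0.0.206 k8s-node03
"
targets=0
for node in ${IP}; do
  targets=$((targets + 1))   # one ssh-copy-id invocation per entry
done
echo "$targets entries for 6 machines"   # 12 entries for 6 machines
```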
Disable swap
[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# yes | cp /etc/fstab /etc/fstab_bak
[root@k8s-master01 ~]# cat /etc/fstab_bak | grep -v swap > /etc/fstab
Add the Kubernetes yum repo
[root@k8s-master01 ~]# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
1.5.2.2 Installing docker
Using k8s-master01 as an example.
Install the necessary system tools
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Install docker
[root@k8s-master01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# yum list docker-ce --showduplicates | sort -r
[root@k8s-master01 ~]# yum -y install docker-ce-18.06.3.ce-3.el7
Configure daemon.json
# Get a registry mirror
#   Alibaba Cloud: open https://cr.console.aliyun.com/#/accelerator, register, log in, and set a password;
#   the page then shows your accelerator address, similar to https://123abc.mirror.aliyuncs.com
#   Tencent Cloud (usable only from Tencent Cloud hosts): https://mirror.ccs.tencentyun.com
# Configure ("live-restore": true keeps containers running across docker-daemon restarts)
[root@k8s-master01 ~]# mkdir -p /etc/docker/ \
  && cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://c6ai9izk.mirror.aliyuncs.com"
  ],
  "max-concurrent-downloads": 3,
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "1"
  },
  "max-concurrent-uploads": 5,
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "live-restore": true,
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ]
}
EOF
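Before restarting docker it is worth confirming the generated file contains the keys this setup relies on. A grep-level sanity check (not a full JSON validation) might look like this, shown here against a sample fragment in a temp file:

```shell
# Check that a daemon.json fragment contains the keys this setup relies on:
# the systemd cgroup driver, overlay2, a registry mirror, and live-restore.
dj=$(mktemp)
cat > "$dj" <<'EOF'
{
  "registry-mirrors": ["https://c6ai9izk.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "storage-driver": "overlay2",
  "live-restore": true,
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
missing=0
for key in registry-mirrors storage-driver exec-opts live-restore; do
  grep -q "\"$key\"" "$dj" || missing=$((missing + 1))
done
echo "missing keys: $missing"
rm -f "$dj"
```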
Start and verify docker
[root@k8s-master01 ~]# systemctl enable docker \
  && systemctl restart docker \
  && systemctl status docker
Note: daemon.json option reference
{
  "authorization-plugins": [],            // access-authorization plugins
  "data-root": "",                        // root directory for docker's persistent data
  "dns": [],                              // DNS servers
  "dns-opts": [],                         // DNS options, e.g. port
  "dns-search": [],                       // DNS search domains
  "exec-opts": [],                        // execution options
  "exec-root": "",                        // root directory for execution-state files
  "experimental": false,                  // enable experimental features
  "storage-driver": "",                   // storage driver
  "storage-opts": [],                     // storage options
  "labels": [],                           // key/value labels for docker metadata
  "live-restore": true,                   // keep containers alive if dockerd dies (containers survive daemon failures)
  "log-driver": "",                       // container log driver
  "log-opts": {},                         // container log options
  "mtu": 0,                               // container network MTU (maximum transmission unit)
  "pidfile": "",                          // location of the daemon PID file
  "cluster-store": "",                    // URL of the cluster store
  "cluster-store-opts": {},               // cluster-store options
  "cluster-advertise": "",                // externally advertised address
  "max-concurrent-downloads": 3,          // max concurrency per pull
  "max-concurrent-uploads": 5,            // max concurrency per push
  "default-shm-size": "64M",              // default shared-memory size
  "shutdown-timeout": 15,                 // shutdown timeout in seconds
  "debug": true,                          // enable debug mode
  "hosts": [],                            // listen addresses
  "log-level": "",                        // log level
  "tls": true,                            // enable TLS
  "tlsverify": true,                      // enable TLS and verify the remote
  "tlscacert": "",                        // path to the CA certificate
  "tlscert": "",                          // path to the TLS certificate
  "tlskey": "",                           // path to the TLS key
  "swarm-default-advertise-addr": "",     // swarm advertised address
  "api-cors-header": "",                  // CORS (cross-origin resource sharing) headers
  "selinux-enabled": false,               // enable SELinux (mandatory access control for users, processes, apps, files)
  "userns-remap": "",                     // user/group for user-namespace remapping
  "group": "",                            // group owning the docker socket
  "cgroup-parent": "",                    // parent cgroup for all containers
  "default-ulimits": {},                  // default ulimits for all containers
  "init": false,                          // run an init inside containers to forward signals and reap processes
  "init-path": "/usr/libexec/docker-init",              // path to docker-init
  "ipv6": false,                          // enable IPv6
  "iptables": false,                      // enable iptables rules
  "ip-forward": false,                    // enable net.ipv4.ip_forward
  "ip-masq": false,                       // enable IP masquerading (rewriting source/destination IPs at the router/firewall)
  "userland-proxy": false,                // use the userland proxy
  "userland-proxy-path": "/usr/libexec/docker-proxy",   // userland proxy path
  "ip": "0.0.0.0",                        // default bind IP
  "bridge": "",                           // bridge to attach containers to
  "bip": "",                              // bridge IP
  "fixed-cidr": "",                       // IPv4 subnet restricting container IP allocation (controls cross-host/intra-host container networks)
  "fixed-cidr-v6": "",                    // IPv6 subnet
  "default-gateway": "",                  // default gateway
  "default-gateway-v6": "",               // default IPv6 gateway
  "icc": false,                           // inter-container communication
  "raw-logs": false,                      // raw logs (no colors, full timestamps)
  "allow-nondistributable-artifacts": [], // registries to which non-distributable artifacts may be pushed
  "registry-mirrors": [],                 // registry mirrors
  "seccomp-profile": "",                  // seccomp profile
  "insecure-registries": [],              // non-HTTPS registry addresses
  "no-new-privileges": false,             // disallow privilege escalation
  "default-runtime": "runc",              // default OCI (Open Container Initiative) runtime
  "oom-score-adjust": -500,               // OOM-kill priority (-1000 to 1000)
  "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"],    // advertised node resources
  "runtimes": {                           // additional runtimes
    "cc-runtime": { "path": "/usr/bin/cc-runtime" },
    "custom": {
      "path": "/usr/local/bin/my-runc-replacement",
      "runtimeArgs": [ "--debug" ]
    }
  }
}
1.5.2.3 Deploying Kubernetes with kubeadm
Using k8s-master01 as an example.
Install the required packages
[root@k8s-master01 ~]# yum list kubelet kubeadm kubectl --showduplicates | sort -r
[root@k8s-master01 ~]# yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 ipvsadm ipset
# Enable kubelet at boot. Note: do not run systemctl start kubelet now; it would fail,
# and kubelet comes up on its own after a successful init.
[root@k8s-master01 ~]# systemctl enable kubelet
# kubectl command completion
[root@k8s-master01 ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-master01 ~]# source <(kubectl completion bash)
[root@k8s-master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
Adjust the init configuration
Dump the defaults with kubeadm config print init-defaults > kubeadm-init.yaml, then adapt them to your environment.
The fields to change are advertiseAddress, controlPlaneEndpoint, imageRepository, serviceSubnet, and kubernetesVersion:
  advertiseAddress: the IP of master01
  controlPlaneEndpoint: the VIP plus port 8443
  imageRepository: the Alibaba mirror
  serviceSubnet: an unused IP range (check with your network team)
  kubernetesVersion: the version installed in the previous step
[root@k8s-master01 ~]# cd /data/
[root@k8s-master01 data]# ll
[root@k8s-master01 data]# mkdir tmp
[root@k8s-master01 data]# cd tmp
[root@k8s-master01 tmp]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@k8s-master01 tmp]# cp kubeadm-init.yaml{,.bak}
[root@k8s-master01 tmp]# vim kubeadm-init.yaml
[root@k8s-master01 tmp]# diff kubeadm-init.yaml{,.bak}
12c12
< advertiseAddress: 20.0.0.201
---
> advertiseAddress: 1.2.3.4
26d25
< controlPlaneEndpoint: "20.0.0.250:8443"
33c32
< imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
---
> imageRepository: k8s.gcr.io
35c34
< kubernetesVersion: v1.15.1
---
> kubernetesVersion: v1.14.0
38c37
< serviceSubnet: 10.0.0.0/16
---
> serviceSubnet: 10.96.0.0/12
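The manual vim edits can also be scripted. This sketch patches a minimal fragment of kubeadm-init.yaml non-interactively with sed, using this guide's values (master01 20.0.0.201, the Alibaba mirror, v1.15.1); only a fragment is shown, not the full generated file:

```shell
# Patch a minimal kubeadm-init.yaml fragment the same way the vim session
# above does, but non-interactively with sed.
y=$(mktemp)
cat > "$y" <<'EOF'
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.0
EOF
sed -i \
  -e 's/advertiseAddress: 1.2.3.4/advertiseAddress: 20.0.0.201/' \
  -e 's#imageRepository: k8s.gcr.io#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#' \
  -e 's/kubernetesVersion: v1.14.0/kubernetesVersion: v1.15.1/' "$y"
addr=$(awk '/advertiseAddress/ {print $2}' "$y")
repo=$(awk '/imageRepository/ {print $2}' "$y")
ver=$(awk '/kubernetesVersion/ {print $2}' "$y")
echo "$addr $repo $ver"
rm -f "$y"
```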
Download the images
# List the required image versions
[root@k8s-master01 tmp]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# Pull them (from the mirror configured in kubeadm-init.yaml)
[root@k8s-master01 tmp]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
Initialize the control plane
[root@k8s-master01 tmp]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 20.0.0.201 20.0.0.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [20.0.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [20.0.0.201 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 57.514816 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f
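The --discovery-token-ca-cert-hash printed above is just the SHA-256 digest of the cluster CA's public key. If the kubeadm init output is lost, the hash can be recomputed from /etc/kubernetes/pki/ca.crt on a master node. A minimal sketch (the openssl pipeline follows the kubeadm documentation; the function name ca_cert_hash is ours):

```shell
#!/usr/bin/env sh
# Recompute the kubeadm discovery hash from a CA certificate.
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

The output is the bare hex digest; prepend "sha256:" when passing it to kubeadm join as --discovery-token-ca-cert-hash.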
Note: kubeadm init performs the following steps:
- [init]: initialize with the specified Kubernetes version.
- [preflight]: run pre-init checks and pull the required Docker images.
- [kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; the kubelet cannot start without it, which is why the kubelet actually fails to start before initialization.
- [certs]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki.
- [kubeconfig]: generate the kubeconfig files under /etc/kubernetes; components use them to talk to each other.
- [control-plane]: install the master components from the YAML manifests under /etc/kubernetes/manifests.
- [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
- [wait-control-plane]: wait for the master components deployed as static Pods to start.
- [apiclient]: check the health of the master components.
- [upload-config]: store the kubeadm configuration in a ConfigMap.
- [kubelet]: configure the kubelet via a ConfigMap.
- [patchnode]: record CNI information on the Node object via annotations.
- [mark-control-plane]: label the current node with the master role and taint it NoSchedule, so Pods are not scheduled onto master nodes by default.
- [bootstrap-token]: generate the token; note it down, as it is used later by kubeadm join to add nodes to the cluster.
- [addons]: install the CoreDNS and kube-proxy add-ons.
Prepare the kubeconfig file for kubectl
# By default kubectl looks for a config file under the .kube directory in the running user's home.
# Here we copy the admin.conf generated during the [kubeconfig] step of init to .kube/config.
[root@k8s-master01 tmp]# mkdir -p $HOME/.kube
[root@k8s-master01 tmp]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 tmp]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
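The three commands above can be wrapped in a small idempotent helper that keeps a backup instead of silently overwriting an existing kubeconfig. A sketch under our own naming (install_kubeconfig is a hypothetical helper, not part of kubeadm); the paths are parameters so it mirrors, but does not replace, the literal commands above:

```shell
#!/usr/bin/env sh
# Install an admin kubeconfig for the current user, backing up any existing one.
install_kubeconfig() {
  src=$1                                  # e.g. /etc/kubernetes/admin.conf
  dst=${2:-$HOME/.kube/config}
  mkdir -p "$(dirname "$dst")"
  [ -f "$dst" ] && cp "$dst" "$dst.bak"   # keep a backup instead of clobbering
  cp "$src" "$dst"
  chown "$(id -u):$(id -g)" "$dst"
  chmod 600 "$dst"                        # the kubeconfig holds client credentials
}
```

Usage: install_kubeconfig /etc/kubernetes/admin.conf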
Check the component status
[root@k8s-master01 tmp]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master01 tmp]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   6m23s   v1.15.1

There is only one node so far; its role is master. Its status is NotReady because the network plugin has not been installed yet.
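Since the node only turns Ready after the network plugin comes up, a small polling helper is handy when scripting the deployment. A sketch (retry is our own hypothetical helper; the kubectl invocation in the usage note assumes a working cluster, which this block itself does not):

```shell
#!/usr/bin/env sh
# Retry a command until it succeeds or the attempt budget is exhausted.
# $1 = number of attempts, $2 = delay in seconds between attempts, rest = command.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0          # stop as soon as the command succeeds
    i=$((i + 1))
    sleep "$delay"
  done
  return 1                    # budget exhausted, report failure
}
```

Usage, for example: retry 60 5 sh -c 'kubectl get nodes k8s-master01 | grep -wq Ready'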
Deploy the other masters
On k8s-master01, copy the certificate files to the k8s-master02 and k8s-master03 nodes.

# Copy the certificates from k8s-master01
[root@k8s-master01 ~]# vim /service/scripts/k8s-master-zhengshu.sh
#!/bin/bash
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2019-07-27
#FileName: /service/scripts/k8s-master-zhengshu.sh
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2019 All rights reserved
##########################################################################
USER=root
CONTROL_PLANE_IPS="k8s-master02 k8s-master03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
[root@k8s-master01 ~]# sh -x /service/scripts/k8s-master-zhengshu.sh

# Run on k8s-master02; note the --experimental-control-plane flag
[root@k8s-master02 ~]# kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f \
> --experimental-control-plane
Flag --experimental-control-plane has been deprecated, use --control-plane instead
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02 localhost] and IPs [20.0.0.202 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02 localhost] and IPs [20.0.0.202 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.0.0.1 20.0.0.202 20.0.0.250]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
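Before running kubeadm join on the next control-plane node, it is worth verifying that the certificate files copied by the script above actually arrived; a missing sa.key or etcd CA makes the join fail partway through. A minimal local presence check, assuming the standard /etc/kubernetes/pki layout (check_pki is our own hypothetical helper; the path is a parameter for illustration):

```shell
#!/usr/bin/env sh
# Check that the certificate files a new control-plane node needs are present.
check_pki() {
  pki=${1:-/etc/kubernetes/pki}
  missing=0
  for f in ca.crt ca.key sa.key sa.pub \
           front-proxy-ca.crt front-proxy-ca.key \
           etcd/ca.crt etcd/ca.key; do
    if [ ! -f "$pki/$f" ]; then
      echo "missing: $pki/$f"
      missing=1
    fi
  done
  return $missing          # non-zero if anything is missing
}
```

Run it on k8s-master02 and k8s-master03 (check_pki with no argument) before their join commands.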
[root@k8s-master02 ~]# mkdir -p $HOME/.kube
[root@k8s-master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Run on k8s-master03; note the --experimental-control-plane flag
[root@k8s-master03 ~]# kubeadm join 20.0.0.250:8443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:cdfa555306ee75391e03eef75b8fa16ba121f5a9effe85e81874f6207b610c9f \
> --experimental-control-plane
Flag --experimental-control-plane has been deprecated, use --control-plane instead
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master03 localhost] and IPs [20.0.0.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master03 localhost] and IPs [20.0.0.203 127.0.0.1 ::1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certif