Video course notes: installing Kubernetes from binaries
Kubernetes overview
●Official site: https://kubernetes.io ●GitHub: https://github.com/kubernetes/kubernetes ●Origin: Google's Borg system, later rewritten in Go and donated to the CNCF as open source ●Name: from the Greek for helmsman/pilot; K8S abbreviates the 8 letters between K and S ●Role: an open-source container orchestration framework with an extremely rich ecosystem ●Why learn it: it solves many pain points of running bare Docker
Kubernetes advantages
●Automatic bin packing, horizontal scaling, self-healing ●Service discovery and load balancing ●Automated rollouts (rolling update by default) and rollbacks
Release strategies: [blue-green, gray (staged), rolling, canary] ●Centralized configuration and secret management ●Storage orchestration ●Batch job execution
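As a minimal sketch of the default rolling-update strategy mentioned above (the Deployment name `nginx-dp` and its labels are hypothetical; the image path follows the Harbor registry set up later in these notes):

```yaml
# Hypothetical Deployment showing the default rolling-update knobs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod during the rollout
      maxUnavailable: 1  # at most one Pod down during the rollout
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx
        image: harbor.od.com/public/nginx:v1.7.9
```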
Kubernetes quick start
●Four groups of basic concepts
●Pod / Pod controller
●Pod ●A Pod is the smallest runnable logical unit (atomic unit) in K8S ●One Pod can run multiple containers, which share the UTS, NET and IPC namespaces ●Think of a Pod as a pea pod: each container in the same Pod is one pea ●Running multiple containers in one Pod is also called the sidecar pattern ●Pod controller ●A Pod controller is a template for launching Pods, used to guarantee that Pods started in K8S always run as people expect (replica count, lifecycle, health checks...) ●K8S ships many Pod controllers; the most common are: ●Deployment ●DaemonSet ●ReplicaSet ●StatefulSet ●Job ●CronJob
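The sidecar pattern above can be sketched as a two-container Pod; the `log-tail` sidecar and the shared volume are hypothetical illustrations, not part of the original procedure:

```yaml
# Hypothetical Pod: both containers share the Pod's namespaces and a volume,
# so the sidecar can tail the logs nginx writes.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar
spec:
  containers:
  - name: nginx
    image: harbor.od.com/public/nginx:v1.7.9
    volumeMounts:
    - {name: logs, mountPath: /var/log/nginx}
  - name: log-tail                 # the sidecar
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - {name: logs, mountPath: /var/log/nginx}
  volumes:
  - name: logs
    emptyDir: {}
```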
●Name/Namespace
●Name ●Internally, K8S uses "resources" to define every logical concept (feature), so every kind of resource has its own "name" ●A resource carries configuration such as an API version (apiVersion), kind, metadata, spec and status; the name is usually defined in the resource's metadata ●Namespace ●As projects, people and cluster size grow, a way to isolate the various resources inside K8S is needed: that is the namespace ●A namespace can be understood as a virtual cluster inside K8S ●Resources in different namespaces may share a name; two resources of the same kind in the same namespace cannot ●Used well, namespaces let cluster administrators better categorize, manage and browse the services delivered into K8S ●Namespaces that exist by default in K8S: default, kube-system, kube-public ●Queries for a specific resource in K8S must include its namespace
●Label / Label selector
●Label
●Labels are K8S's signature management mechanism, convenient for classifying resource objects.
●One label can be attached to many resources, and one resource can carry many labels: a many-to-many relationship.
●A resource with several labels can be managed along different dimensions.
●A label is a key=value pair.
●Similar to labels, there are also annotations.
Label selector
●Once resources are labeled, a label selector can filter on specific labels
●There are currently two kinds of selector: equality-based (equals / not equals) and set-based (in / not in / exists)
●Many resources support embedded label-selector fields:
●matchLabels
●matchExpressions
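A sketch of both selector styles side by side, assuming hypothetical `app` and `env` labels; this fragment would sit under e.g. a Deployment's spec:

```yaml
# Hypothetical fragment: equality-based and set-based selectors together.
selector:
  matchLabels:            # equality-based: app == nginx-dp
    app: nginx-dp
  matchExpressions:       # set-based: env is one of {dev, test}
  - key: env
    operator: In
    values: ["dev", "test"]
```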
●Service/Ingress
●Service
●In the K8S world, every Pod is assigned its own IP address, but that
address disappears when the Pod is destroyed
●Service is the core concept that solves this problem
●A Service can be seen as the external access point of a group of Pods providing the same service
●Which Pods a Service targets is defined by a label selector
●Ingress
●Ingress is the layer-7 (in the OSI reference model) entry point that a
K8S cluster exposes to the outside
●A Service can only schedule layer-4 traffic, expressed as ip+port
●Ingress can route traffic by business domain and URL path
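A minimal sketch tying the two together; the `nginx-dp` names and the demo.od.com host are hypothetical (the od.com business zone is set up later in these notes), and the Ingress uses the extensions/v1beta1 API current in the K8S 1.15 release deployed here:

```yaml
# Hypothetical Service: layer-4, selects Pods by label, exposed as ip+port.
apiVersion: v1
kind: Service
metadata:
  name: nginx-dp
spec:
  selector:
    app: nginx-dp
  ports:
  - port: 80
    targetPort: 80
---
# Hypothetical Ingress: layer-7, routes by host and URL path to the Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-dp
spec:
  rules:
  - host: demo.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-dp
          servicePort: 80
```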
Components
●Core components
Configuration store → the etcd service (the cluster elects a new leader if the current one dies)
●Master (control-plane) node
kube-apiserver service — the cluster's brain
●apiserver
●Provides the REST API for cluster
management (including authentication/authorization,
data validation and cluster state changes)
●Handles data exchange between the
other modules, acting as the communication hub
●Entry point for resource quota control
●Provides the cluster's complete security machinery
kube-controller-manager service
●controller-manager
●Composed of a series of controllers; watches
the state of the whole cluster through the
apiserver and keeps the cluster in its
desired working state
●Node Controller
●Deployment Controller (pod controller)
●Service Controller
●Volume Controller
●Endpoint Controller
●Garbage Collector
●Namespace Controller
●Job Controller
●Resource Quota Controller
kube-scheduler service
●scheduler
●Its main job is to place newly created pods
onto suitable worker nodes
●Predicate policies (predicates)
●Priority policies (priorities)
●Worker (node) nodes
kubelet service
●kubelet
●Simply put, kubelet's main job is to
periodically fetch the desired state of the
pods on its node (which containers to run,
how many replicas, how networking and
storage are configured, and so on), then
call the container-runtime interfaces to
reach that state
●It periodically reports the node's current
state to the apiserver, for use at
scheduling time
●It also cleans up images and containers,
making sure images do not fill the node's
disk and exited containers do not hold on
to too many resources
●kube-proxy service
●kube-proxy
●The network proxy K8S runs on every
node; the carrier of Service resources
●Establishes the mapping between the pod
network and the cluster network (clusterip → podip)
●Three common traffic-scheduling modes
●Userspace (abandoned)
●Iptables (nearly abandoned)
●IPVS (recommended)
●Responsible for creating, deleting and
updating scheduling rules, notifying the
apiserver of its own updates, and fetching
other kube-proxies' rule changes from the
apiserver to update itself
●CLI client → kubectl
Core add-ons:
●CNI network plugin → flannel/calico
●Service-discovery plugin → coredns
●Service-exposure plugin → traefik
●GUI management plugin → Dashboard
The three K8S networks
service ip
pod ip
node ip
10.4.7.0 — 10: private IDC address space; 4: identifies the data center (Yizhuang Tongji / Century Internet vs. the Jiuxianqiao building); 7: distinguishes business line and environment (physically isolated via VLANs)
Logical architecture
Common ways to install and deploy K8S:
●Minikube: single-node micro K8S (for learning and preview only) ●Binary installation (first choice for production, recommended for beginners) ●kubeadm: K8S's own deployment tool, which itself runs inside K8S (relatively simple, recommended for experienced users)
Preparation:
●Prepare five 2c/2g/50g VMs on the 10.4.7.0/24 network ●Pre-install CentOS 7.6 and apply the usual base tuning ●Install and deploy bind9 as a self-hosted DNS ●Prepare the self-signed certificate environment ●Install Docker and deploy a Harbor private registry
Virtual network editor
VMnet8 subnet IP: 10.4.7.0, subnet mask: 255.255.255.0
NAT settings → gateway 10.4.7.254
Windows network connections: VMnet8 IPv4 10.4.7.1, metric 10; DNS is configured later so lookups prefer this adapter
Create the hosts
10.4.7.11 hostname: hdss7-11.host.com CPU: 2 cores, RAM: 2048 MB
10.4.7.12 hostname: hdss7-12.host.com
10.4.7.21 hostname: hdss7-21.host.com
10.4.7.22 hostname: hdss7-22.host.com
10.4.7.200 hostname: hdss7-200.host.com #can be given a distinctive terminal theme (black text on white)
Hostnames
hdss7-11.host.com — hdss is short for 匯德商廈 (the office building); the convention is location plus the last two octets of the IP
Keep hostnames unrelated to the business: encode only the location and IP, so the machine can run something else next time; avoid names like mysql01
Disable SELinux and firewalld
[root@hdss7-11 ~]# getenforce
Disabled
## setenforce 0
[root@hdss7-11 ~]# uname -a
Linux hdss7-11.host.com 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
## systemctl stop firewalld
Install epel-release
yum install epel-release
Install the necessary tools
# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
Initialize the DNS service
Containers need host records, so a self-hosted bind DNS is used to give all containers one set of DNS records to follow
Install bind9 on HDSS7-11.host.com
yum install bind -y
[root@hdss7-11 ~]# rpm -qa bind
bind-9.11.4-16.P2.el7_8.6.x86_64
Configure bind9
Note that bind's configuration format is strict: every space and every semicolon must be exactly where it belongs
Main configuration file
[root@hdss7-11 ~]# vi /etc/named.conf
options {
listen-on port 53 { 10.4.7.11; };
recursing-file "/var/named/data/named.recursing";
secroots-file "/var/named/data/named.secroots";
allow-query { any; };
forwarders { 10.4.7.254; };
recursion yes; ##recursive queries
dnssec-enable no; #can be turned off in a lab DNS to save resources
dnssec-validation no; #also turned off
Save, exit, and check the syntax
[root@hdss7-11 ~]# named-checkconf
No output means no errors: OK
Zone configuration file
/etc/named.rfc1912.zones
##host zone
zone "host.com" IN { ##the host zone is internal-only, so the name is arbitrary; host.com or opi.com are common choices with no real-world meaning
type master;
file "host.com.zone";
allow-update { 10.4.7.11; };
};
##business zone
zone "od.com" IN {
type master;
file "od.com.zone";
allow-update { 10.4.7.11; };
};
Configure the zone data files
- Host zone data file
/var/named/host.com.zone
$ORIGIN host.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.host.com. dnsadmin.host.com. (
2020061301 ; serial #needs 10 digits; 01 marks the first change of the day
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
HDSS7-11 A 10.4.7.11
HDSS7-12 A 10.4.7.12
HDSS7-21 A 10.4.7.21
HDSS7-22 A 10.4.7.22
HDSS7-200 A 10.4.7.200
- Business zone data file
/var/named/od.com.zone
$ORIGIN od.com.
$TTL 600 ; 10 minutes
@ IN SOA dns.od.com. dnsadmin.od.com. (
2020061301 ; serial
10800 ; refresh (3 hours)
900 ; retry (15 minutes)
604800 ; expire (1 week)
86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A 10.4.7.11
#check with named-checkconf
systemctl start named
netstat -lntup |grep 53
[root@hdss7-11 ~]# netstat -lntup |grep 53
tcp 0 0 10.4.7.11:53 0.0.0.0:* LISTEN 7646/named
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 7646/named
tcp6 0 0 ::1:953 :::* LISTEN 7646/named
udp 0 0 10.4.7.11:53 0.0.0.0:* 7646/named
[root@hdss7-11 ~]# dig -t A hdss7-21.host.com @10.4.7.11 +short
10.4.7.21
Point the resolver at the new DNS
[root@hdss7-11 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DNS1=10.4.7.11
[root@hdss7-11 ~]# systemctl restart network
On the other nodes:
[root@hdss7-12 ~]# vi /etc/resolv.conf #normally restarting the network adds the search line automatically
search host.com
Windows: set the VMnet8 IPv4 DNS to 10.4.7.11 ##Windows also needs the search domain to open the pages in a browser
Prepare the certificate-signing environment
On the ops host HDSS7-200.host.com:
Install CFSSL
●Certificate-signing toolset CFSSL, release R1.2: download cfssl, cfssljson and cfssl-certinfo
On HDSS7-200.host.com:
~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
~]# chmod +x /usr/bin/cfssl*
Create the JSON config for the CA certificate signing request (csr)
/opt/certs/ca-csr.json
{
"CN": "OldboyEdu",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
],
"ca": {
"expiry": "175200h"
}
}
CN: Common Name. The browser uses this field to validate whether a site is legitimate; it is usually the domain name. Very important. C: Country ST: State or province L: Locality, i.e. city O: Organization Name, i.e. company OU: Organizational Unit Name, i.e. department
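As a hedged illustration of how these subject fields end up inside a certificate, one can generate a throwaway self-signed certificate with openssl and read the subject back (this is not part of the original procedure, which uses CFSSL; the temp files are purely illustrative):

```shell
# Illustration only: create a throwaway self-signed cert carrying the same
# subject fields (C/ST/L/O/OU/CN) used in ca-csr.json, then print its subject.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" -days 1 \
  -subj "/C=CN/ST=beijing/L=beijing/O=od/OU=ops/CN=OldboyEdu" 2>/dev/null
subject=$(openssl x509 -in "$crt" -noout -subject)
echo "$subject"
rm -f "$key" "$crt"
```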
Generate the CA certificate and private key
[root@hdss7-200 certs]# cfssl gencert -initca ca-csr.json #output on its own is not usable
[root@hdss7-200 certs]# cfssl gencert -initca ca-csr.json |cfssl-json -bare ca #pipe through cfssl-json to write the certificate files
Deploy the Docker environment
On HDSS7-200.host.com, HDSS7-21.host.com and HDSS7-22.host.com:
Install
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
Configure
[root@hdss7-21 ~]# mkdir /etc/docker /data/docker -p
[root@hdss7-21 ~]# vi /etc/docker/daemon.json
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
"bip": "172.7.21.1/24",
"exec-opts": ["native.cgroupdriver=systemd"],
"live-restore": true
}
[root@hdss7-21 ~]# systemctl start docker
Repeat on 22 and 200; note the bip is 172.7.x.1/24, where x matches the host (22, 200)
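The bip convention above (172.7.&lt;host last octet&gt;.1/24) can be sketched as a small helper; `node_ip` is a hypothetical input:

```shell
# Derive the docker bip from the node's IP, following the convention that
# the third octet of the container network equals the node's last octet.
node_ip=10.4.7.21            # hypothetical: this node's address
octet=${node_ip##*.}         # strip everything up to the last dot -> 21
bip="172.7.${octet}.1/24"
echo "$bip"                  # → 172.7.21.1/24
```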
Deploy the Harbor private image registry
On HDSS7-200.host.com:
Download the offline package and unpack it
Harbor's official GitHub:
https://github.com/goharbor/harbor #versions before 1.7.5 had a vulnerability disclosed in November 2019; use harbor-offline 1.8.x or later
Harbor download link
src]# tar xf harbor-offline-installer-v1.8.5.tgz -C /opt/
opt]# mv harbor/ harbor-v1.8.5
opt]# ln -s /opt/harbor-v1.8.5/ /opt/harbor
Edit the configuration file
vi /opt/harbor/harbor.yml
hostname: harbor.od.com
http:
  port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs
mkdir -p /data/harbor/logs
Install docker-compose
yum install docker-compose -y
Install Harbor
# Harbor is itself a set of single-host orchestrated containers, so it depends on docker-compose
sh /opt/harbor/install.sh
Check that Harbor started
docker-compose ps
Install and configure nginx
yum install nginx -y
vi /etc/nginx/conf.d/harbor.od.com.conf
server {
listen 80;
server_name harbor.od.com;
client_max_body_size 1000m;
location / {
proxy_pass http://127.0.0.1:180;
}
}
nginx -t
systemctl start nginx
systemctl enable nginx
Configure internal DNS resolution for Harbor
- Configure
On HDSS7-11:
~]# vi /var/named/od.com.zone
2020061302 ; serial
harbor A 10.4.7.200
Remember to roll the serial forward by one #when running bind by hand, the serial must be bumped +1 on every change
~]# systemctl restart named
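The manual serial bump can be sketched with awk; the temp file below is a stand-in for /var/named/od.com.zone:

```shell
# Hypothetical sketch: increment the serial on the line tagged "; serial",
# the manual step required every time a bind zone file changes.
zone=$(mktemp)
cat > "$zone" <<'EOF'
2020061301 ; serial
10800 ; refresh (3 hours)
EOF
awk '/; serial/ { $1 = $1 + 1 } { print }' "$zone" > "$zone.new"
head -1 "$zone.new"          # → 2020061302 ; serial
```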
- Check
~]# dig -t A harbor.od.com +short
10.4.7.200 #-t specifies the record type to query; +short prints a terse answer
Open it in a browser
http://harbor.od.com
Pull nginx and push it to Harbor
docker pull nginx:1.7.9 #public images carry no v prefix
<==> docker pull docker.io/library/nginx:1.7.9
docker tag 84581e99d807 harbor.od.com/public/nginx:v1.7.9
docker login harbor.od.com
Username: admin
Password: 123456
docker push harbor.od.com/public/nginx:v1.7.9
Deploy the master node services
Deploy the etcd cluster
Create the config file based on the root CA (certificate profiles for client/server communication)
On HDSS7-200:
cd /opt/certs
vi /opt/certs/ca-config.json
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"server": { #服務端啟動需要證書
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"client": { #客戶端找服務端通訊需要證書
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"peer": { ##對端通訊兩邊都需要證書
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
Certificate types — client certificate: used by clients so the server can authenticate them, e.g. etcdctl, etcd proxy, fleetctl, the docker client. server certificate: used by servers; clients verify the server's identity against it, e.g. the docker daemon, kube-apiserver. peer certificate: a dual-purpose certificate for communication between etcd cluster members
Create the JSON config for the self-signed certificate signing request (csr)
On the ops host HDSS7-200.host.com:
vi /opt/certs/etcd-peer-csr.json
{
"CN": "k8s-etcd",
"hosts": [
"10.4.7.11",
"10.4.7.12",
"10.4.7.21",
"10.4.7.22"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the etcd certificate and private key
[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json |cfssl-json -bare etcd-peer
2020/06/14 09:31:57 [INFO] generate received request
2020/06/14 09:31:57 [INFO] received CSR
2020/06/14 09:31:57 [INFO] generating key: rsa-2048
2020/06/14 09:31:58 [INFO] encoded CSR
2020/06/14 09:31:58 [INFO] signed certificate with serial number 558323261183529745818213423343421240701155084378
2020/06/14 09:31:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
cd /opt/certs
[root@hdss7-200 certs]# ls -l |grep etcd
-rw-r--r-- 1 root root 1062 Jun 14 09:31 etcd-peer.csr
-rw-r--r-- 1 root root 363 Jun 14 09:02 etcd-peer-csr.json
-rw------- 1 root root 1679 Jun 14 09:31 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jun 14 09:31 etcd-peer.pem
Create the etcd user
On HDSS7-12.host.com:
useradd -s /sbin/nologin -M etcd
Download, unpack, and symlink
etcd downloads: https://github.com/etcd-io/etcd/tags #the stable 3.1 series is recommended
On HDSS7-12.host.com:
[root@hdss7-12 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
[root@hdss7-12 src]# tar xfv etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
[root@hdss7-12 opt]# mv etcd-v3.1.20-linux-amd64 etcd-v3.1.20
[root@hdss7-12 opt]# ln -s etcd-v3.1.20 etcd
[root@hdss7-12 opt]# ll
total 0
lrwxrwxrwx 1 root root 12 Jun 14 10:19 etcd -> etcd-v3.1.20
drwxr-xr-x 3 478493 89939 123 Oct 11 2018 etcd-v3.1.20
drwxr-xr-x 2 root root 45 Jun 14 10:03 src
Create directories and copy the certificate and private key
- Create directories:
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
- Copy the certificates
Copy ca.pem, etcd-peer-key.pem and etcd-peer.pem generated on the ops host into /opt/etcd/certs; note the private key file must be mode 600
[root@hdss7-12 certs]# ll
total 12
-rw-r--r-- 1 root root 1346 Jun 14 10:28 ca.pem
-rw------- 1 root root 1679 Jun 14 10:29 etcd-peer-key.pem
-rw-r--r-- 1 root root 1428 Jun 14 10:28 etcd-peer.pem
- Create the etcd startup script
On HDSS7-12.host.com:
/opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-7-12 \
--data-dir /data/etcd/etcd-server \
--listen-peer-urls https://10.4.7.12:2380 \
--listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--quota-backend-bytes 8000000000 \
--initial-advertise-peer-urls https://10.4.7.12:2380 \
--advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
--initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
--ca-file ./certs/ca.pem \
--cert-file ./certs/etcd-peer.pem \
--key-file ./certs/etcd-peer-key.pem \
--client-cert-auth \
--trusted-ca-file ./certs/ca.pem \
--peer-ca-file ./certs/ca.pem \
--peer-cert-file ./certs/etcd-peer.pem \
--peer-key-file ./certs/etcd-peer-key.pem \
--peer-client-cert-auth \
--peer-trusted-ca-file ./certs/ca.pem \
--log-output stdout
Adjust permissions
[root@hdss7-12 etcd]# chmod +x etcd-server-startup.sh
[root@hdss7-21 certs]# useradd -s /sbin/nologin -M etcd
[root@hdss7-12 etcd]# chown -R etcd.etcd /opt/etcd-v3.1.20
[root@hdss7-12 opt]# chown -R etcd.etcd /data/etcd/
[root@hdss7-12 opt]# chown -R etcd.etcd /data/logs/etcd-server/
Install supervisor
yum install supervisor -y #manages background processes
systemctl start supervisord
systemctl enable supervisord
Create the etcd-server startup configuration
On HDSS7-12.host.com:
/etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/etcd ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=etcd ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start and check
supervisorctl update #reloads the configuration after it has been modified
etcd-server-7-12: added process group
[root@hdss7-12 etcd]# supervisorctl status #show the status of all jobs
etcd-server-7-12 RUNNING pid 22656, uptime 0:00:35
root@hdss7-12 opt]# tail -fn 200 /data/logs/etcd-server/etcd.stdout.log
[root@hdss7-12 etcd]# netstat -luntp|grep etcd
tcp 0 0 10.4.7.12:2379 0.0.0.0:* LISTEN 22657/./etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 22657/./etcd
tcp 0 0 10.4.7.12:2380 0.0.0.0:* LISTEN 22657/./etcd
The other two etcd nodes are set up the same way
HDSS7-21.host.com HDSS7-22.host.com
/opt/etcd/etcd-server-startup.sh ##remember to change the IPs
/etc/supervisord.d/etcd-server.ini #remember to change the IPs
With all three etcd nodes up, check the cluster health
##can be run from any member
[root@hdss7-21 etcd]# ./etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
[root@hdss7-21 etcd]# ./etcd member list
2020-06-14 12:05:54.774522 E | etcdmain: error verifying flags, 'member' is not a valid flag. See 'etcd --help'.
[root@hdss7-21 etcd]# ./etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
Deploy the kube-apiserver cluster
Cluster plan
10.4.7.11 and 10.4.7.12 run nginx as a layer-4 load balancer, with keepalived holding the VIP 10.4.7.10 in front of the two kube-apiservers for high availability
Taking hdss7-21.host.com as the example
Download, unpack, and symlink
On hdss7-21.host.com:
/opt/src
rz kubernetes-server-linux-amd64-v1.15.2.tar.gz
[root@hdss7-21 src]# tar xf kubernetes-server-linux-amd64-v1.15.2.tar.gz -C /opt/
[root@hdss7-21 opt]# mv kubernetes /opt/kubernetes-v.1.15.2
[root@hdss7-21 opt]# ln -s kubernetes-v.1.15.2/ /opt/kubernetes
[root@hdss7-21 opt]# ll
total 0
drwx--x--x 4 root root 28 Jun 13 16:15 containerd
lrwxrwxrwx 1 root root 13 Jun 14 11:16 etcd -> etcd-v3.1.20/
drwxr-xr-x 4 etcd etcd 166 Jun 14 12:00 etcd-v3.1.20
lrwxrwxrwx 1 root root 20 Jun 26 16:43 kubernetes -> kubernetes-v.1.15.2/
drwxr-xr-x 4 root root 79 Aug 5 2019 kubernetes-v.1.15.2
drwxr-xr-x 2 root root 97 Jun 26 16:35 src
Sign the client certificate
On the ops host HDSS7-200.host.com:
Create the JSON config for the certificate signing request (csr) — the certificate apiserver uses to reach etcd
# vi client-csr.json
{
"CN": "k8s-node",
"hosts": [
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client
Check the generated certificate and private key
ls -l |grep client
-rw-r--r-- 1 root root 993 Jun 26 17:22 client.csr
-rw-r--r-- 1 root root 281 Jun 26 17:16 client-csr.json
-rw------- 1 root root 1679 Jun 26 17:22 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 26 17:22 client.pem
Sign the apiserver certificate
On the ops host HDSS7-200.host.com:
Create the JSON config for the certificate signing request (csr) — the certificate apiserver needs to start
# vi apiserver-csr.json
{
"CN": "k8s-apiserver",
"hosts": [
"127.0.0.1",
"192.168.0.1",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver
Check the generated certificate and key
ls -l |grep apiserver
-rw-r--r-- 1 root root 1249 Jun 26 17:38 apiserver.csr
-rw-r--r-- 1 root root 566 Jun 26 17:35 apiserver-csr.json
-rw------- 1 root root 1675 Jun 26 17:38 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 26 17:38 apiserver.pem
Copy the certificates to each worker node and create the configuration
On HDSS7-21.host.com:
Copy the certificate and private key; note the private key file must be mode 600
cd /opt/kubernetes/server/bin
mkdir cert
[root@hdss7-21 cert]# ll
total 24
-rw------- 1 root root 1675 Jun 26 17:53 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 26 17:53 apiserver.pem
-rw------- 1 root root 1679 Jun 26 17:51 ca-key.pem
-rw-r--r-- 1 root root 1346 Jun 26 17:48 ca.pem
-rw------- 1 root root 1679 Jun 26 17:52 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 26 17:52 client.pem
Create the configuration (K8S audit log)
[root@hdss7-21 conf]# vi audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Log pod changes at RequestResponse level
- level: RequestResponse
resources:
- group: ""
# Resource "pods" doesn't match requests to any subresource of pods,
# which is consistent with the RBAC policy.
resources: ["pods"]
# Log "pods/log", "pods/status" at Metadata level
- level: Metadata
resources:
- group: ""
resources: ["pods/log", "pods/status"]
# Don't log requests to a configmap called "controller-leader"
- level: None
resources:
- group: ""
resources: ["configmaps"]
resourceNames: ["controller-leader"]
# Don't log watch requests by the "system:kube-proxy" on endpoints or services
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core API group
resources: ["endpoints", "services"]
# Don't log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"]
nonResourceURLs:
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the Request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata
# Long-running requests like watches that fall under this rule will not
# generate an audit event in RequestReceived.
omitStages:
- "RequestReceived"
Create the startup script
On HDSS7-21.host.com:
/opt/kubernetes/server/bin/kube-apiserver.sh
#!/bin/bash
./kube-apiserver \
--apiserver-count 2 \
--audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
--audit-policy-file ./conf/audit.yaml \
--authorization-mode RBAC \
--client-ca-file ./cert/ca.pem \
--requestheader-client-ca-file ./cert/ca.pem \
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
--etcd-cafile ./cert/ca.pem \
--etcd-certfile ./cert/client.pem \
--etcd-keyfile ./cert/client-key.pem \
--etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
--service-account-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--service-node-port-range 3000-29999 \
--target-ram-mb=1024 \
--kubelet-client-certificate ./cert/client.pem \
--kubelet-client-key ./cert/client-key.pem \
--log-dir /data/logs/kubernetes/kube-apiserver \
--tls-cert-file ./cert/apiserver.pem \
--tls-private-key-file ./cert/apiserver-key.pem \
--v 2
Adjust permissions and directories
On HDSS7-21.host.com:
/opt/kubernetes/server/bin
chmod +x kube-apiserver.sh
mkdir -p /data/logs/kubernetes/kube-apiserver #startup fails without this directory
Create the supervisor configuration
Keeps the apiserver process supervised so it recovers from abnormal exits
On HDSS7-21.host.com:
# vi /etc/supervisord.d/kube-apiserver.ini
[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
supervisorctl update ##start it
kube-apiserver-7-21: added process group
HDSS7-22.host.com is done the same way
Check the status on both nodes
[root@hdss7-21 bin]# supervisorctl status
etcd-server-7-21 RUNNING pid 6491, uptime 8:13:37
kube-apiserver-7-21 RUNNING pid 7440, uptime 0:32:08
[root@hdss7-22 bin]# supervisorctl update
kube-apiserver-7-21: added process group
[root@hdss7-22 bin]# supervisorctl status
etcd-server-7-22 RUNNING pid 6515, uptime 8:39:19
kube-apiserver-7-21 RUNNING pid 7377, uptime 0:02:51
apiserver runs as root
etcd runs as an unprivileged user
##check the listening ports
[root@hdss7-22 bin]# netstat -lntup |grep kube-api
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 7378/./kube-apiserv
tcp6 0 0 :::6443 :::* LISTEN 7378/./kube-apiserv
Configure the layer-4 reverse proxy
On HDSS7-11.host.com and HDSS7-12.host.com:
Install nginx
yum install nginx -y
nginx configuration
/etc/nginx/nginx.conf #append the layer-4 (stream) block at the end of the file
stream {
upstream kube-apiserver {
server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
}
server {
listen 7443;
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass kube-apiserver;
}
}
Install keepalived
yum install keepalived -y
- Port-monitoring script for VIP failover
/etc/keepalived/check_port.sh
#!/bin/bash
#keepalived port-monitoring script
#Usage:
#in the keepalived configuration file:
#vrrp_script check_port { #define a vrrp_script check
# script "/etc/keepalived/check_port.sh 6379" #port to monitor
# interval 2 #check interval, in seconds
#}
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
if [ $PORT_PROCESS -eq 0 ];then
echo "Port $CHK_PORT Is Not Used,End."
exit 1
fi
else
echo "Check Port Cant Be Empty!"
fi
# chmod +x /etc/keepalived/check_port.sh
keepalived configuration
keepalived master:
! Configuration File for keepalived
global_defs {
router_id 10.4.7.11
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 251
priority 100
advert_int 1
mcast_src_ip 10.4.7.11
nopreempt ##non-preemptive: restarting nginx will not pull the VIP back to the master; in production a VIP must not bounce around, and an unplanned VIP move counts as a major incident
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
keepalived backup:
! Configuration File for keepalived
global_defs {
router_id 10.4.7.12
}
vrrp_script chk_nginx {
script "/etc/keepalived/check_port.sh 7443"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 251
mcast_src_ip 10.4.7.12
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 11111111
}
track_script {
chk_nginx
}
virtual_ipaddress {
10.4.7.10
}
}
##systemctl start keepalived.service
##systemctl enable keepalived.service
Deploy controller-manager
Cluster plan:
主機名 | 角色 | ip |
---|---|---|
HDSS7-21.host.com | controller-manager | 10.4.7.21 |
HDSS7-22.host.com | controller-manager | 10.4.7.22 |
Note: HDSS7-21.host.com is shown as the example; the other host (22) is the same and can be done in parallel
Create the startup script
On HDSS7-21.host.com:
/opt/kubernetes/server/bin/kube-controller-manager.sh
#!/bin/sh
./kube-controller-manager \
--cluster-cidr 172.7.0.0/16 \
--leader-elect true \
--log-dir /data/logs/kubernetes/kube-controller-manager \
--master http://127.0.0.1:8080 \
--service-account-private-key-file ./cert/ca-key.pem \
--service-cluster-ip-range 192.168.0.0/16 \
--root-ca-file ./cert/ca.pem \
--v 2
Adjust file permissions and create directories
On HDSS7-21.host.com:
chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh
mkdir -p /data/logs/kubernetes/kube-controller-manager
Create the supervisor configuration
On HDSS7-21.host.com:
/etc/supervisord.d/kube-conntroller-manager.ini
[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check
On HDSS7-21.host.com:
[root@hdss7-21 ~]# supervisorctl update
kube-controller-manager-7-21: added process group
[root@hdss7-21 ~]# supervisorctl status
etcd-server-7-21 RUNNING pid 6491, uptime 11:32:58
kube-apiserver-7-21 RUNNING pid 7440, uptime 3:51:28
kube-controller-manager-7-21 RUNNING pid 7680, uptime 0:01:11
Install the kube-controller-manager service on HDSS7-22.host.com the same way, then start and check it
Deploy kube-scheduler
Cluster plan
主機名 | 角色 | ip |
---|---|---|
HDSS7-21.host.com | kube-scheduler | 10.4.7.21 |
HDSS7-22.host.com | kube-scheduler | 10.4.7.22 |
Note: HDSS7-21.host.com is shown as the example; the other host (22) is the same and can be done in parallel
Create the startup script
On HDSS7-21.host.com:
/opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
--leader-elect \
--log-dir /data/logs/kubernetes/kube-scheduler \
--master http://127.0.0.1:8080 \
--v 2
Adjust file permissions and create directories
On HDSS7-22.host.com:
chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh
mkdir -p /data/logs/kubernetes/kube-scheduler
Create the supervisor configuration
On HDSS7-21.host.com:
/etc/supervisord.d/kube-scheduler.ini
[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; retstart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stderr log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check
On HDSS7-21.host.com:
[root@hdss7-21 ~]# supervisorctl update
kube-scheduler-7-21: added process group
[root@hdss7-21 ~]# supervisorctl status
etcd-server-7-21 RUNNING pid 6491, uptime 11:49:56
kube-apiserver-7-21 RUNNING pid 7440, uptime 4:08:26
kube-controller-manager-7-21 RUNNING pid 7680, uptime 0:18:09
kube-scheduler-7-21 RUNNING pid 7707, uptime 0:00:31
Check the cluster status
[root@hdss7-21 ~]# ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl ##symlink the kubectl command; it is needed to check cluster status
[root@hdss7-21 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Deploy the node services
Deploy kubelet
Cluster plan
主機名 | 角色 | ip |
---|---|---|
HDSS7-21.host.com | kubelet | 10.4.7.21 |
HDSS7-22.host.com | kubelet | 10.4.7.22 |
Note: this deployment document uses HDSS7-21.host.com as the example; the other worker node is the same
Sign the kubelet certificate
On the ops host HDSS7-200.host.com:
Create the JSON config for the certificate signing request (csr)
# vi kubelet-csr.json
{
"CN": "k8s-kubelet",
"hosts": [
"127.0.0.1",
"10.4.7.10",
"10.4.7.21",
"10.4.7.22",
"10.4.7.23",
"10.4.7.24",
"10.4.7.25",
"10.4.7.26",
"10.4.7.27",
"10.4.7.28"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Generate the kubelet certificate and private key
/opt/certs
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
2020/06/27 11:02:34 [INFO] generate received request
2020/06/27 11:02:34 [INFO] received CSR
2020/06/27 11:02:34 [INFO] generating key: rsa-2048
2020/06/27 11:02:34 [INFO] encoded CSR
2020/06/27 11:02:34 [INFO] signed certificate with serial number 71509884340440940923668995138673838436096221898
2020/06/27 11:02:34 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the generated certificate and private key
/opt/certs
[root@hdss7-200 certs]# ls -l |grep kubelet
-rw-r--r-- 1 root root 1115 Jun 27 11:02 kubelet.csr
-rw-r--r-- 1 root root 452 Jun 27 10:59 kubelet-csr.json
-rw------- 1 root root 1679 Jun 27 11:02 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jun 27 11:02 kubelet.pem
Copy the certificate to each compute node and create the config
On HDSS7-21.host.com (distribute the certs to both 7-21 and 7-22):
Copy the certificate and private key; note the private key files must be mode 600
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kubelet.pem .
[root@hdss7-21 cert]# scp hdss7-200:/opt/certs/kubelet-key.pem .
[root@hdss7-21 cert]# ll
total 32
-rw------- 1 root root 1675 Jun 26 17:53 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 26 17:53 apiserver.pem
-rw------- 1 root root 1679 Jun 26 17:51 ca-key.pem
-rw-r--r-- 1 root root 1346 Jun 26 17:48 ca.pem
-rw------- 1 root root 1679 Jun 26 17:52 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 26 17:52 client.pem
-rw------- 1 root root 1679 Jun 27 11:17 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jun 27 11:17 kubelet.pem
Create the config
set-cluster
Note: run in the conf directory
/opt/kubernetes/server/conf
conf]#kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
set-credentials
Note: run in the conf directory
/opt/kubernetes/server/conf
conf]# kubectl config set-credentials k8s-node \
--client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
--client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
--embed-certs=true \
--kubeconfig=kubelet.kubeconfig
User "k8s-node" set.
set-context
Note: run in the conf directory
/opt/kubernetes/server/conf
conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.
use-context — switch the context to the k8s node
Note: run in the conf directory
/opt/kubernetes/server/conf
conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
**k8s-node.yaml** — a role binding that grants the k8s-node user compute-node privileges in the cluster
Create the resource configuration file
conf]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
Create the cluster permission resource
[root@hdss7-21 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
Check
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   72s
On HDSS7-22.host.com, just copy the file over instead of re-running the set commands — it is already on disk:
[root@hdss7-22 ~]# cd /opt/kubernetes/server/bin/conf/
[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
Prepare the pause base image
On the ops host HDSS7-200.host.com:
Download the image, tag it, and push it to the harbor registry
On HDSS7-200:
# docker login harbor.od.com    (admin / 123456; if login fails, docker and harbor may not be running:)
systemctl start docker
docker-compose up -d
# docker pull kubernetes/pause
# docker tag f9d5de079539 harbor.od.com/public/pause:latest
# docker push harbor.od.com/public/pause:latest
Role of pause: full name "infrastructure container" (a.k.a. infra), the base container.
kubelet is started with this image specified; before any business container starts, the pause container first initializes the network, IPC, and UTS namespaces for it.
The pause container in kubernetes provides each business container with:
PID namespace: different applications in a Pod can see each other's process IDs.
Network namespace: multiple containers in a Pod share the same IP and port range.
IPC namespace: multiple containers in a Pod can communicate via SystemV IPC or POSIX message queues.
UTS namespace: multiple containers in a Pod share one hostname.
Volumes (shared storage): containers in a Pod can access Volumes defined at the Pod level.
Source: https://www.jianshu.com/p/bff9cf543ca4 (简书, author 程式設計師同行者)
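The namespace sharing described above is what enables the SideCar pattern: containers hanging off the same pause container can reach each other over 127.0.0.1. A minimal illustrative pod manifest — the pod/container names and the busybox image are hypothetical, not part of this deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo            # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: harbor.od.com/public/nginx:v1.7.9
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox              # assumption: any image with a shell and wget
    # reaches nginx over localhost because both containers share the
    # pause container's network namespace
    command: ["sh", "-c", "sleep 5; wget -qO- http://127.0.0.1; sleep 3600"]
```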
Create the kubelet startup script
On HDSS7-21.host.com (can be done on 22 in parallel):
/opt/kubernetes/server/bin/kubelet.sh
#!/bin/sh
./kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 192.168.0.2 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on="false" \
--client-ca-file ./cert/ca.pem \
--tls-cert-file ./cert/kubelet.pem \
--tls-private-key-file ./cert/kubelet-key.pem \
--hostname-override hdss7-21.host.com \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig ./conf/kubelet.kubeconfig \
--log-dir /data/logs/kubernetes/kube-kubelet \
--pod-infra-container-image harbor.od.com/public/pause:latest \
--root-dir /data/kubelet
Note: the kubelet startup script differs slightly on each cluster host (e.g. --hostname-override); adjust it when deploying other nodes.
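For example, when copying the script to hdss7-22, the hostname override must be changed. A self-contained sketch of that one-line edit, run against a stub file rather than the real script (assumption: only --hostname-override differs between the two nodes' scripts):

```shell
# Demonstrate the per-host edit on a stub copy of the relevant line.
cat > /tmp/kubelet-stub.sh <<'EOF'
--hostname-override hdss7-21.host.com \
EOF
sed -i 's/hdss7-21\.host\.com/hdss7-22.host.com/' /tmp/kubelet-stub.sh
cat /tmp/kubelet-stub.sh
```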
Check config and permissions, create the log directories
On HDSS7-21.host.com (same on 22):
[root@hdss7-21 conf]# pwd
/opt/kubernetes/server/bin/conf
# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet
# chmod +x /opt/kubernetes/server/bin/kubelet.sh
Create the supervisor config
On HDSS7-21.host.com:
/etc/supervisord.d/kube-kubelet.ini
[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start and check node status
[root@hdss7-21 ~]# supervisorctl update
kube-kubelet-7-21: added process group
[root@hdss7-21 conf]# supervisorctl status
etcd-server-7-21 RUNNING pid 6259, uptime 5:45:15
kube-apiserver-7-21 RUNNING pid 6261, uptime 5:45:15
kube-controller-manager-7-21 RUNNING pid 6257, uptime 5:45:15
kube-kubelet-7-21 RUNNING pid 18626, uptime 0:58:07
kube-scheduler-7-21 RUNNING pid 6258, uptime 5:45:15
## check the log for errors: tail -n 200 /data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
Check cluster status
[root@hdss7-21 cert]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready <none> 78m v1.15.2
hdss7-22.host.com Ready <none> 2m38s v1.15.2
label node
[root@hdss7-21 cert]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
node/hdss7-21.host.com labeled
[root@hdss7-21 cert]# kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=
node/hdss7-21.host.com labeled
[root@hdss7-21 cert]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 105m v1.15.2
hdss7-22.host.com Ready <none> 29m v1.15.2
supervisor usage
Other commands:
supervisorctl help: show help
supervisorctl update: load the new configuration after the config file has changed
supervisorctl reload: restart all programs in the config
supervisorctl restart <service name>, e.g.:
supervisorctl restart kube-kubelet-7-22
Deploy kube-proxy
Role: connects the pod network and the cluster network
Cluster plan
Hostname | Role | IP |
---|---|---|
HDSS7-21.host.com | kube-proxy | 10.4.7.21 |
HDSS7-22.host.com | kube-proxy | 10.4.7.22 |
Note: the steps use hdss7-21.host.com as the example; the other compute node is done the same way (can be done in parallel).
Sign the kube-proxy certificate
On the ops host HDSS7-200.host.com:
Create the JSON config file for the certificate signing request (csr)
# vi kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
Setting CN directly to a k8s role name (system:kube-proxy) saves creating a separate clusterrolebinding.
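This works because the apiserver reads the client certificate's subject CN as the user name (and O as the group). A self-contained sketch with a throwaway self-signed certificate — not the real cfssl-signed one from this deployment — showing how the CN lands in the subject:

```shell
# Generate a throwaway key/cert pair just to inspect the subject fields.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=system:kube-proxy/O=od"
openssl x509 -in /tmp/demo-cert.pem -noout -subject
```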
Generate the kube-proxy certificate and private key
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client
2020/06/27 17:11:44 [INFO] generate received request
2020/06/27 17:11:44 [INFO] received CSR
2020/06/27 17:11:44 [INFO] generating key: rsa-2048
2020/06/27 17:11:44 [INFO] encoded CSR
2020/06/27 17:11:44 [INFO] signed certificate with serial number 65842469619446700066178412711509736733762516362
2020/06/27 17:11:44 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Check the certificate
[root@hdss7-200 certs]# ls -l |grep kube-proxy
-rw-r--r-- 1 root root 1005 Jun 27 17:11 kube-proxy-client.csr
-rw------- 1 root root 1675 Jun 27 17:11 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Jun 27 17:11 kube-proxy-client.pem
-rw-r--r-- 1 root root 267 Jun 27 17:09 kube-proxy-csr.json
Copy the certificate to each compute node and create the config
On HDSS7-21.host.com:
Copy the certificate and private key; note the private key files must be mode 600
[root@hdss7-21 cert]# ls -l
total 40
-rw------- 1 root root 1675 Jun 26 17:53 apiserver-key.pem
-rw-r--r-- 1 root root 1598 Jun 26 17:53 apiserver.pem
-rw------- 1 root root 1679 Jun 26 17:51 ca-key.pem
-rw-r--r-- 1 root root 1346 Jun 26 17:48 ca.pem
-rw------- 1 root root 1679 Jun 26 17:52 client-key.pem
-rw-r--r-- 1 root root 1363 Jun 26 17:52 client.pem
-rw------- 1 root root 1679 Jun 27 11:17 kubelet-key.pem
-rw-r--r-- 1 root root 1468 Jun 27 11:17 kubelet.pem
-rw------- 1 root root 1675 Jun 27 17:28 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1375 Jun 27 17:28 kube-proxy-client.pem
Create the config
A kubeconfig is a per-user file in k8s; it carries the user's identity and the cluster endpoint when handed over to K8S.
set-cluster
Note: run in the conf directory
conf]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
--embed-certs=true \
--server=https://10.4.7.10:7443 \
--kubeconfig=kube-proxy.kubeconfig
Cluster "myk8s" set.
set-credentials
Note: run in the conf directory
conf]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
--client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
set-context
Note: run in the conf directory
conf]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
Context "myk8s-context" created.
use-context
Note: run in the conf directory
conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".
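For reference, the kube-proxy.kubeconfig produced by the four commands above has roughly this shape — a sketch with the base64 data elided, not a dump of the real file:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: myk8s
  cluster:
    certificate-authority-data: <base64 CA, embedded by --embed-certs>
    server: https://10.4.7.10:7443
users:
- name: kube-proxy
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
contexts:
- name: myk8s-context
  context:
    cluster: myk8s
    user: kube-proxy
current-context: myk8s-context
```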
On HDSS7-22.host.com, just copy it over:
[root@hdss7-22 conf]# scp hdss7-21:/opt/kubernetes/server/bin/conf/kube-proxy.kubeconfig .
Create the kube-proxy startup script
On HDSS7-21.host.com:
Load the ipvs modules:
/root/ipvs.sh
#!/bin/bash
# Load every ipvs kernel module shipped with the running kernel.
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir | grep -o "^[^.]*")
do
  # only modprobe modules that modinfo can resolve
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ]; then
    /sbin/modprobe $i
  fi
done
Create the startup script
/opt/kubernetes/server/bin/kube-proxy.sh
#!/bin/sh
./kube-proxy \
--cluster-cidr 172.7.0.0/16 \
--hostname-override hdss7-21.host.com \
--proxy-mode=ipvs \
--ipvs-scheduler=nq \
--kubeconfig ./conf/kube-proxy.kubeconfig
Note: the kube-proxy startup script differs slightly per cluster host (e.g. --hostname-override); adjust it when deploying other nodes.
iptables mode only supports rr (round-robin) scheduling;
ipvs mode supports many schedulers:
[root@hdss7-21 ~]# lsmod |grep ip_vs
ip_vs_wrr 12697 0
ip_vs_wlc 12519 0
ip_vs_sh 12688 0
ip_vs_sed 12519 0
ip_vs_rr 12600 0
ip_vs_pe_sip 12740 0
nf_conntrack_sip 33860 1 ip_vs_pe_sip
ip_vs_nq 12516 0
ip_vs_lc 12516 0
ip_vs_lblcr 12922 0
ip_vs_lblc 12819 0
ip_vs_ftp 13079 0
ip_vs_dh 12688 0
Check config and permissions, create the log directory
On HDSS7-21.host.com:
/opt/kubernetes/server/bin/conf
[root@hdss7-21 conf]# ls -l |grep kube-proxy
-rw------- 1 root root 6215 Jun 27 17:59 kube-proxy.kubeconfig
[root@hdss7-21 conf]# chmod +x /opt/kubernetes/server/bin/kube-proxy.sh
[root@hdss7-21 conf]# mkdir -p /data/logs/kubernetes/kube-proxy
Create the supervisor config
On HDSS7-21.host.com:
/etc/supervisord.d/kube-proxy.ini
[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
Start the service and check
On HDSS7-21.host.com:
[root@hdss7-21 conf]# supervisorctl update
kube-proxy-7-21: added process group
[root@hdss7-21 conf]# supervisorctl status
etcd-server-7-21 RUNNING pid 6259, uptime 8:10:51
kube-apiserver-7-21 RUNNING pid 6261, uptime 8:10:51
kube-controller-manager-7-21 RUNNING pid 62565, uptime 0:27:32
kube-kubelet-7-21 RUNNING pid 18626, uptime 3:23:43
kube-proxy-7-21 RUNNING pid 69323, uptime 0:00:50
kube-scheduler-7-21 RUNNING pid 62541, uptime 0:27:35
Install LVS tooling to verify — LVS is effectively built into k8s through kube-proxy
LVS one-arm routing mode is impressive on its own, and even more so combined with k8s
[root@hdss7-21 conf]# yum install ipvsadm -y
[root@hdss7-21 conf]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.1:443 nq
-> 10.4.7.21:6443 Masq 1 0 0
-> 10.4.7.22:6443 Masq 1 0 0
## 1. Binds the cluster IP to the node IPs.
## 2. Reverse-proxies the cluster IP to port 6443 on both compute nodes.
## 3. After production delivery, cluster IPs will point at pod IPs.
## 4. kube-proxy maintains these three networks: the node network, the cluster network, and the pod network.
[root@hdss7-21 conf]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 24h
Verify the kubernetes cluster
Create a resource configuration manifest on any compute node
Here we pick HDSS7-21.host.com
/root/nginx-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ds
spec:
template:
metadata:
labels:
app: nginx-ds
spec:
containers:
- name: my-nginx
image: harbor.od.com/public/nginx:v1.7.9
ports:
- containerPort: 80
kubectl create -f nginx-ds.yaml
daemonset.extensions/nginx-ds created
Check
[root@hdss7-21 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
etcd-1 Healthy {"health": "true"}
controller-manager Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
[root@hdss7-21 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
hdss7-21.host.com Ready master,node 5h48m v1.15.2
hdss7-22.host.com Ready <none> 4h32m v1.15.2
[root@hdss7-21 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ds-8r4w9 0/1 ImagePullBackOff 0 40m
nginx-ds-v4sn9 0/1 ImagePullBackOff 0 40m
Learning prerequisites
Resource requirements for this course:
Since this course builds a complete K8S ecosystem and, in practice, delivers a
dubbo (java) microservice suite, we will implement the following step by step:
●Continuous integration
●Configuration center
●Monitoring system
●Log collection and analysis system
●Automated ops platform (ultimately an open-source PaaS platform based on K8S)
The resource requirements are therefore:
●2c/2g/50g x 3 + 4c/8g/50g x 2
●Keep the environment (IP plan and deployed services) consistent with the course
Ways to obtain resources:
●Add RAM to a laptop (drawback: cannot stay online 24h, so troubleshooting is costly)
●Build your own server/workstation if you can (drawbacks: power cost, noise)
●Rent Alibaba Cloud hosts (drawbacks: expensive; environment differs from the course)
●沃佳雲