K8s cluster and dashboard deployment
What is Kubernetes?
Kubernetes is a lightweight, extensible open-source platform for managing containerized applications and services. With Kubernetes, applications can be deployed and scaled automatically. Kubernetes groups the containers that make up an application into logical units so they are easier to manage and discover.
Node roles in a Kubernetes cluster:
Master Node:
1. The control node of the k8s cluster; it schedules and manages the cluster and accepts operation requests from users outside the cluster;
2. The Master Node consists of the API Server, the Scheduler, the ClusterState Store (the etcd database) and the Controller Manager Server;
Worker Node: the nodes that actually run the containerized workloads (Pods) scheduled by the master.
Kubernetes cluster component functions:
Master components
1. API Server:
The single externally facing interface of K8s. It exposes the HTTP/HTTPS RESTful API, and all requests go through this interface. It receives, validates and responds to all REST requests; the resulting state is persisted in etcd. It is the single entry point for creating, deleting, updating and querying every resource.
2. etcd:
Stores the k8s cluster's configuration and the state of all resources. When data changes, etcd quickly notifies the relevant k8s components. etcd is an independent service component and is not itself part of the K8s cluster. In production, etcd should run as a cluster to ensure availability.
3. Controller Manager:
Manages the cluster's resources and keeps them in the desired state. The Controller Manager is made up of multiple controllers, including the replication controller, endpoints controller, namespace controller, serviceaccounts controller and others. The main functions performed by the controllers are lifecycle management and API business logic.
4. Scheduler:
Handles resource scheduling and decides which Node each Pod runs on. When scheduling, the Scheduler analyzes the cluster topology, the current load on each node, and the application's requirements for high availability, performance and so on.
Node components
1. Kubelet
The kubelet is the node's agent. Once the Scheduler has assigned a Pod to a Node, it sends the Pod's concrete configuration (image, volumes, etc.) to that node's kubelet, which creates and runs the containers accordingly and reports their status back to the master.
2. Container Runtime
Every Node must provide a container runtime environment, which is responsible for pulling images and running containers.
3. Kube-proxy:
A Service logically represents a group of backend Pods, and external clients access the Pods through the Service. When a request arrives at a Service, kube-proxy forwards it to a Pod. Every Node runs the kube-proxy service, which forwards TCP/UDP traffic destined for a Service to the backend containers.
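To make the Service/kube-proxy relationship concrete, here is a minimal, purely illustrative Service manifest (the name, label and ports are assumptions, not part of this deployment); kube-proxy on every node programs iptables/IPVS rules so that traffic to this Service is forwarded to the Pods matching the selector:
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # illustrative name
spec:
  selector:
    app: web               # traffic is forwarded to Pods carrying this label
  ports:
  - name: http
    port: 80               # Service port that clients connect to
    targetPort: 8080       # container port on the backend Pods
    protocol: TCP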
What is a Pod?
Kubernetes does not run containers directly; they are wrapped in an abstract resource object called a Pod, the smallest schedulable unit in K8s. A Pod can encapsulate one or more containers. Containers in the same Pod share the network namespace and storage resources and can communicate with each other over the local loopback interface, while remaining isolated from each other in the Mount, User and PID namespaces.
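As a purely illustrative sketch (not part of this deployment), a minimal Pod manifest with two containers sharing one network namespace could look like this; the two containers can reach each other over 127.0.0.1 while keeping separate filesystems and process trees:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.20                       # assumed public image
  - name: sidecar
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]     # keep the sidecar container running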
Pod creation and scheduling process
1. First, the user creates a Pod from a yaml definition; the request goes to the apiserver, and the apiserver writes the attributes from the yaml into etcd.
2. The apiserver's watch mechanism kicks off Pod creation and the information is passed to the scheduler. The scheduler uses its algorithms to choose a suitable node and reports the choice back to the apiserver, which writes the node binding into etcd.
3. Through the watch mechanism again, the apiserver notifies the kubelet on that node with the Pod specification; the kubelet triggers docker run to create the containers. When creation completes, the result is reported to the kubelet, the kubelet passes the Pod's status to the apiserver, and the apiserver writes the Pod status into etcd.
Cluster deployment:
Environment:
OS: CentOS 7.6.1810 (Core)
K8s version: 1.21.x
docker version: 19.03.15
The virtual servers are planned as follows:
Notes:
1. Configure /etc/hosts on every node according to the plan. This article reuses the master01 node to host the deployment tool, so run ssh-keygen on master01 and ssh-copy-id to every other node (see the sketch after these notes).
2. Also set up chrony time synchronization.
3. Install docker on every master and node.
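A rough sketch of this preparation on master01 (the IPs and the harbor domain mapping are assumptions based on the addresses used later in this article; adjust to your own plan):
# passwordless ssh from master01 to every other node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 172.16.1.191 172.16.1.192 172.16.1.193 172.16.1.194 172.16.1.195 172.16.1.196; do
  ssh-copy-id root@${ip}
done
# chrony time synchronization (run on every node)
yum install -y chrony && systemctl enable --now chronyd
# harbor domain resolution (run on every node)
echo "172.16.1.174 magedu.gfeng.net" >> /etc/hosts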
1. Deploy Harbor (HTTPS is used this time)
# Extract the package
[root@k8s-harbor tools]# tar xzvf harbor-offline-installer-v2.3.2.tgz
[root@k8s-harbor ~]# mkdir -p /key/harbor/certs/
[root@k8s-harbor ~]# cd /key/harbor/certs/
# Generate the key and issue the certificate
[root@k8s-harbor certs]# openssl genrsa -out harbor-ca.key
[root@k8s-harbor certs]# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=magedu.gfeng.net" -days 7120 -out harbor-ca.crt
[root@k8s-harbor certs]# ls
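Optionally, the issued certificate can be inspected to confirm its CN and validity period (just a sanity check, not required by the Harbor installer):
[root@k8s-harbor certs]# openssl x509 -in harbor-ca.crt -noout -subject -dates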
Modify the configuration file:
[root@k8s-harbor tools]# vim harbor/harbor.yml
The configuration is as follows:
# https related config
https:
# https port for harbor, default is 443
port: 443
# The path of cert and key files for nginx
certificate: /key/harbor/certs/harbor-ca.crt
private_key: /key/harbor/certs/harbor-ca.key
# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
# # set enabled to true means internal tls is enabled
# enabled: true
# # put your cert and key files on dir
# dir: /etc/harbor/tls/internal
# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433
# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: 123456
# Harbor DB configuration
database:
# The password for the root user of Harbor DB. Change this before any production use.
password: root123
# The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
max_idle_conns: 100
# The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
# Note: the default number of connections is 1024 for postgres of harbor.
max_open_conns: 900
# The default data volume
data_volume: /data
Install Harbor:
[root@k8s-harbor harbor]# ./install.sh --with-trivy
After the installation completes, visit https://172.16.1.174 to test:
2. Sync the certificate to the clients and verify
[root@k8s-master01 ~]# mkdir -p /etc/docker/certs.d/magedu.gfeng.net/
[root@k8s-harbor certs]# scp harbor-ca.crt [email protected]:/etc/docker/certs.d/magedu.gfeng.net/
[root@k8s-master01 magedu.gfeng.net]# ls
Restart docker and verify
[root@k8s-master01 magedu.gfeng.net]# docker login magedu.gfeng.net
Do the same for master02 and the node nodes. This can of course also be scripted, e.g. as sketched below; it is not demonstrated here in detail.
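Since master01 already trusts the other nodes via ssh-copy-id and already holds the certificate, a possible sketch of such a script, run on master01, is (the node IPs are assumptions based on the plan above):
#!/bin/bash
# distribute the harbor CA certificate to the remaining docker clients (illustrative sketch)
CRT=/etc/docker/certs.d/magedu.gfeng.net/harbor-ca.crt
for ip in 172.16.1.191 172.16.1.192 172.16.1.193; do
  ssh root@${ip} "mkdir -p /etc/docker/certs.d/magedu.gfeng.net/"
  scp ${CRT} root@${ip}:/etc/docker/certs.d/magedu.gfeng.net/
  ssh root@${ip} "systemctl restart docker"
done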
3. Deploy haproxy + keepalived for high availability and load balancing (already deployed in an earlier article, not demonstrated here)
# Configure haproxy
[root@lb ~]# vim /etc/haproxy/haproxy.cfg
# Add the following:
frontend main 172.16.1.96:6443
default_backend k8s
backend k8s
balance roundrobin
server server1 172.16.1.190:6443 check
server server2 172.16.1.191:6443 check
# After configuring, restart haproxy
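Note: traffic to the kube-apiserver on 6443 is TLS, so haproxy must proxy it in TCP mode; if the defaults section of your haproxy.cfg does not already set mode tcp, add it to the frontend and backend above. A quick restart-and-check sketch:
[root@lb ~]# systemctl restart haproxy
[root@lb ~]# systemctl status haproxy
[root@lb ~]# ss -tnlp | grep 6443    # confirm haproxy is listening on 172.16.1.96:6443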
K8s deployment:
1. Operations on the master01 node
# Install ansible
[root@k8s-master01 ~]# yum install ansible -y
# Download the deployment tool and components
[root@k8s-master01 ~]# export release=3.1.0
[root@k8s-master01 ~]# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
[root@k8s-master01 ~]# chmod a+x ezdown
# Modify the configuration file
[root@k8s-master01 ~]# vim ezdown
# default settings, can be overridden by cmd line options, see usage
DOCKER_VER=19.03.15
KUBEASZ_VER=3.1.0
K8S_BIN_VER=v1.21.0
# Download with the tool script
./ezdown -D
After the script finishes successfully, all files (the kubeasz code, binaries, and offline images) are laid out under the directory /etc/kubeasz.
2. Generate the ansible hosts file
[root@k8s-master01 ~]# cd /etc/kubeasz/
[root@k8s-master01 kubeasz]# ./ezctl new k8s-001
# Edit the generated hosts file
[root@k8s-master01 kubeasz]# vim clusters/k8s-001/hosts
The content is as follows:
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.16.1.194
172.16.1.195
172.16.1.196
# master node(s)
[kube_master]
172.16.1.190
172.16.1.191
# work node(s)
[kube_node]
172.16.1.192
172.16.1.193
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#172.16.1.8 NEW_INSTALL=false
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
172.16.1.97 LB_ROLE=backup EX_APISERVER_VIP=172.16.1.96 EX_APISERVER_PORT=8443
172.16.1.98 LB_ROLE=master EX_APISERVER_VIP=172.16.1.96 EX_APISERVER_PORT=8443
# [optional] ntp server for the cluster
[chrony]
#172.16.1.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"
# NodePort Range
NODE_PORT_RANGE="30000-32767"
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="magedu.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
# Edit the generated config.yml file
[root@k8s-master01 kubeasz]# vim /etc/kubeasz/clusters/k8s-001/config.yml
The content is as follows:
############################
# prepare
############################
# optional: install system packages offline or online (offline|online)
INSTALL_SOURCE: "online"
# optional: system security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false
# time source servers (important: time must be synchronized across all cluster machines)
ntp_servers:
- "ntp1.aliyun.com"
- "time1.cloud.tencent.com"
- "0.cn.pool.ntp.org"
# networks allowed to sync time from the internal ntp service, e.g. "10.0.0.0/8"; all allowed by default
local_network: "0.0.0.0/0"
############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
############################
# role:etcd
############################
# using a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable registry mirrors
ENABLE_MIRROR_REGISTRY: true
# [containerd] base (pause) container image
SANDBOX_IMAGE: "easzlab/pause-amd64:3.4.1"
# [containerd] container persistent storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"
# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"
# [docker] enable the remote RESTful API
ENABLE_REMOTE_API: false
# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8","172.16.1.174"]'
############################
# role:kube-master
############################
# certificate SANs for the k8s master nodes; extra IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
- "10.1.1.1"
- "k8s.test.io"
#- "www.test.com"
# pod subnet mask length on each node (determines the maximum number of pod IPs per node)
# if flannel uses the --kube-subnet-mgr flag, it reads this setting to allocate the pod subnet for each node
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# maximum number of pods per node
MAX_PODS: 210
# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "yes"
# k8s upstream does not recommend enabling system-reserved casually, unless you understand the system's
# resource usage from long-term monitoring; the reservation also needs to grow as the system runs longer.
# The system reservation assumes a 4c/8g VM with a minimal set of system services; increase it on high-performance physical machines.
# Also, apiserver and other components briefly use a lot of resources during installation, so reserving at least 1g of memory is recommended.
SYS_RESERVED_ENABLED: "no"
# haproxy balance mode
BALANCE_ALG: "roundrobin"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] flannel backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"
# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"
# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"
# [calico] host IP used by calico-node; bgp neighbors are established over this address; can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"
# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"
# [calico] calico major version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"
# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1
# [cilium] image version
cilium_ver: "v1.4.1"
# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"
# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane, the first master node by default
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"
# ------------------------------------------- kube-router
# [kube-router] public clouds have restrictions and generally need ipinip always on; in your own environment this can be set to "subnet"
OVERLAY_TYPE: "full"
# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"
# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
# [kube-router] kube-router offline image tarball
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"
############################
# role:cluster-addon
############################
# coredns auto-install
dns_install: "no"
corednsVer: "1.8.0"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.17.0"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"
# metrics server auto-install
metricsserver_install: "no"
metricsVer: "v0.3.6"
# dashboard auto-install
dashboard_install: "no"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"
# ingress auto-install
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"
# prometheus auto-install
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"
# nfs-provisioner auto-install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true
# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true
Note: the auto-install options above are all set to "no", and ENABLE_LOCAL_DNS_CACHE is set to false.
3. Deploy the cluster
First:
[root@k8s-master01 kubeasz]# vim playbooks/01.prepare.yml    # turn off the load-balancer initialization
# [optional] to synchronize system time of nodes with 'chrony'
- hosts:
- kube_master
- kube_node
- etcd
- ex_lb
- chrony
Remove the lines "- ex_lb" and "- chrony".
Start the cluster initialization and installation:
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 01    # initialize the cluster
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 02    # deploy the etcd cluster
Verify the etcd nodes:
Write a script with the following content:
[root@k8s-etcd01 server]# vim etcd.sh
#!/bin/sh
export NODE_IPS="172.16.1.194 172.16.1.195 172.16.1.196"
for ip in ${NODE_IPS}; do
ETCDCTL_API=3 /opt/kube/bin/etcdctl \
--endpoints=https://${ip}:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
endpoint health; done
[root@k8s-etcd01 server]# chmod +x etcd.sh
[root@k8s-etcd01 server]# bash etcd.sh
If every endpoint reports "successfully", the etcd cluster is healthy; otherwise something is wrong.
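As an extra optional check, the endpoint status can be printed as a table using the same certificates:
ETCDCTL_API=3 /opt/kube/bin/etcdctl \
--endpoints=https://172.16.1.194:2379,https://172.16.1.195:2379,https://172.16.1.196:2379 \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
endpoint status --write-out=table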
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 03    # install the container runtime
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 04    # deploy the master nodes
# After the master deployment completes, verify
[root@k8s-master01 kubeasz]# kubectl get node
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 05    # deploy the worker nodes
# After the node deployment completes, verify
[root@k8s-master01 kubeasz]# kubectl get node
[root@k8s-master01 kubeasz]# ./ezctl setup k8s-001 06    # deploy the network components
PLAY [kube_master,kube_node] ***************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************
ok: [172.16.1.190]
ok: [172.16.1.191]
ok: [172.16.1.193]
ok: [172.16.1.192]
TASK [calico : 在節點建立相關目錄] ******************************************************************************************************************************
ok: [172.16.1.191] => (item=/etc/cni/net.d)
ok: [172.16.1.193] => (item=/etc/cni/net.d)
ok: [172.16.1.192] => (item=/etc/cni/net.d)
ok: [172.16.1.190] => (item=/etc/cni/net.d)
changed: [172.16.1.191] => (item=/etc/calico/ssl)
changed: [172.16.1.192] => (item=/etc/calico/ssl)
changed: [172.16.1.193] => (item=/etc/calico/ssl)
changed: [172.16.1.190] => (item=/etc/calico/ssl)
ok: [172.16.1.191] => (item=/opt/kube/images)
ok: [172.16.1.193] => (item=/opt/kube/images)
ok: [172.16.1.192] => (item=/opt/kube/images)
ok: [172.16.1.190] => (item=/opt/kube/images)
TASK [建立calico 證書請求] ***********************************************************************************************************************************
changed: [172.16.1.190]
ok: [172.16.1.191]
ok: [172.16.1.192]
ok: [172.16.1.193]
TASK [建立 calico證書和私鑰] **********************************************************************************************************************************
changed: [172.16.1.191]
changed: [172.16.1.190]
changed: [172.16.1.193]
changed: [172.16.1.192]
TASK [分發calico證書相關] ************************************************************************************************************************************
changed: [172.16.1.191] => (item=ca.pem)
changed: [172.16.1.193] => (item=ca.pem)
changed: [172.16.1.192] => (item=ca.pem)
changed: [172.16.1.190] => (item=ca.pem)
changed: [172.16.1.191] => (item=calico.pem)
changed: [172.16.1.193] => (item=calico.pem)
changed: [172.16.1.192] => (item=calico.pem)
changed: [172.16.1.190] => (item=calico.pem)
changed: [172.16.1.191] => (item=calico-key.pem)
changed: [172.16.1.193] => (item=calico-key.pem)
changed: [172.16.1.192] => (item=calico-key.pem)
changed: [172.16.1.190] => (item=calico-key.pem)
TASK [get calico-etcd-secrets info] ********************************************************************************************************************
changed: [172.16.1.190]
TASK [建立 calico-etcd-secrets] **************************************************************************************************************************
changed: [172.16.1.190]
TASK [檢查是否已下載離線calico映象] *******************************************************************************************************************************
changed: [172.16.1.190]
TASK [calico : 嘗試推送離線docker 映象(若執行失敗,可忽略)] *************************************************************************************************************
changed: [172.16.1.191] => (item=pause.tar)
changed: [172.16.1.193] => (item=pause.tar)
changed: [172.16.1.190] => (item=pause.tar)
changed: [172.16.1.192] => (item=pause.tar)
changed: [172.16.1.193] => (item=calico_v3.15.3.tar)
changed: [172.16.1.190] => (item=calico_v3.15.3.tar)
changed: [172.16.1.191] => (item=calico_v3.15.3.tar)
changed: [172.16.1.192] => (item=calico_v3.15.3.tar)
TASK [獲取calico離線映象推送情況] ********************************************************************************************************************************
changed: [172.16.1.191]
changed: [172.16.1.190]
changed: [172.16.1.192]
changed: [172.16.1.193]
TASK [匯入 calico的離線映象(若執行失敗,可忽略)] ***********************************************************************************************************************
changed: [172.16.1.190] => (item=pause.tar)
changed: [172.16.1.193] => (item=pause.tar)
changed: [172.16.1.192] => (item=pause.tar)
changed: [172.16.1.191] => (item=pause.tar)
changed: [172.16.1.190] => (item=calico_v3.15.3.tar)
changed: [172.16.1.193] => (item=calico_v3.15.3.tar)
changed: [172.16.1.191] => (item=calico_v3.15.3.tar)
changed: [172.16.1.192] => (item=calico_v3.15.3.tar)
TASK [配置 calico DaemonSet yaml檔案] **********************************************************************************************************************
changed: [172.16.1.190]
TASK [執行 calico網路] *************************************************************************************************************************************
changed: [172.16.1.190]
TASK [calico : 刪除預設cni配置] ******************************************************************************************************************************
changed: [172.16.1.190]
changed: [172.16.1.191]
changed: [172.16.1.192]
changed: [172.16.1.193]
TASK [下載calicoctl 客戶端] *********************************************************************************************************************************
changed: [172.16.1.193] => (item=calicoctl)
changed: [172.16.1.192] => (item=calicoctl)
changed: [172.16.1.191] => (item=calicoctl)
changed: [172.16.1.190] => (item=calicoctl)
TASK [準備 calicoctl配置檔案] ********************************************************************************************************************************
changed: [172.16.1.192]
changed: [172.16.1.193]
changed: [172.16.1.191]
changed: [172.16.1.190]
TASK [輪詢等待calico-node 執行,視下載映象速度而定] ********************************************************************************************************************
changed: [172.16.1.190]
changed: [172.16.1.193]
changed: [172.16.1.192]
changed: [172.16.1.191]
PLAY RECAP *********************************************************************************************************************************************
172.16.1.190 : ok=17 changed=16 unreachable=0 failed=0 skipped=51 rescued=0 ignored=0
172.16.1.191 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
172.16.1.192 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
172.16.1.193 : ok=12 changed=10 unreachable=0 failed=0 skipped=40 rescued=0 ignored=0
# Verify calico
[root@k8s-master01 kubeasz]# calicoctl node status
4. Create containers to test network communication
[root@k8s-master01 kubeasz]# docker pull alpine
[root@k8s-master01 kubeasz]# docker tag alpine magedu.gfeng.net/magedu/alpine
[root@k8s-master01 kubeasz]# docker push magedu.gfeng.net/magedu/alpine
# Create pods to test whether cross-host network communication works
[root@k8s-master01 kubeasz]# kubectl run net-test1 --image=magedu.gfeng.net/magedu/alpine:latest sleep 30000
[root@k8s-master01 kubeasz]# kubectl run net-test2 --image=magedu.gfeng.net/magedu/alpine:latest sleep 30000
[root@k8s-master01 kubeasz]# kubectl get pod -A -o wide
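With the two test pods running, pod-to-pod and external connectivity can be checked roughly as follows (the target IP is whatever kubectl get pod -A -o wide reports for net-test2; DNS is not deployed yet, so only IP connectivity can be tested at this point):
[root@k8s-master01 kubeasz]# kubectl exec -it net-test1 -- sh
/ # ping <pod IP of net-test2>      # pod-to-pod traffic, ideally across nodes
/ # ping 223.6.6.6                  # external IP connectivity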
5. Deploy coredns
Upload kubernetes.tar.gz to the master01 node and extract it
[root@k8s-master01 kubeasz]# cd /server/kubernetes/cluster/addons/dns/coredns
[root@k8s-master01 kubeasz]# ls
[root@k8s-master01 kubeasz]# cp coredns.yaml.base /root/coredns-n56.yaml
[root@k8s-master01 kubeasz]# cd ~
# First pull coredns (version 1.8.0)
[root@k8s-master01 ~]# docker pull coredns/coredns:1.8.0
[root@k8s-master01 ~]# docker tag coredns/coredns:1.8.0 magedu.gfeng.net/magedu/coredns:1.8.0
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/coredns:1.8.0
# Modify the configuration file
[root@k8s-master01 ~]# vim coredns-n56.yaml
Find and change the following items:
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes magedu.local (change to the domain set in the hosts file) in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . 223.6.6.6 (change to an external upstream DNS address) {
max_concurrent 1000
- name: coredns
image: magedu.gfeng.net/magedu/coredns:1.8.0 (change the image to the one pulled, tagged and pushed to your registry)
resources:
limits:
memory: 256Mi (adjust the size; this value is for testing, set it according to your actual environment)
spec:
type: NodePort (added option)
selector:
k8s-app: kube-dns
clusterIP: 10.100.0.2 (an address in the SERVICE_CIDR set in the hosts file; how to check it is shown after this manifest)
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
targetPort: 9153
nodePort: 30009 (exposed port, later used for web access)
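The cluster DNS address that kubelet hands to pods (10.100.0.2 above, typically the .2 address of SERVICE_CIDR in kubeasz) can be confirmed from any running pod, for example:
[root@k8s-master01 ~]# kubectl exec net-test1 -- cat /etc/resolv.conf    # the nameserver line shows the expected clusterIP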
After configuring, save, then run the following:
[root@k8s-master01 ~]# kubectl apply -f coredns-n56.yaml
# Verify that coredns is running; output like the following is normal
# Verify that pods can resolve domain names:
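A rough verification using one of the test pods created earlier (the service name resolves via coredns, the external name goes through the forwarder configured above):
[root@k8s-master01 ~]# kubectl exec -it net-test1 -- sh
/ # nslookup kubernetes.default.svc.magedu.local    # should return the kubernetes service IP (10.100.0.1 here)
/ # ping www.baidu.com                              # external name resolution via 223.6.6.6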
# Check the coredns metrics
http://172.16.1.193:30009/metrics
6. Deploy dashboard
Download the images, tag them and push them to the registry:
[root@k8s-master01 ~]# docker pull kubernetesui/dashboard:v2.3.1
[root@k8s-master01 ~]# docker tag kubernetesui/dashboard:v2.3.1 magedu.gfeng.net/magedu/dashboard:v2.3.1
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/dashboard:v2.3.1
[root@k8s-master01 ~]# docker pull kubernetesui/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# docker tag kubernetesui/metrics-scraper:v1.0.6 magedu.gfeng.net/magedu/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# docker push magedu.gfeng.net/magedu/metrics-scraper:v1.0.6
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# Edit the downloaded configuration file
[root@k8s-master01 ~]# mv recommended.yaml dashboard-v2.3.1.yaml
[root@k8s-master01 ~]# vim dashboard-v2.3.1.yaml
Change and add the following:
spec:
type: NodePort (added option)
ports:
- port: 443
targetPort: 8443
nodePort: 30002 (exposed access port)
selector:
spec:
containers:
- name: kubernetes-dashboard
image: magedu.gfeng.net/magedu/dashboard:v2.3.1 (change the image to the address it was pushed to in your registry)
spec:
containers:
- name: dashboard-metrics-scraper
image: magedu.gfeng.net/magedu/metrics-scraper:v1.0.6 (change the image to the address it was pushed to)
After configuring, save, then run:
[root@k8s-master01 ~]# kubectl apply -f dashboard-v2.3.1.yaml
Access in a browser:
https://172.16.1.192:30002
You will find that a token is required; another yaml file is needed to create an account and generate the token.
Upload admin-user.yml to the master node, then run the commands shown in the screenshot below to generate the token (a sketch of a typical admin-user.yml follows).
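The content of admin-user.yml is not shown above; a typical manifest for a dashboard admin account (assuming the standard kubernetes-dashboard namespace created by recommended.yaml) and the commands to apply it and read the token look roughly like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@k8s-master01 ~]# kubectl apply -f admin-user.yml
[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')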
# Access the web page again and log in with the token; the final result is as follows
At this point, the deployment is complete.