
Docker Learning - Kubernetes Cluster Setup - Spring Boot Application

 

Docker Learning series

Docker Learning - Interconnecting local VMware Workstation VMs and the host network

Docker Learning - Building a Consul cluster with Docker

Docker Learning - Building a simple private Docker Hub

Docker Learning - Spring Boot on Docker

Docker Learning - Kubernetes - Cluster Deployment

Docker Learning - Kubernetes - Spring Boot Application

 

Introduction

Kubernetes, abbreviated K8s (the "8" stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updating, and maintenance.

Kubernetes is a container orchestration engine open-sourced by Google that supports automated deployment, large-scale scaling, and containerized application management. When an application is deployed in production, multiple instances are usually deployed so that requests can be load-balanced. In Kubernetes we can create multiple containers, each running one application instance, and then rely on the built-in load-balancing strategy to manage, discover, and access this group of instances, without operators doing any complex manual configuration.

Basic Concepts

Almost every concept in Kubernetes is abstracted into a resource object managed by Kubernetes:

  • Master: the control node of a Kubernetes cluster, responsible for managing and controlling the entire cluster. The Master node runs the following components:
    • kube-apiserver: the entry point for cluster control, exposing an HTTP REST service
    • kube-controller-manager: the automated control center for all resource objects in the cluster
    • kube-scheduler: responsible for scheduling Pods
  • Node: a worker node in the cluster. Workloads on a Node are assigned by the Master and mainly consist of running containerized applications. A Node runs the following components:
    • kubelet: responsible for creating, starting, monitoring, restarting, and destroying Pods, and cooperates with the Master to provide basic cluster management
    • kube-proxy: implements communication and load balancing for Kubernetes Services
    • the containerized (Pod) applications themselves
  • Pod: the most basic deployment and scheduling unit in Kubernetes. Each Pod consists of one or more business containers plus a root container (the Pause container). A Pod represents a single instance of an application.
  • ReplicaSet: an abstraction over Pod replicas, used to scale the number of Pods out and in
  • Deployment: represents a deployment and is implemented internally with a ReplicaSet. A Deployment generates the corresponding ReplicaSet, which creates the Pod replicas
  • Service: the most important resource object in Kubernetes; a Service can correspond to a microservice in a microservice architecture. A Service defines a service's access entry point, and callers reach the Pod replicas behind it through this address. The Service is associated with its backend Pod replicas through a Label Selector, while the Deployment guarantees the number of replicas, i.e. the service's ability to scale.
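The Deployment/Service wiring described above can be sketched as a minimal pair of manifests. This is an illustrative fragment, not part of the original cluster setup; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2                 # the Deployment keeps 2 Pod replicas via a ReplicaSet
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app         # the label the Service selects on
    spec:
      containers:
      - name: demo-app
        image: nginx:1.15     # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app             # Label Selector ties the Service to the Pods
  ports:
  - port: 80
    targetPort: 80
```

The Service never references the Deployment directly; both sides only agree on the `app: demo-app` label, which is exactly the Label Selector relationship described above.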

Kubernetes is made up of the following core components:

  • etcd stores the state of the entire cluster; it is effectively the cluster's database;
  • apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;
  • controller manager maintains the cluster's state, e.g. failure detection, auto-scaling, and rolling updates;
  • scheduler handles resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policies;
  • kubelet maintains the container lifecycle and also manages Volumes (CSI) and networking (CNI);
  • Container runtime handles image management and the actual execution of Pods and containers (CRI);
  • kube-proxy provides in-cluster service discovery and load balancing for Services.

Besides these core components, there are also some recommended add-ons:

  • kube-dns provides DNS for the entire cluster
  • Ingress Controller provides an external entry point for services
  • Heapster provides resource monitoring
  • Dashboard provides a GUI

Component Communication

How the Kubernetes components communicate with one another:

  • The apiserver performs all operations against etcd storage; it is the only component that talks to the etcd cluster directly
  • The apiserver exposes a unified REST API internally (to the other cluster components) and externally (to users); all other components communicate through the apiserver

    • controller manager, scheduler, kube-proxy, kubelet, and the rest watch resources for changes through the apiserver's watch API and react to them accordingly
    • every operation that updates resource state goes through the apiserver's REST API
  • The apiserver also calls the kubelet API directly (e.g. logs, exec, attach). By default it does not verify the kubelet's serving certificate, though verification can be enabled with --kubelet-certificate-authority (GKE instead protects this traffic with SSH tunnels)

For example, the typical flow for creating a Pod:

  • A user creates a Pod through the REST API
  • The apiserver writes it to etcd
  • The scheduler detects a Pod not yet bound to a Node, schedules it, and updates the Pod's Node binding
  • The kubelet detects a new Pod scheduled to its Node and runs it via the container runtime
  • The kubelet reads the Pod's status from the container runtime and updates it in the apiserver

 

Cluster Deployment

 

Installing with kubeadm

1. On the master and the nodes, install kubelet, kubeadm, and docker with yum
2. On the master, initialize the cluster: kubeadm init
3. On the master, start a flannel pod
4. On the nodes, join the cluster: kubeadm join

 

Preparing the environment

CentOS 7  192.168.50.21 k8s-master
CentOS 7  192.168.50.22 k8s-node01
CentOS 7  192.168.50.23 k8s-node02

Set the hostnames (on all three machines):

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

Disable the firewall

systemctl stop firewalld.service

Configure the Docker yum repository

yum install -y yum-utils device-mapper-persistent-data lvm2 wget
cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure the Kubernetes yum repository

cd /opt/
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
cd /etc/yum.repos.d
vi kubernetes.repo 
Enter the following content:
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

yum repolist

Install kubelet, kubeadm, and docker on the master and the nodes

yum install docker 
yum install kubelet-1.13.1
yum install kubeadm-1.13.1

Install kubectl on the master

yum install kubectl-1.13.1

Docker configuration

Configure the private registry and the registry mirror address. For the private registry setup, see https://www.cnblogs.com/woxpp/p/11871886.html

vi /etc/docker/daemon.json

 

{
    "registry-mirrors": [
        "http://hub-mirror.c.163.com"
    ],
    "insecure-registries": [
        "192.168.50.24:5000"
    ]
}

 

Start docker

systemctl daemon-reload
systemctl start docker 
docker info
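A syntax error in daemon.json prevents the Docker daemon from starting at all, so it can be worth validating the file before restarting. A small sketch, using python3's built-in json.tool as the validator (any JSON checker works):

```shell
# Validate /etc/docker/daemon.json before restarting Docker; a JSON
# syntax error here stops the daemon from starting.
python3 -m json.tool /etc/docker/daemon.json > /dev/null \
  && echo "daemon.json: valid JSON" \
  || echo "daemon.json: INVALID or missing"
```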

Initialize the master: kubeadm init

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

Explanation of the init flags:

--apiserver-advertise-address

Specifies which of the Master's network interfaces is used to communicate with the other cluster nodes. If the Master has multiple interfaces, it is best to specify one explicitly; otherwise kubeadm automatically picks the interface that has the default gateway.

--pod-network-cidr

Specifies the Pod network range. Kubernetes supports many network solutions, and each has its own requirements for --pod-network-cidr. It is set to 10.244.0.0/16 here because we will use the flannel network add-on, which requires exactly this CIDR.

--image-repository

The default Kubernetes registry is k8s.gcr.io, and gcr.io is not reachable from mainland China. Since version 1.13 the --image-repository flag (default k8s.gcr.io) lets us point it at the Aliyun mirror instead: registry.aliyuncs.com/google_containers.

--kubernetes-version=v1.13.1

Disables version detection. The default value, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (here v1.13.1) skips that network request.

During initialization, two stages are slow:

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull' — this stage is downloading the image files, which takes a while.
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.002300 seconds — this stage is also slow; it is safe to just wait.
 
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.002300 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7ax0k4.nxpjjifrqnbrpojv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.50.21:6443 --token 7ax0k4.nxpjjifrqnbrpojv --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b

Explanation of the initialization steps:

1) [preflight] kubeadm runs checks before initializing.
2) [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
3) [certificates] generates the various tokens and certificates
4) [kubeconfig] generates the KubeConfig files, which the kubelet needs to communicate with the Master
5) [control-plane] installs the Master components, pulling their Docker images from the specified registry.
6) [bootstraptoken] generates the token; record it, since it is needed later when adding nodes with kubeadm join
7) [addons] installs the kube-proxy and kube-dns add-ons.
8) The Kubernetes Master initialized successfully, with a hint on how to let a regular user access the cluster with kubectl.
9) A hint on how to install the Pod network.
10) A hint on how to register other nodes into the cluster.

 

Problems encountered:

 [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Run

systemctl enable docker.service
systemctl enable kubelet.service

The following errors are still reported:

  [WARNING Hostname]: hostname "k8s-master" could not be reached
        [WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 114.114.114.114:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:

Configure /etc/hosts

cat >> /etc/hosts << EOF
192.168.50.21 k8s-master
192.168.50.22 k8s-node01
192.168.50.23 k8s-node02
EOF

Running the init command again produces:

  [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2   (give the VM at least 2 CPUs)
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Fix the second error with:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
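These echo commands only change the running kernel, so the settings are lost on reboot. To make them permanent they can be written to a sysctl drop-in file; a sketch (the drop-in path and file name are conventional, not from the original article):

```shell
# Persist the bridge settings so they survive a reboot.
mkdir -p /etc/sysctl.d
cat > /etc/sysctl.d/k8s.conf << 'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Reload; fails harmlessly if the br_netfilter module is not loaded yet
sysctl -p /etc/sysctl.d/k8s.conf 2>/dev/null || true
```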

After increasing the VM's CPU count and rebooting, run it again:

kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

 

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Workaround: the docker.io registry mirrors Google's container images, so the required images can be pulled with the commands below.

First check which images are needed:

kubeadm config images list
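The list printed by `kubeadm config images list` can be fed into a small loop that pulls each image from a mirror and retags it to the k8s.gcr.io name kubeadm expects. A sketch; the mirror address is an assumption:

```shell
# Pull every image kubeadm needs from a mirror registry, then retag it
# back to the k8s.gcr.io name (mirror prefix is an assumption).
MIRROR=registry.aliyuncs.com/google_containers
kubeadm config images list 2>/dev/null | while read -r img; do
  name="${img#k8s.gcr.io/}"   # e.g. kube-apiserver:v1.13.1
  docker pull "$MIRROR/$name"
  docker tag  "$MIRROR/$name" "$img"
done
```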

Create a kubeadm configuration file

[root@k8s-master opt]# vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 192.168.50.21
controlPlaneEndpoint: "192.168.50.20:16443"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "172.168.0.0/16"

Pull the images using this config:

kubeadm config images pull --config /opt/kubeadm-config.yaml

Initialize the master

kubeadm init --config=kubeadm-config.yaml  --upload-certs

 

execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use


kubeadm automatically checks the environment for leftovers from a previous run. If any exist, they must be cleaned up before running init again; "kubeadm reset" cleans the environment so you can start over.

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

==Cause==

This happens because the kubelet has not started

==Fix==

systemctl restart kubelet

If the kubelet will not start:

kubelet.service - kubelet: The Kubernetes Node Agent

 

then the likely cause is that the swap partition is still enabled.
- Disable swap

swapoff -a

- Configure the kubelet

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
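Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab can be commented out as well; a sketch (the sed expression assumes the usual fstab layout):

```shell
# Disable swap now and keep it disabled across reboots by commenting
# out any swap lines in /etc/fstab (assumes the standard fstab format).
swapoff -a 2>/dev/null || true
[ -f /etc/fstab ] && sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)|#\1|' /etc/fstab || true
```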

Run the initialization again:

kubeadm init \
    --apiserver-advertise-address=192.168.50.21 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.13.1 \
    --pod-network-cidr=10.244.0.0/16

 

 

Configuring kubectl

kubectl is the command-line tool for managing a Kubernetes cluster; we already installed it on all nodes. After the Master has been initialized, a little configuration is needed before kubectl can be used.
Following the hints printed at the end of kubeadm init, it is recommended to run kubectl as a regular Linux user.

  • Create a regular user centos
# create a regular user and set its password to 123456
useradd centos && echo "centos:123456" | chpasswd

# grant sudo rights, with passwordless sudo
sed -i '/^root/a\centos  ALL=(ALL)       NOPASSWD:ALL' /etc/sudoers

# save the cluster security config into the user's .kube directory
su - centos
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# enable kubectl command auto-completion (takes effect after logging out and back in)
echo "source <(kubectl completion bash)" >> ~/.bashrc

These commands are needed because access to a Kubernetes cluster is encrypted by default. They copy the security configuration file generated during deployment into the current user's .kube directory, where kubectl looks for its credentials by default.
Without this, we would have to tell kubectl where the security configuration file lives every time via the KUBECONFIG environment variable.
Once this is configured, the centos user can manage the cluster with kubectl.
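The KUBECONFIG alternative looks like this; unlike the ~/.kube/config copy, it has to be repeated in every new shell session:

```shell
# Point kubectl at the admin kubeconfig for this shell session only,
# instead of copying it to ~/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
```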

Check the cluster status:

kubectl get cs


Deploy the network add-on

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

  

Use kubectl get to re-check the Pod status.

 

 

Deploying the worker nodes

On the master, save the generated images to a file:

docker save -o master.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.13.1 registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.1 registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1 registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.1  registry.aliyuncs.com/google_containers/coredns:1.2.6  registry.aliyuncs.com/google_containers/etcd:3.2.24 registry.aliyuncs.com/google_containers/pause:3.1

Mind the version numbers.

Copy the images saved on the master to the nodes:

scp master.tar node01:/root/
scp master.tar node02:/root/

Load the images locally on node01 and node02:

docker load < master.tar

Configure /etc/hosts on node01 and node02

cat >> /etc/hosts << EOF
192.168.50.21 k8s-master
192.168.50.22 k8s-node01
192.168.50.23 k8s-node02
EOF

Configure the iptables bridge settings on node01 and node02

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

- Disable swap on node01 and node02

swapoff -a

- Configure the kubelet on node01 and node02

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false" 
systemctl enable docker.service
systemctl enable kubelet.service

Start docker on node01 and node02:

service docker start

Deploy the network add-on for node01 and node02:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Get the join command on the master:

kubeadm token create --print-join-command
kubeadm join 192.168.50.21:6443 --token n9g4nq.kf8ppgpgb3biz0n5 --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b
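The --discovery-token-ca-cert-hash value in the join command is the SHA-256 digest of the cluster CA's public key, and it can be recomputed on the master from /etc/kubernetes/pki/ca.crt. The sketch below generates a throwaway CA so that the pipeline is self-contained:

```shell
# Generate a throwaway CA certificate (stand-in for the cluster CA at
# /etc/kubernetes/pki/ca.crt) just to demonstrate the hash pipeline.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null

# The same computation kubeadm uses for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

Running the pipeline against the real /etc/kubernetes/pki/ca.crt reproduces the sha256:... value printed by kubeadm.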

 

Run the join command on node01 and node02:

kubeadm join 192.168.50.21:6443 --token n9g4nq.kf8ppgpgb3biz0n5 --discovery-token-ca-cert-hash sha256:95942f10859a71879c316e75498de02a8b627725c37dee33f74cd040e1cd9d6b
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.50.21:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.50.21:6443"
[discovery] Requesting info from "https://192.168.50.21:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.50.21:6443"
[discovery] Successfully established connection with API Server "192.168.50.21:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] WARNING: unable to stop the kubelet service momentarily: [exit status 4]
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Check the node status on the master:

kubectl get nodes

 

This state is wrong; only one node has joined correctly.

Inspecting node01 and node02 shows that some processes on node01 have not fully started.

Delete all running containers on node01:

docker stop $(docker ps -q) && docker rm $(docker ps -aq)

Reset kubeadm on node01:

kubeadm reset

Get a new join command on the master:

kubeadm token create --print-join-command

Run join again on node01.

 

 

 

Check the container status on node01.

  

Check the master status.

  

All nodes should now be Ready. Since every node has to start several components, if a node is NotReady you can check the Pod status across all nodes and make sure every pod has pulled its image and is in the running state:

kubectl get pod --all-namespaces -o wide

  

Configuring the Kubernetes UI (Dashboard)

Create kubernetes-dashboard.yaml:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1  
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

 

Run the following command to create the kubernetes-dashboard resources:

kubectl create -f kubernetes-dashboard.yaml

If you see:

Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": deployments.apps "kubernetes-dashboard" already exists

run delete to clean up first:

kubectl delete -f kubernetes-dashboard.yaml

Check the component status:

kubectl get pods --all-namespaces

 

 

ErrImagePull: pulling the image failed

Pull it manually and retag:

docker pull registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Then recreate the dashboard.

  

 

ImagePullBackOff

By default the image is pulled from the address in the configuration file; with IfNotPresent or Never, the local image is used.

IfNotPresent: use the local image first if it exists, otherwise pull.
Never: never pull; use the local image, and fail if it does not exist.

    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/rsqlh/kubernetes-dashboard:v1.10.1
        imagePullPolicy: IfNotPresent

 

Check the service mapping:

kubectl get service -n kube-system

 

 

 

 

Creating a user that can access the Dashboard

Create a file account.yaml with the following content:

# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply it, then retrieve the login token:

kubectl create -f account.yaml
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

 

Copy the token to log in.

 

 

configmaps is forbidden: User "system:serviceaccount:kube-system:admin-user" cannot list resource "configmaps" in API group "" in the namespace "default" 

Grant the user permissions:

kubectl create clusterrolebinding test:admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user

 

 

With the NodePort approach, the dashboard can be viewed on port XXXX of any node.

Docker Learning - Kubernetes - Spring Boot Application

 

References:

https://www.cnblogs.com/tylerzhou/p/10971336.html

https://www.cnblogs.com/zoujiaojiao/p/10986320.html