
Quickly setting up a k8s experimental environment


Most of this was collected from articles found online; I have also noted the fixes for problems I ran into myself.

Installing k8s with kubeadm
Prepare the environment:
1. Configure the hosts file on every node
2. Disable the system firewall
3. Disable SELinux
4. Disable swap
5. Tune the kernel so that traffic crossing a bridge also enters the iptables/netfilter framework, by adding the following to /etc/sysctl.conf:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl -p
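The preparation steps above can be sketched as shell commands. This is a minimal sketch assuming CentOS 7 with firewalld (the platform the Aliyun YUM repository targets); run as root on every node:

```shell
# 2. Disable the system firewall
systemctl stop firewalld && systemctl disable firewalld

# 3. Put SELinux into permissive mode, now and after reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# 4. Turn off swap now and keep it off after reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# 5. Send bridged traffic through iptables/netfilter
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
```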
Install with kubeadm:
1. First, configure the Aliyun K8S YUM repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum -y install epel-release
yum clean all
yum makecache

2. Install kubeadm and related packages
yum -y install docker kubelet kubeadm kubectl kubernetes-cni

3. Start the Docker and kubelet services

systemctl enable docker && systemctl start docker

systemctl enable kubelet && systemctl start kubelet

Note: at this point the kubelet service will be in a failed state, because its main configuration file, kubelet.conf, does not exist yet. This can safely be ignored for now; the file is generated when the Master node is initialized.
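A quick way to confirm that this is the expected failure and not something else (a sketch; log paths and wording may vary by distribution):

```shell
# kubelet keeps restarting until /etc/kubernetes/kubelet.conf exists;
# the unit status and recent log lines should point at the missing config file
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20
```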

4. Configure a container registry mirror
Because gcr.io cannot be reached directly to pull images, a domestic registry mirror (accelerator) is needed.
To configure an Aliyun accelerator:
Log in at https://cr.console.aliyun.com/
Find and click the image accelerator button on the page to see your personal accelerator URL; select the CentOS tab to see the configuration instructions.
Note: when using Docker on Aliyun with an accelerator configured, a malformed daemon.json can prevent the Docker daemon from starting; the fix is to correct daemon.json.
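A minimal sketch of a well-formed daemon.json carrying a registry mirror; the accelerator URL is a placeholder, so substitute the personal address shown in your Aliyun console:

```shell
mkdir -p /etc/docker
# daemon.json must be a single valid JSON object; a stray comma or
# missing brace here is a common reason the docker daemon fails to start
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
```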

5. Download the K8S images
With the accelerator sorted out, download the k8s images, then re-tag them with names starting with k8s.gcr.io/ so that kubeadm can find and use them.

#!/bin/bash

images=(kube-proxy-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0 etcd-amd64:3.1.12 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8 k8s-dns-dnsmasq-nanny-amd64:1.14.8)
for imageName in ${images[@]} ; do
  docker pull keveon/$imageName
  docker tag keveon/$imageName k8s.gcr.io/$imageName
  docker rmi keveon/$imageName
done

The shell script above does three things: downloads the required container images, re-tags them with the names k8s expects, and removes the old tags.
Note: the image versions must match the installed kubeadm version exactly, otherwise init will time out.
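A quick sanity check that the re-tagged images actually match the installed kubeadm version (a sketch using standard kubeadm/docker CLI commands):

```shell
# The version printed here must match the :v1.10.0 image tags pulled above,
# otherwise kubeadm init will time out waiting for the control plane
kubeadm version -o short
docker images | grep '^k8s.gcr.io/'
```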

6. Initialize the K8S Master
After the script above finishes downloading, run:
kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.224.0.0/16
Note: the --kubernetes-version=v1.10.0 option is required; without it kubeadm tries to look up the latest version online, which fails because the google site is blocked. Version v1.10.0 is used here; as mentioned above, the downloaded image versions must match the K8S version or init will time out.
The command takes about a minute. While it runs you can follow tail -f /var/log/messages to watch the configuration progress. Save a copy of the last block of output — it is needed later when joining worker nodes.

7. Configure kubectl credentials

# For non-root users
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# For root
export KUBECONFIG=/etc/kubernetes/admin.conf
This can also be made permanent via ~/.bash_profile:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
8. Install the flannel network
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
9. Join node1 and node2 to the cluster
Note: attentive readers will recognize that the join command is exactly the one saved from the Master init output earlier.
By default the Master node does not schedule workloads. If you want an all-in-one k8s environment, the Master can be made to act as a regular Node as well by running:

kubectl taint nodes --all node-role.kubernetes.io/master-

10. Verify that the K8S Master was set up successfully
# Check node status
kubectl get nodes
# Check pod status
kubectl get pods --all-namespaces
# Check K8S cluster component status
kubectl get cs

Initializing the cluster (sample output)

[root@vm-for-lhz-test-191 ~]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.224.0.0/16
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vm-for-lhz-test-191 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.191]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vm-for-lhz-test-191] and IPs [192.168.100.191]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 30.504825 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node vm-for-lhz-test-191 as master by adding a label and a taint
[markmaster] Master vm-for-lhz-test-191 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: nz32q6.6hgq3hhrmokdprnr
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/


You can now join any number of machines by running the following on each node
as root:

If you forget the token, a new one can be created on the master node with kubeadm token create:
[root@vm-for-lhz-test-192 ~]# kubeadm join 192.168.100.191:6443 --token b3cr72.33pjt6evrwtxrgsh --discovery-token-unsafe-skip-ca-verification
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.100.191:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.100.191:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.100.191:6443"
[discovery] Successfully established connection with API Server "192.168.100.191:6443"

This node has joined the cluster:

  • Certificate signing request was sent to master and a response
    was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@vm-for-lhz-test-191 ~]# kubectl get nodes --all-namespaces=true
NAME                  STATUS    ROLES     AGE       VERSION
vm-for-lhz-test-191   Ready     master    42m       v1.10.0
vm-for-lhz-test-192   Ready     <none>    3m        v1.10.0
vm-for-lhz-test-193   Ready     <none>    11m       v1.10.0
vm-for-lhz-test-194   Ready     <none>    3m        v1.10.0

Troubleshooting: if kubelet fails with the error below, Docker's cgroup driver must be changed to systemd:
Apr 14 18:16:40 vm-for-lhz-test-193 kubelet: F0414 18:16:40.481654 18286 server.go:233] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
Then reload systemd and restart Docker for the change to take effect:
systemctl daemon-reload && systemctl restart docker
