Deploying a Kubernetes Cluster with kubeadm
Goal: deploy a Kubernetes cluster using kubeadm.
Environment: CentOS 7
Steps: basic environment configuration -> pre-installation setup for Kubernetes (yum repo, images, and related configuration) -> kubeadm deployment (master) -> enabling the flannel-based Pod network -> joining nodes with kubeadm -> installing and using the dashboard component -> installing and using the heapster monitoring component -> access test
1. Basic environment configuration
The basic environment configuration below is required on the master and on every node.
(1) Network-related settings
Enable IPv4 forwarding:
vim /etc/sysctl.d/k8s.conf: add the line net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.d/k8s.conf
Bridge netfilter settings on CentOS 7:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
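These /proc writes do not survive a reboot; to make them persistent, the same keys can also be added to /etc/sysctl.d/k8s.conf (a minimal sketch, assuming the br_netfilter module is available):
# optional: persist the bridge netfilter settings across reboots
modprobe br_netfilter
cat <<EOF >> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf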
(2) Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
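The docker-ce packages used below come from the Docker CE yum repository, which the steps shown here do not add explicitly; one way to add it (official repo shown; a domestic mirror can be substituted):
# add the Docker CE repository (yum-config-manager is provided by yum-utils above)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo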
List the available Docker versions:
yum list docker-ce.x86_64 --showduplicates | sort -r
This article uses Kubernetes 1.10, so the validated Docker version 17.03.2 is chosen.
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos
Start Docker:
systemctl enable docker
systemctl start docker
Docker 1.13 and later sets the default policy of the iptables FORWARD chain to DROP, so it has to be opened again:
iptables -P FORWARD ACCEPT
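This policy is reset whenever Docker restarts; one way (not in the original text) to reapply it automatically is a systemd drop-in for docker.service:
# reapply FORWARD ACCEPT every time docker.service starts
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/10-forward-accept.conf
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload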
2. Pre-installation setup for Kubernetes (yum repo, images, and related configuration)
Configure the yum repository, using the Aliyun mirror inside China:
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Pull the Docker images. The default registry is Google's (k8s.gcr.io), which is unreachable from inside China, so mirrored images are pulled and re-tagged instead.
(1) Master node
vim k8s.sh
docker pull cnych/kube-apiserver-amd64:v1.10.0
docker pull cnych/kube-scheduler-amd64:v1.10.0
docker pull cnych/kube-controller-manager-amd64:v1.10.0
docker pull cnych/kube-proxy-amd64:v1.10.0
docker pull cnych/k8s-dns-kube-dns-amd64:1.14.8
docker pull cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull cnych/k8s-dns-sidecar-amd64:1.14.8
docker pull cnych/etcd-amd64:3.1.12
docker pull cnych/flannel:v0.10.0-amd64
docker pull cnych/pause-amd64:3.1
docker tag cnych/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag cnych/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag cnych/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
Run chmod +x k8s.sh, then execute ./k8s.sh.
Run docker images to check the local images:
Install the Kubernetes packages:
yum install -y kubelet kubeadm kubectl
Configure kubelet. First check which cgroup driver Docker is using:
docker info | grep Cgroup
Edit the kubelet configuration file: vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Set the cgroup driver to cgroupfs so that it matches Docker:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Allow kubelet to start even when swap is enabled (rather than turning swap off):
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
(2) Node
vim k8s-node.sh
docker pull cnych/kube-proxy-amd64:v1.10.0
docker pull cnych/flannel:v0.10.0-amd64
docker pull cnych/pause-amd64:3.1
docker pull cnych/kubernetes-dashboard-amd64:v1.8.3
docker pull cnych/heapster-influxdb-amd64:v1.3.3
docker pull cnych/heapster-grafana-amd64:v4.4.3
docker pull cnych/heapster-amd64:v1.4.2
docker pull cnych/k8s-dns-kube-dns-amd64:1.14.8
docker pull cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull cnych/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
These include the dashboard and heapster images as well.
Run the script the same way as on the master node.
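Each node also needs the kubelet and kubeadm packages (with the same cgroup and swap settings shown above) before it can join the cluster; a minimal sketch, assuming the same Aliyun Kubernetes repo is configured on the node:
# install the node-side packages from the Kubernetes repo
yum install -y kubelet kubeadm
systemctl enable kubelet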
3. kubeadm deployment (master)
Run the initialization command on the master node:
kubeadm init --kubernetes-version=v1.10.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.29
(Here 192.168.0.29 is the master's IP. The --kubernetes-version value should agree with the control-plane image tags pulled earlier, v1.10.0 in the script above; otherwise kubeadm will try to pull the missing versions from k8s.gcr.io.)
Afterwards, run mkdir -p $HOME/.kube and the other commands shown in the kubeadm output.
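For reference, the commands kubeadm init typically prints for setting up kubectl access are:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config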
Check the cluster component status with kubectl get cs:
4. Enable the flannel-based Pod network
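The kube-flannel.yml manifest needs to be fetched first; one way, assuming the v0.10.0 manifest from the coreos/flannel repository:
# download the flannel v0.10.0 manifest
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml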
Start flannel: kubectl apply -f kube-flannel.yml
Check pod status with kubectl get pods --all-namespaces:
5. Join nodes with kubeadm
On each node, run:
kubeadm join 192.168.0.29:6443 --token xxxx --discovery-token-ca-cert-hash xxxx
On the master node, run:
kubectl get nodes
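If the token printed by kubeadm init has expired or been lost, a fresh join command can be assembled on the master (a sketch based on the standard kubeadm/openssl procedure):
# create a new bootstrap token
kubeadm token create
# compute the CA certificate hash for --discovery-token-ca-cert-hash (prefix it with sha256:)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'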
6. Install and use the dashboard component
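The dashboard itself (v1.8.3, matching the image pulled on the nodes) is assumed to be deployed already; one common way is to apply the recommended manifest from the kubernetes/dashboard repository:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.8.3/src/deploy/recommended/kubernetes-dashboard.yaml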
Create the admin account manifest kubernetes-dashboard-admin.rbac.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
Apply it: kubectl create -f kubernetes-dashboard-admin.rbac.yaml
Look up the token of kubernetes-dashboard-admin:
kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-gsntr
The token field in the output is the token used to log in to the dashboard.
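For convenience (not part of the original steps), the token can also be extracted in one line:
kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d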
Change the access mode of the existing dashboard service to NodePort: edit the service and set type to NodePort.
kubectl -n kube-system edit service kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-05-29T09:09:30Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "2647"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: fe971993-631f-11e8-9154-fa163e598eb8
spec:
  clusterIP: 10.101.128.69
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30269
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
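The NodePort that was assigned can be confirmed with a quick query:
kubectl -n kube-system get service kubernetes-dashboard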
The dashboard is then reachable over HTTPS at https://<node-ip>:<NodePort> (30269 in the spec above).
7. Install and use the heapster monitoring component
Download the heapster manifests on the master node:
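One way to do this, assuming the deploy/kube-config layout of the kubernetes/heapster repository at the time (the target directory matches the ~/k8s/heapster/ path used below):
# fetch the heapster deployment manifests (influxdb backend plus RBAC)
git clone https://github.com/kubernetes/heapster.git
mkdir -p ~/k8s/heapster
cp heapster/deploy/kube-config/influxdb/*.yaml heapster/deploy/kube-config/rbac/heapster-rbac.yaml ~/k8s/heapster/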
Edit heapster.yaml and change the heapster image version to v1.4.2, matching the image pulled by the script below.
vim k8s-heapster.sh
docker pull cnych/heapster-influxdb-amd64:v1.3.3
docker pull cnych/heapster-grafana-amd64:v4.4.3
docker pull cnych/heapster-amd64:v1.4.2
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2
Run k8s-heapster.sh to download the images.
Enable heapster:
cd ~/k8s/heapster/
kubectl create -f ./
8. Access test
Check that all pods in kube-system are running normally:
kubectl get pods -n kube-system
Choose token login on the dashboard login page:
The main dashboard page: