Quick deployment of Kubernetes (HA) with kubeadm
The current version of kubeadm does not natively support deploying a cluster in HA mode, but in practice you can deploy with kubeadm and then make a small number of manual changes to end up with an HA Kubernetes cluster. This deployment is based on Ubuntu 16.04 and uses the latest Docker release, 17.06; it applies to Kubernetes 1.7.x, and this article uses 1.7.6.
1 Environment preparation
Six machines were prepared for the installation and testing work, with the following details:
IP | Name | Role | OS |
---|---|---|---|
172.16.2.1 | Master01 | Controller,etcd | Ubuntu16.04 |
172.16.2.2 | Master02 | Controller,etcd | Ubuntu16.04 |
172.16.2.3 | Master03 | Controller,etcd | Ubuntu16.04 |
172.16.2.11 | Node01 | Compute | Ubuntu16.04 |
172.16.2.12 | Node02 | Compute | Ubuntu16.04 |
172.16.2.13 | Node03 | Compute | Ubuntu16.04 |
172.16.2.100 | VIP | VIP | - |
2 Install Docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get upgrade
apt-get install docker-ce=17.06.0~ce-0~ubuntu
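To keep a later apt-get upgrade from moving Docker past the version used here, it may be worth pinning the package; an optional sketch:
#Optional: confirm the installed version and hold it at 17.06
docker version --format '{{.Server.Version}}'
apt-mark hold docker-ce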
3 Install the etcd cluster
docker-compose is used for the installation; of course, if that feels like too much trouble, you can simply use docker run instead.
docker-compose.yml for etcd on Master01:
etcd:
image: quay.io/coreos/etcd:v3.1.5
command: etcd --name etcd-srv1 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://172.16.2.1:2379 --initial-advertise-peer-urls http://172.16.2.1:2380 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-srv1=http://172.16.2.1:2380,etcd-srv2=http://172.16.2.2:2380,etcd-srv3=http://172.16.2.3:2380" --initial-cluster-state new
net: "bridge"
ports:
- "2379:2379"
- "2380:2380"
restart: always
stdin_open: true
tty: true
volumes:
- /store/etcd:/var/etcd
docker-compose.yml for etcd on Master02:
etcd:
image: quay.io/coreos/etcd:v3.1.5
command: etcd --name etcd-srv2 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://172.16.2.2:2379 --initial-advertise-peer-urls http://172.16.2.2:2380 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-srv1=http://172.16.2.1:2380,etcd-srv2=http://172.16.2.2:2380,etcd-srv3=http://172.16.2.3:2380" --initial-cluster-state new
net: "bridge"
ports:
- "2379:2379"
- "2380:2380"
restart: always
stdin_open: true
tty: true
volumes:
- /store/etcd:/var/etcd
docker-compose.yml for etcd on Master03:
etcd:
image: quay.io/coreos/etcd:v3.1.5
command: etcd --name etcd-srv3 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://172.16.2.3:2379 --initial-advertise-peer-urls http://172.16.2.3:2380 --listen-peer-urls http://0.0.0.0:2380 --initial-cluster-token etcd-cluster --initial-cluster "etcd-srv1=http://172.16.2.1:2380,etcd-srv2=http://172.16.2.2:2380,etcd-srv3=http://172.16.2.3:2380" --initial-cluster-state new
net: "bridge"
ports:
- "2379:2379"
- "2380:2380"
restart: always
stdin_open: true
tty: true
volumes:
- /store/etcd:/var/etcd
After creating the docker-compose.yml files, deploy with the command docker-compose up -d.
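Before moving on, it is worth checking that the three members actually formed a cluster. A minimal check sketch, assuming the container was started from the quay.io/coreos/etcd:v3.1.5 image as above (the docker ps filter is just one way to locate it):
#Query cluster health through the client endpoints
docker exec $(docker ps -qf "ancestor=quay.io/coreos/etcd:v3.1.5") etcdctl --endpoints=http://172.16.2.1:2379,http://172.16.2.2:2379,http://172.16.2.3:2379 cluster-health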
4 Install the Kubernetes tool packages
There are three options: packages provided by the author, installation from the official repository, and building from the release project. Because apt-get cannot reach the repository Google provides directly, and the versions available in unofficial repositories are rather old, try building from the release project or downloading the author's packages if you need a newer version.
Provided by the author
If you would rather not bother :-D, you can download the DEB tool packages from the location provided by the author and install them directly (download link).
#Install the dependencies required by kubelet
apt-get install -y socat ebtables
dpkg -i kubelet_1.7.6-00_amd64.deb kubeadm_1.7.6-00_amd64.deb kubernetes-cni_0.5.1-00_amd64.deb kubectl_1.7.6-00_amd64.deb
Install from the official repository
How to get across the GFW is not covered in detail; you know the drill.
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubernetes-cni kubectl
By default the latest stable version is installed. To pin a specific version, install it explicitly:
apt-get install -y kubeadm=1.7.6-00
The available versions can be listed with the command apt-cache madison kubeadm.
Build from the release project
git clone https://github.com/kubernetes/release.git
docker build --tag=debian-packager debian
docker run --volume="$(pwd)/debian:/src" debian-packager
After the build completes, the deb packages are generated under debian/bin; change into that directory and install them.
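Whichever of the three routes you take, a quick sanity check that the expected 1.7.6 binaries are in place:
kubeadm version
kubectl version --client
kubelet --version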
5 Prepare the images
5.1 Download the Docker images
The images required for a kubeadm-based Kubernetes installation are not provided on the official Docker registry; they can only be pulled from Google's registry, gcr.io, which raises the GFW problem again. Mirror images for k8s 1.7.6 have therefore been pushed to Docker Hub and can be pulled directly. The dashboard does not track the kubelet release line closely, so any version works; this article uses kubernetes-dashboard-amd64:v1.7.0.
Images required by kubernetes-1.7.6:
- etcd-amd64:3.0.17
- pause-amd64:3.0
- kube-proxy-amd64:v1.7.6
- kube-scheduler-amd64:v1.7.6
- kube-controller-manager-amd64:v1.7.6
- kube-apiserver-amd64:v1.7.6
- kubernetes-dashboard-amd64:v1.7.0
- k8s-dns-sidecar-amd64:1.14.4
- k8s-dns-kube-dns-amd64:1.14.4
- k8s-dns-dnsmasq-nanny-amd64:1.14.4
To save time, simply run the following script to pull the images ahead of time; the later steps will then go much faster:
#!/bin/bash
images=(kube-proxy-amd64:v1.7.6 kube-scheduler-amd64:v1.7.6 kube-controller-manager-amd64:v1.7.6 kube-apiserver-amd64:v1.7.6 etcd-amd64:3.0.17 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.7.0 k8s-dns-sidecar-amd64:1.14.4 k8s-dns-kube-dns-amd64:1.14.4 k8s-dns-dnsmasq-nanny-amd64:1.14.4)
for imageName in ${images[@]} ; do
docker pull cloudnil/$imageName
done
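If you prefer not to override the image prefix as described in the next section, an alternative sketch is to retag the mirrored images to the gcr.io/google_containers names that kubeadm 1.7 expects by default (assumed default prefix; adjust if yours differs):
#!/bin/bash
#Optional alternative to KUBE_REPO_PREFIX: retag the mirrors to the default gcr.io names
images=(kube-proxy-amd64:v1.7.6 kube-scheduler-amd64:v1.7.6 kube-controller-manager-amd64:v1.7.6 kube-apiserver-amd64:v1.7.6 etcd-amd64:3.0.17 pause-amd64:3.0 k8s-dns-sidecar-amd64:1.14.4 k8s-dns-kube-dns-amd64:1.14.4 k8s-dns-dnsmasq-nanny-amd64:1.14.4)
for imageName in ${images[@]} ; do
docker tag cloudnil/$imageName gcr.io/google_containers/$imageName
done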
5.2 KUBE_REPO_PREFIX configuration
Pointing KUBE_REPO_PREFIX at a different repository is what allows the images downloaded from Docker Hub to be used directly. Add the configuration with the commands below, which set 1) the KUBE_REPO_PREFIX environment variable and 2) the KUBELET_EXTRA_ARGS parameter.
sed -i '/mesg n/i\export KUBE_REPO_PREFIX=cloudnil' ~/.profile
source ~/.profile
cat > /etc/systemd/system/kubelet.service.d/20-extra-args.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=cloudnil/pause-amd64:3.0"
EOF
systemctl daemon-reload
systemctl restart kubelet
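A quick optional check that both overrides took effect:
#KUBELET_EXTRA_ARGS should show up in the kubelet unit, KUBE_REPO_PREFIX in the profile
systemctl cat kubelet | grep KUBELET_EXTRA_ARGS
grep KUBE_REPO_PREFIX ~/.profile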
6 Install the master nodes
Because installing kubeadm and kubelet creates the /etc/kubernetes directory, and kubeadm init first checks whether that directory already exists, we clean the environment with kubeadm reset first.
kubeadm reset
kubeadm init --apiserver-advertise-address=172.16.2.1 --kubernetes-version=v1.7.6
When using an external etcd cluster, the --external-etcd-endpoints parameter from earlier kubeadm versions no longer exists, so instead pass a configuration file, kubeadm-config.yml, via the --config parameter:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
# networking:
# podSubnet: 10.244.0.0/16
apiServerCertSANs:
- master01
- master02
- master03
- 172.16.2.1
- 172.16.2.2
- 172.16.2.3
- 172.16.2.100
etcd:
endpoints:
- http://172.16.2.1:2379
- http://172.16.2.2:2379
- http://172.16.2.3:2379
token: 67e411.zc3617bb21ad7ee3
kubernetesVersion: v1.7.6
PS: the token was generated with the command kubeadm token generate.
Initialization command:
kubeadm init --config kubeadm-config.yml
Note: if you plan to use the flannel network, uncomment the networking section. If the machine has multiple network interfaces, set --apiserver-advertise-address=<ip-address> according to your actual setup; with a single interface it can be omitted.
The installation takes roughly 2-3 minutes, and the output looks like this:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.7.6
[tokens] Generated token: "67e411.zc3617bb21ad7ee3"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 21.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[addons] Created essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node:
kubeadm join --token=67e411.zc3617bb21ad7ee3 172.16.2.1
Modify the admission-control policy in /etc/kubernetes/manifests/kube-apiserver.yaml:
root@master01:/etc/kubernetes/manifests# vi kube-apiserver.yaml
#- --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
- --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds
Copy /etc/kubernetes to Master02 and Master03:
scp -r /etc/kubernetes 172.16.2.2:/etc/kubernetes
scp -r /etc/kubernetes 172.16.2.3:/etc/kubernetes
Then modify the following files under /etc/kubernetes and /etc/kubernetes/manifests:
#Master02
root@master02:/etc/kubernetes# sed -i 's/172.16.2.1:6443/172.16.2.2:6443/g' `grep 172.16.2.1:6443 . -rl`
root@master02:/etc/kubernetes# sed -i 's/--advertise-address=172.16.2.1/--advertise-address=172.16.2.2/g' manifests/kube-apiserver.yaml
#Master03
root@master03:/etc/kubernetes# sed -i 's/172.16.2.1:6443/172.16.2.3:6443/g' `grep 172.16.2.1:6443 . -rl`
root@master03:/etc/kubernetes# sed -i 's/--advertise-address=172.16.2.1/--advertise-address=172.16.2.3/g' manifests/kube-apiserver.yaml
Restart the kubelet service on Master02 and Master03:
systemctl restart kubelet
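To confirm that all three control planes came up before putting a VIP in front of them, list the static-pod components (the pod names contain the host names, so yours may differ):
kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system -o wide | grep -E 'kube-(apiserver|controller-manager|scheduler)'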
7 Install keepalived
Install keepalived on Master01, Master02, and Master03, and configure the VIP as 172.16.2.100.
The detailed steps are omitted here; there are plenty of tutorials online that you can consult.
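For reference, a minimal sketch of what the Master01 configuration might look like; the interface name (ens160), priorities, and password are placeholders to adapt, and Master02/Master03 should use state BACKUP with lower priorities:
apt-get install -y keepalived
cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_instance VI_1 {
    state MASTER              # BACKUP on Master02/Master03
    interface ens160          # adjust to the actual NIC name
    virtual_router_id 51
    priority 100              # e.g. 90/80 on the backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        172.16.2.100
    }
}
EOF
systemctl restart keepalived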
8 Install the worker nodes
Once the master nodes are ready, the worker nodes are straightforward.
kubeadm reset
kubeadm join --token=67e411.zc3617bb21ad7ee3 172.16.2.100
The output looks like this:
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://172.16.2.100:9898/cluster-info/v1/?token-id=f11877"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.2.100:6443]
[bootstrap] Trying to connect to endpoint https://172.16.2.100:6443
[bootstrap] Detected server version: v1.7.6
[bootstrap] Successfully established connection with endpoint "https://172.16.2.100:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:yournode | CA: false
Not before: 2017-06-28 19:44:00 +0000 UTC Not After: 2018-06-28 19:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
After the installation you can check the node status; since no network component is installed yet, every node is NotReady:
NAME STATUS AGE VERSION
master01 NotReady 1d v1.7.6
master02 NotReady 1d v1.7.6
master03 NotReady 1d v1.7.6
node01 NotReady 1d v1.7.6
node02 NotReady 1d v1.7.6
node03 NotReady 1d v1.7.6
9 Install the Calico network
There are plenty of choices for the network component; pick calico, weave, or flannel according to your needs. Calico performs best; flannel's VXLAN backend is also decent, although its default UDP backend performs poorly; weave's performance is comparatively weak and is fine for a test environment but not recommended for production. Ready-made YAML is available in the addons, so this article goes with the calico network.
kubectl apply -f https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
If you use an external etcd, remove the etcd-related parts of the manifest and change etcd_endpoints: [ETCD_ENDPOINTS]:
# Calico Version v2.5.1
# https://docs.projectcalico.org/v2.5/releases#v2.5.1
# This manifest includes the following component versions:
# calico/node:v2.5.1
# calico/cni:v1.10.0
# calico/kube-policy-controller:v0.7.0
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# The location of your etcd cluster. This uses the Service clusterIP defined below.
etcd_endpoints: "http://172.16.2.1:2379,http://172.16.2.2:2379,http://172.16.2.3:2379"
# Configure the Calico backend to use.
calico_backend: "bird"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.1.0",
"type": "calico",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"log_level": "info",
"mtu": 1500,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s",
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
}
}
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
# This, along with the annotation above marks this pod as a critical add-on.
- key: CriticalAddonsOnly
operator: Exists
serviceAccountName: calico-cni-plugin
containers:
# Runs calico/node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: quay.io/calico/node:v2.5.1
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Enable BGP. Disable to enforce policy only.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "kubeadm,bgp"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Configure the IP Pool from which Pod IPs will be chosen.
- name: CALICO_IPV4POOL_CIDR
value: "10.68.0.0/16"
- name: CALICO_IPV4POOL_IPIP
value: "always"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
value: "1440"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
# Auto-detect the BGP IP address.
- name: IP
value: ""
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
httpGet:
path: /readiness
port: 9099
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
# This container installs the Calico CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: quay.io/calico/cni:v1.10.0
command: ["/install-cni.sh"]
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
volumes:
# Used by calico/node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
---
# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: calico-policy-controller
namespace: kube-system
labels:
k8s-app: calico-policy
spec:
# The policy controller can only have a single active instance.
replicas: 1
strategy:
type: Recreate
template:
metadata:
name: calico-policy-controller
namespace: kube-system
labels:
k8s-app: calico-policy-controller
annotations:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
# The policy controller must run in the host network namespace so that
# it isn't governed by policy that would prevent it from working.
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
# This, along with the annotation above marks this pod as a critical add-on.
- key: CriticalAddonsOnly
operator: Exists
serviceAccountName: calico-policy-controller
containers:
- name: calico-policy-controller
image: quay.io/calico/kube-policy-controller:v0.7.0
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# The location of the Kubernetes API. Use the default Kubernetes
# service for API access.
- name: K8S_API
value: "https://kubernetes.default:443"
# Since we're running in the host namespace and might not have KubeDNS
# access, configure the container's /etc/hosts to resolve
# kubernetes.default to the correct service clusterIP.
- name: CONFIGURE_ETC_HOSTS
value: "true"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-cni-plugin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-cni-plugin
subjects:
- kind: ServiceAccount
name: calico-cni-plugin
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-cni-plugin
namespace: kube-system
rules:
- apiGroups: [""]
resources:
- pods
- nodes
verbs:
- get
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-cni-plugin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-policy-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-policy-controller
subjects:
- kind: ServiceAccount
name: calico-policy-controller
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-policy-controller
namespace: kube-system
rules:
- apiGroups:
- ""
- extensions
resources:
- pods
- namespaces
- networkpolicies
verbs:
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-policy-controller
namespace: kube-system
Check the state of the components on each node:
NAME READY STATUS RESTARTS AGE
calico-node-0cjx5 2/2 Running 0 1d
calico-node-1vj9s 2/2 Running 0 1d
calico-node-222v0 2/2 Running 0 1d
calico-node-7nqj7 2/2 Running 0 1d
calico-node-7tvh9 2/2 Running 0 1d
calico-node-86313 2/2 Running 2 1d
calico-policy-controller-3691403067-43wm6 1/1 Running 3 1d
kube-apiserver-master01 1/1 Running 3 1d
kube-apiserver-master02 1/1 Running 1 1d
kube-apiserver-master03 1/1 Running 1 1d
kube-controller-manager-master01 1/1 Running 4 1d
kube-controller-manager-master02 1/1 Running 2 1d
kube-controller-manager-master03 1/1 Running 2 1d
kube-dns-4099109879-3hqtq 3/3 Running 0 1d
kube-proxy-43j51 1/1 Running 1 1d
kube-proxy-4z8mx 1/1 Running 0 1d
kube-proxy-8w1xh 1/1 Running 0 1d
kube-proxy-g2hv8 1/1 Running 1 1d
kube-proxy-hzkmc 1/1 Running 1 1d
kube-proxy-l91xr 1/1 Running 0 1d
kube-scheduler-master01 1/1 Running 4 1d
kube-scheduler-master02 1/1 Running 2 1d
kube-scheduler-master03 1/1 Running 2 1d
Note: kube-dns only reaches the Running state after calico has been configured.
10 Deploy the kube-dns cluster
Delete the original single-instance kube-dns:
kubectl delete deploy kube-dns -n kube-system
Deploy a multi-replica kube-dns cluster; use the following kube-dns.yml as a reference:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
k8s-app: kube-dns
name: kube-dns
namespace: kube-system
spec:
replicas: 3
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
k8s-app: kube-dns
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values:
- kube-dns
topologyKey: "kubernetes.io/hostname"
containers:
- args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
image: cloudnil/k8s-dns-kube-dns-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: kubedns
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /kube-dns-config
name: kube-dns-config
- args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
image: cloudnil/k8s-dns-dnsmasq-nanny-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: dnsmasq
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources:
requests:
cpu: 150m
memory: 20Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/k8s/dns/dnsmasq-nanny
name: kube-dns-config
- args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
image: cloudnil/k8s-dns-sidecar-amd64:1.14.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 5
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: sidecar
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
cpu: 10m
memory: 20Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: Default
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: kube-dns
serviceAccountName: kube-dns
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- key: CriticalAddonsOnly
operator: Exists
volumes:
- configMap:
defaultMode: 420
name: kube-dns
optional: true
name: kube-dns-config
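Apply the manifest and check that the three replicas are spread across different nodes (the file name matches the one above):
kubectl apply -f kube-dns.yml
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide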
11 Deploy the Dashboard
Download kubernetes-dashboard.yaml:
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Modify the configuration so that it deploys into the default namespace, and add an Ingress definition; once nginx-ingress is configured later, the dashboard can be reached directly through the bound domain name.
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: default
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: default
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: cloudnil/kubernetes-dashboard-amd64:v1.7.0
ports:
- containerPort: 9090
protocol: TCP
args:
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: default
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: default
spec:
rules:
- host: dashboard.cloudnil.com
http:
paths:
- path: /
backend:
serviceName: kubernetes-dashboard
servicePort: 80
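Apply the modified manifest and check the resulting objects (assuming it was saved back to kubernetes-dashboard.yaml):
kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods,svc -l k8s-app=kubernetes-dashboard
kubectl get ing dashboard-ingress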
12 Expose the Dashboard service to the public network
There are three ways to expose a Kubernetes Service to the outside world:
- LoadBalancer Service
- NodePort Service
- Ingress
A LoadBalancer Service is a component in which Kubernetes integrates deeply with a cloud platform. When a service is exposed through a LoadBalancer Service, a load balancer is actually created on the underlying cloud platform to expose it externally. Cloud support for LoadBalancer Services is now fairly complete, for example GCE and DigitalOcean abroad, Alibaba Cloud domestically, and private OpenStack clouds; because it is tightly coupled to the cloud platform, a LoadBalancer Service can only be used on certain cloud platforms.
A NodePort Service, as the name suggests, exposes a port on every node in the cluster and maps that port to a specific service. Although each node has plenty of ports (0-65535), for security and usability reasons (things get messy once there are many services, and there are port conflicts) it is not used much in practice.
Ingress exposes services externally through an open-source reverse proxy / load balancer such as nginx; you can think of an Ingress simply as a piece of configuration for domain-based forwarding, much like an upstream block in nginx.
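For completeness, here is a minimal sketch of exposing the dashboard above through a NodePort Service instead of the Ingress; the service name and the nodePort value 30080 are only illustrative:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30080
EOF
#The dashboard is then reachable at http://<any-node-ip>:30080 (the NodePort range defaults to 30000-32767)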