Deploying a Kubernetes cluster with kubeadm
Published: 2021-11-10
- This article deploys a single-master Kubernetes v1.19.11 cluster with kubeadm; all images and YAML files are downloaded in advance for an offline installation.
1. Modify the Linux system configuration
- Configure host name resolution
vim /etc/hosts
192.168.1.1 master
- Disable the swap partition
swapoff -a
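swapoff -a only disables swap until the next reboot. To keep it disabled permanently, the swap entry in /etc/fstab can be commented out as well (a minimal sketch, assuming a standard fstab layout):
sed -ri 's/.*swap.*/#&/' /etc/fstab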
- Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains (written to a sysctl drop-in file, e.g. /etc/sysctl.d/k8s.conf)
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
- Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
2. Pull the Kubernetes component images via GitHub and Alibaba Cloud (Aliyun); here the kube-apiserver image is used as an example:
FROM kubeimage/kube-apiserver-amd64:v1.19.11
MAINTAINER zhengwei <[email protected]>
In the Alibaba Cloud image registry, create a new image repository: select the GitHub repository that holds the Dockerfile and create it, then choose the path and tag inside the new repository and build the image. The images that need to be pulled are listed below:
docker pull kubeimage/kube-apiserver-amd64:v1.19.11
docker pull kubeimage/kube-controller-manager-amd64:v1.19.11
docker pull kubeimage/kube-proxy-amd64:v1.19.11
docker pull kubeimage/kube-scheduler-amd64:v1.19.11
docker pull gcr.io/google-containers/etcd:3.4.13-0
docker pull gcr.io/google-containers/pause:3.2
docker pull gcr.io/google-containers/coredns:1.7.0
After the images have been pulled to the server, package each required image into a local archive with commands such as:
docker save kubeimage/kube-apiserver-amd64:v1.19.11 -o kube-apiserver-amd64.img
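Note that kubeadm v1.19 looks for the control-plane images under their k8s.gcr.io names, while the images above are pulled under the kubeimage/ and gcr.io/google-containers/ names. Below is a minimal sketch of saving everything on the machine with internet access and re-tagging on the offline hosts; the archive file names follow the kube-apiserver-amd64.img convention above and are otherwise an assumption.
## on the machine with internet access: save every image to an archive
for img in kube-apiserver-amd64 kube-controller-manager-amd64 kube-proxy-amd64 kube-scheduler-amd64; do
  docker save kubeimage/${img}:v1.19.11 -o ${img}.img
done
docker save gcr.io/google-containers/etcd:3.4.13-0 -o etcd.img
docker save gcr.io/google-containers/pause:3.2 -o pause.img
docker save gcr.io/google-containers/coredns:1.7.0 -o coredns.img
## on the offline hosts: load the archives and re-tag them to the names kubeadm expects
docker load -i kube-apiserver-amd64.img
docker tag kubeimage/kube-apiserver-amd64:v1.19.11 k8s.gcr.io/kube-apiserver:v1.19.11
docker load -i etcd.img
docker tag gcr.io/google-containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
## repeat the load/tag pair for kube-controller-manager, kube-scheduler, kube-proxy, pause and coredns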
3. Install Docker offline
Install the dependency packages required by Docker:
rpm -ivh device-mapper-event-1.02.170-6.el7_9.5.x86_64.rpm \
device-mapper-1.02.170-6.el7_9.5.x86_64.rpm \
device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64.rpm \
device-mapper-libs-1.02.170-6.el7_9.5.x86_64.rpm \
device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64.rpm \
libxml2-python-2.9.1-6.el7_9.6.x86_64.rpm \
libxml2-2.9.1-6.el7_9.6.x86_64.rpm \
python-chardet-2.2.1-3.el7.noarch.rpm \
python-kitchen-1.1.1-5.el7.noarch.rpm \
yum-utils-1.1.31-54.el7_8.noarch.rpm \
lvm2-2.02.187-6.el7_9.5.x86_64.rpm \
lvm2-libs-2.02.187-6.el7_9.5.x86_64.rpm
Install Docker:
rpm -ivh audit-libs-python-2.8.5-4.el7.x86_64.rpm \
container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm \
checkpolicy-2.5-8.el7.x86_64.rpm \
docker-ce-20.10.10-3.el7.x86_64.rpm \
containerd.io-1.4.11-3.1.el7.x86_64.rpm \
docker-ce-rootless-extras-20.10.10-3.el7.x86_64.rpm \
fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm \
libsemanage-python-2.5-14.el7.x86_64.rpm \
libcgroup-0.41-21.el7.x86_64.rpm \
fuse3-libs-3.6.1-4.el7.x86_64.rpm \
python-IPy-0.75-6.el7.noarch.rpm \
policycoreutils-python-2.5-34.el7.x86_64.rpm \
docker-scan-plugin-0.9.0-3.el7.x86_64.rpm \
slirp4netns-0.4.3-4.el7_8.x86_64.rpm \
setools-libs-3.3.8-4.el7.x86_64.rpm \
docker-ce-cli-20.10.10-3.el7.x86_64.rpm
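If all of the RPMs above have been copied into a single directory on the offline host, they can also be installed in one transaction so that dependency ordering is resolved automatically; a hedged alternative (the directory name is only an example):
cd /root/docker-rpms
yum localinstall -y ./*.rpm
## or equivalently: rpm -Uvh ./*.rpm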
After the installation completes, start the service with systemctl start docker and use docker info to verify that the installation succeeded:
[root@VM-0-9-centos /]# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
scan: Docker Scan (Docker Inc., v0.9.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.10
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-1160.31.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.795GiB
Name: VM-0-9-centos
ID: FPWS:CMYP:BG3D:4S4Y:WV4Y:437N:FIME:MGA7:7LX2:TOKM:DTH5:OU4J
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
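Docker was started manually above but is not yet enabled at boot; the enable step below mirrors the systemctl enable kubelet used later. Switching to the systemd cgroup driver (instead of the cgroupfs shown in the output) is an optional tweak and not part of the original walkthrough:
systemctl enable docker
## optional: use the systemd cgroup driver
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker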
4. Install kubelet, kubeadm, and kubectl
rpm -ivh libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm \
libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm \
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm \
socat-1.7.3.2-2.el7.x86_64.rpm \
conntrack-tools-1.4.4-7.el7.x86_64.rpm \
cri-tools-1.19.0-0.x86_64.rpm \
kubernetes-cni-0.8.7-0.x86_64.rpm \
kubectl-1.19.11-0.x86_64.rpm \
kubelet-1.19.11-0.x86_64.rpm \
kubeadm-1.19.11-0.x86_64.rpm
## run on all nodes
systemctl enable kubelet
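A quick sanity check that the expected versions were installed (optional):
kubeadm version -o short
kubectl version --client --short
kubelet --version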
5. Initialize the Kubernetes cluster
kubeadm init --kubernetes-version=1.19.11 \
--apiserver-advertise-address=192.168.1.1 \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.122.0.0/16
W0408 09:36:36.121603 14098 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01.paas.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.122.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01.paas.com localhost] and IPs [192.168.122.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01.paas.com localhost] and IPs [192.168.122.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0408 09:36:43.343191 14098 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0408 09:36:43.344303 14098 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.002541 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01.paas.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01.paas.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: v2r5a4.veazy2xhzetpktfz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.1:6443 --token v2r5a4.veazy2xhzetpktfz \
--discovery-token-ca-cert-hash sha256:daded8514c8350f7c238204979039ff9884d5b595ca950ba8bbce80724fd65d4
Run the following commands to set up the kubeconfig file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
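At this point the control plane is running, but the master node will report NotReady until a CNI plugin is deployed (flannel, in step 7). A quick check:
kubectl get nodes
kubectl get pods -n kube-system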
6. Import the images on the worker node and join it to the cluster
## load the images needed on the worker node (the archive names follow the save step in section 2)
docker load -i kube-proxy-amd64.img    # k8s.gcr.io/kube-proxy:v1.19.11
docker load -i coredns.img             # k8s.gcr.io/coredns:1.7.0
docker load -i pause.img               # k8s.gcr.io/pause:3.2
## re-tag to the k8s.gcr.io names as described in section 2 if they were saved under the kubeimage/ names
## join the node to the cluster (run on the worker, using the command printed by kubeadm init)
kubeadm join 192.168.1.1:6443 --token v2r5a4.veazy2xhzetpktfz \
--discovery-token-ca-cert-hash sha256:daded8514c8350f7c238204979039ff9884d5b595ca950ba8bbce80724fd65d4
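The bootstrap token printed by kubeadm init is only valid for 24 hours. If it has expired by the time a node joins, generate a fresh join command on the master:
kubeadm token create --print-join-command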
7. Deploy the flannel network plugin
- Export and import the images
docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
docker save rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2 -o mirrored-flannelcni-flannel-cni-plugin.img
docker load -i mirrored-flannelcni-flannel-cni-plugin.img
docker pull quay.io/coreos/flannel:v0.15.0
docker save quay.io/coreos/flannel:v0.15.0 -o flannel.img
docker load -i flannel.img
- Apply the kube-flannel.yml manifest below:
## kubectl apply -f kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
image: quay.io/coreos/flannel:v0.15.0
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.15.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
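Once the manifest is applied, the flannel DaemonSet should start a Pod on every node and the nodes should become Ready:
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes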
Problem 1: the kube-scheduler and kube-controller-manager components keep restarting. Edit their static Pod manifests and comment out the - --port=0 argument; the kubelet recreates the Pods automatically once the files are saved.
cd /etc/kubernetes/manifests
vim kube-controller-manager.yaml
#- --port=0
vim kube-scheduler.yaml
#- --port=0
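To confirm the two components have settled after the change:
kubectl get pods -n kube-system | grep -E 'kube-scheduler|kube-controller-manager'
kubectl get cs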