
k8s setup and deployment


1. Prepare the server VMs (they must have Internet access)

192.168.1.11   CPU >= 2 cores   Mem >= 2G   hostname: master   /dev/vda1 50G

192.168.1.12   CPU >= 2 cores   Mem >= 2G   hostname: node     /dev/vda1 50G

2. Software versions

OS                Kubernetes   Docker       kubeadm   kubectl   kubelet

CentOS 7.5.1804   v1.13        18.06.1-ce   v1.13     v1.13     v1.13

3. Environment initialization

  1. Set the hostnames

hostnamectl set-hostname master   # on 192.168.1.11
hostnamectl set-hostname node     # on 192.168.1.12

  

  2. Configure /etc/hosts

127.0.0.1        localhost localhost.localdomain localhost4 localhost4.localdomain4
::1              localhost localhost.localdomain localhost6 localhost6.localdomain6


192.168.1.11     master
192.168.1.12     node
      

  3. Disable the firewall, SELinux, and swap

# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld


# Disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config


# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# Load the br_netfilter module
modprobe br_netfilter
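In the fstab edit, `&` in the sed replacement stands for the entire matched line, so swap entries are commented out rather than deleted. A quick demonstration on a throwaway copy (file contents are illustrative):

```shell
# "&" in the replacement is the whole matched line, so any line
# mentioning swap gets a "#" prefix instead of being deleted.
tmp=$(mktemp)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n/dev/vda1 / xfs defaults 0 0\n' > "$tmp"
sed -i 's/.*swap.*/#&/' "$tmp"
cat "$tmp"
```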

  4. Configure kernel parameters in /etc/sysctl.d/k8s.conf

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

EOF

# Apply the settings
sysctl -p /etc/sysctl.d/k8s.conf 

  

  5. Raise Linux resource limits: the ulimit max-open-files value and the limits for services managed by systemd

echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655360" >> /etc/security/limits.conf
echo "* hard nproc 655360" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf

  6. Configure domestic yum (Tencent), EPEL, and Kubernetes repositories

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo


wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo

# Note: wget must be installed in the VM
yum clean all && yum makecache

# Configure a domestic (Aliyun) Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  

  8. Install dependency packages

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion \
    yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

  9. Configure time synchronization (needed on all nodes)

yum install chrony -y
systemctl enable chronyd.service && systemctl start chronyd.service 
systemctl status chronyd.service
chronyc sources
# On physical hosts, point chrony at an internal time server or sync over the network

10. Configure SSH trust between nodes

With SSH trust in place, the nodes can reach each other without passwords, which makes later automated deployment easier.

ssh-keygen   # run on each node; press Enter to accept the defaults

ssh-copy-id node # run on master to copy its public key to the other node; answer yes and enter the password

  11. Initialization sanity checks

    - Reboot: after all of the steps above, it is best to reboot once

    - ping each node's hostname to check reachability
    - ssh to the other hostname to confirm passwordless access works
    - Run date on each node to confirm the time is correct
    - Run ulimit -Hn to confirm the max open files limit is 655360
    - cat /etc/sysconfig/selinux | grep disabled to confirm SELinux is disabled on every node
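The checks above can be gathered into a small script and run on each node. A minimal sketch, assuming the hostnames master and node from the table at the top; adjust to your environment:

```shell
# Pre-flight check sketch: reachability, clock, limits, SELinux state
for h in master node; do
    if ping -c 1 -W 1 "$h" >/dev/null 2>&1; then
        echo "$h: reachable"
    else
        echo "$h: UNREACHABLE"
    fi
done
date                      # clock sanity check
ulimit -Hn                # should print 655360 after a reboot
grep -s '^SELINUX=' /etc/selinux/config || echo "no SELinux config found"
```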

Install Docker (on all nodes)

 1. Set up the Docker yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  2. Install Docker

# List available Docker versions
yum list docker-ce --showduplicates | sort -r

# Install Docker, pinning version 18.06.1
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl restart docker 

# Configure a registry mirror and the Docker data directory
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://q2hy3fzi.mirror.aliyuncs.com"],
  "graph": "/tol/docker-data"
}
EOF

  

  3. Start Docker

systemctl daemon-reload 
systemctl restart docker
systemctl enable docker
systemctl status docker

# docker --version

  Install kubeadm, kubelet, and kubectl (on all nodes)

  - kubeadm: the command used to bootstrap the cluster
  - kubelet: the agent that runs on every machine in the cluster and manages the lifecycle of pods and containers
  - kubectl: the cluster management CLI

  Install the tools

# Pin the versions to match the v1.13 target of this guide
yum install -y kubelet-1.13.0 kubeadm-1.13.0 kubectl-1.13.0 --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet

Prepare the images

  1. Get the list of images to download

# List the images the cluster components need
kubeadm config images list


# Generate a default kubeadm.conf file
kubeadm config print init-defaults > kubeadm.conf

  

  2. Bypass the firewall: pull the images from a domestic mirror

sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" kubeadm.conf

  

  3. Pin the Kubernetes version kubeadm installs

sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.13.0/g" kubeadm.conf
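The two sed edits above can be sanity-checked on a throwaway file before touching the real kubeadm.conf. A sketch (the file contents are illustrative):

```shell
# Verify the imageRepository and kubernetesVersion rewrites on a temp file
tmp=$(mktemp)
printf 'imageRepository: k8s.gcr.io\nkubernetesVersion: v1.13.1\n' > "$tmp"
sed -i "s/imageRepository: .*/imageRepository: registry.aliyuncs.com\/google_containers/g" "$tmp"
sed -i "s/kubernetesVersion: .*/kubernetesVersion: v1.13.0/g" "$tmp"
cat "$tmp"
```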

  

  4. Pull the required images

kubeadm config images pull --config kubeadm.conf

docker images

  

  5. Re-tag the images with docker tag

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

  

  6. Clean up the downloaded source tags with docker rmi

docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.0
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.0
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.0
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.13.0
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker rmi registry.aliyuncs.com/google_containers/etcd:3.2.24
docker rmi registry.aliyuncs.com/google_containers/coredns:1.2.6
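The per-image pull, tag, and rmi commands above can be generated with a loop instead of being written out by hand. A sketch that only prints the commands so they can be reviewed first (pipe the output to sh to execute); the image list matches `kubeadm config images list` for v1.13.0:

```shell
# Generate the pull/tag/rmi command sequence for all seven images
SRC=registry.aliyuncs.com/google_containers
DST=k8s.gcr.io
cmds=$(for img in kube-apiserver:v1.13.0 kube-controller-manager:v1.13.0 \
                  kube-scheduler:v1.13.0 kube-proxy:v1.13.0 \
                  pause:3.1 etcd:3.2.24 coredns:1.2.6; do
    echo "docker pull $SRC/$img"
    echo "docker tag $SRC/$img $DST/$img"
    echo "docker rmi $SRC/$img"
done)
echo "$cmds"
# echo "$cmds" | sh    # run the commands once they look right
```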

Deploy the master node

  1. Initialize the master with kubeadm init

# Define the pod network as 172.22.0.0/16; the API server address is the master's own IP
kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=172.22.0.0/16 --apiserver-advertise-address=192.168.1.11
ls /etc/kubernetes/

# If init fails, reset and retry:
# kubeadm reset
# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=172.22.0.0/16 --apiserver-advertise-address=192.168.1.11

# Record the join command printed at the end:
kubeadm join 192.168.1.11:6443 --token iazwtj.v3ajyq9kyqftg3et --discovery-token-ca-cert-hash sha256:27aaefd2afc4e75fd34c31365abd3a7357bb4bba7552056bb4a9695fcde14ef5
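For reference, the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's public key in DER form; on a real master it is computed from /etc/kubernetes/pki/ca.crt. The sketch below demonstrates the derivation on a throwaway self-signed certificate so it can be run anywhere:

```shell
# sha256(DER(CA public key)) is exactly what kubeadm prints after "sha256:".
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$d/ca.key" \
    -out "$d/ca.crt" -days 1 -subj "/CN=demo-ca" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$d/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
# If the join token has expired (24h TTL by default), print a fresh
# join command on the master with: kubeadm token create --print-join-command
```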


    

  2. Verify

# Configure kubectl
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config

# List pods in all namespaces and check their status
kubectl get pods --all-namespaces

# Check the health of the cluster components
kubectl get cs 

Deploy the Calico network

  1. Pull the official Calico images

docker pull calico/node:v3.1.4
docker pull calico/cni:v3.1.4
docker pull calico/typha:v3.1.4

  2. Tag the three Calico images

docker tag calico/node:v3.1.4 quay.io/calico/node:v3.1.4
docker tag calico/cni:v3.1.4 quay.io/calico/cni:v3.1.4
docker tag calico/typha:v3.1.4 quay.io/calico/typha:v3.1.4

  3. Remove the original tags

docker rmi calico/node:v3.1.4
docker rmi calico/cni:v3.1.4
docker rmi calico/typha:v3.1.4

  4. Download and customize the Calico manifests

curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -O

kubectl apply -f rbac-kdd.yaml

curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/policy-only/1.7/calico.yaml -O

# In the ConfigMap, change typha_service_name from "none" to "calico-typha"
sed -i 's/typha_service_name: "none"/typha_service_name: "calico-typha"/g' calico.yaml

# In the Deployment spec, set replicas to 1
sed -i 's/replicas: 0/replicas: 1/g' calico.yaml

# Change CALICO_IPV4POOL_CIDR to the pod network defined earlier (172.22.0.0/16 here)
sed -i 's/192.168.0.0/172.22.0.0/g' calico.yaml

# Set CALICO_NETWORKING_BACKEND to "bird", the BGP networking backend
sed -i '/name: CALICO_NETWORKING_BACKEND/{n;s/value: "none"/value: "bird"/;}' calico.yaml
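The last sed relies on the n command: the address matches the line naming the variable, n then loads the following "value:" line, and s rewrites it. A quick demonstration on a two-line sample:

```shell
# /pattern/{n;s/.../.../} : match the NAME line, step to the next line,
# substitute there. Shown on an inline two-line sample.
printf '  - name: CALICO_NETWORKING_BACKEND\n    value: "none"\n' \
    | sed '/name: CALICO_NETWORKING_BACKEND/{n;s/value: "none"/value: "bird"/;}'
```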



  5. Apply calico.yaml

kubectl apply -f calico.yaml
kubectl get pods --all-namespaces

# (Flannel is an alternative CNI; do not apply it alongside Calico:
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml)

Deploy the worker node

  1. Pull the images

  

docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.13.0
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker pull calico/node:v3.1.4
docker pull calico/cni:v3.1.4
docker pull calico/typha:v3.1.4
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag calico/node:v3.1.4 quay.io/calico/node:v3.1.4
docker tag calico/cni:v3.1.4 quay.io/calico/cni:v3.1.4
docker tag calico/typha:v3.1.4 quay.io/calico/typha:v3.1.4
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.13.0
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker rmi calico/node:v3.1.4
docker rmi calico/cni:v3.1.4
docker rmi calico/typha:v3.1.4

  2. Join the node to the cluster

kubeadm join 192.168.1.11:6443 --token iazwtj.v3ajyq9kyqftg3et --discovery-token-ca-cert-hash sha256:27aaefd2afc4e75fd34c31365abd3a7357bb4bba7552056bb4a9695fcde14ef5

  3. Check on the master

kubectl get nodes   # the new node should appear and eventually become Ready

Deploy the dashboard

   1. Generate a private key and a certificate signing request

mkdir -p /etc/kubernetes/certs
cd /etc/kubernetes/certs
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
# Remove the dashboard.pass.key generated above
rm -rf dashboard.pass.key

# Create the CSR (answer the prompts, or pass -subj to skip them)
openssl req -new -key dashboard.key -out dashboard.csr

# Generate a self-signed SSL certificate
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
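The same key/CSR/certificate pipeline can be run non-interactively by passing -subj, which is handy for scripting. A sketch using a temporary directory; the CN value is illustrative:

```shell
# Non-interactive variant of the steps above: -subj skips the CSR prompts
d=$(mktemp -d)
openssl genrsa -out "$d/dashboard.key" 2048 2>/dev/null
openssl req -new -key "$d/dashboard.key" -out "$d/dashboard.csr" \
    -subj "/CN=kubernetes-dashboard"
openssl x509 -req -sha256 -days 365 -in "$d/dashboard.csr" \
    -signkey "$d/dashboard.key" -out "$d/dashboard.crt" 2>/dev/null
openssl x509 -noout -subject -in "$d/dashboard.crt"
```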

  

  2. Create the secret

kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/certs -n kube-system

  

  3. Pull and tag the dashboard image (on all nodes)

docker pull registry.cn-hangzhou.aliyuncs.com/kubernete/kubernetes-dashboard-amd64:v1.10.0

docker tag registry.cn-hangzhou.aliyuncs.com/kubernete/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard:v1.10.0

docker rmi registry.cn-hangzhou.aliyuncs.com/kubernete/kubernetes-dashboard-amd64:v1.10.0

  

  4. Download the kubernetes-dashboard.yaml manifest (run on the master)


 

  5. Create the dashboard pod

kubectl create -f kubernetes-dashboard.yaml

  6. Check the running state

kubectl get deployment kubernetes-dashboard -n kube-system
kubectl --namespace kube-system get pods -o wide
kubectl get services kubernetes-dashboard -n kube-system
netstat -ntlp | grep 30005

  

  7. Dashboard permissions fix

  Save the ClusterRoleBinding below as kube-dashboard-access.yaml, then apply it:

  kubectl create -f kube-dashboard-access.yaml

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
