
Deploying a highly available kubernetes v1.17.3 cluster with kubeadm and an external etcd

Reposted from: https://mp.weixin.qq.com/s?__biz=MzI1MDgwNzQ1MQ==&mid=2247483891&idx=1&sn=17dcd7cd0645df509c8e49059a2f00d7&chksm=e9fdd407de8a5d119d439b70dc2c381ec2eceddb63ed43767c2e1b7cffefe077e41955568cb5&cur_album_id=1341273083637989377&scene=189#wechat_redirect

Environment preparation

Architecture diagram

IP address plan
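The addresses used throughout this article (they also appear later as the ip2 host list on the ops machine) are:

172.17.173.15 etcd01
172.17.173.16 etcd02
172.17.173.17 etcd03
172.17.173.18 node02
172.17.173.19 master01
172.17.173.20 master02
172.17.173.21 master03
172.17.173.22 node01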

Alibaba Cloud servers with the CentOS 7.7 image are used; the stock kernel version is 3.10.0-1062.9.1.el7.x86_64.

Note: Alibaba Cloud servers cannot use a floating VIP, so a three-node VIP with keepalived + Nginx is not an option here; instead, the kubeadm init configuration file points directly at the master01 node's IP.

If your environment does support a VIP, refer to: Part 5: Installing keepalived and Nginx.
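For reference, with a VIP the only structural change to the kubeadm configuration shown later would be to point controlPlaneEndpoint at the VIP and its load-balanced port instead of master01's address; a minimal sketch, assuming a hypothetical VIP of 172.17.173.100 fronted by keepalived + Nginx on port 16443:

controlPlaneEndpoint: "172.17.173.100:16443"   # instead of "172.17.173.19:6443"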

Server initialization

Initialize the servers; only the master and node machines need this step. The script below has been verified on the Alibaba Cloud VMs used here.

It mainly does the following: installs some common dependency packages, disables IPv6, stops the default network management service, enables time synchronization, loads the ipvs kernel modules, tunes kernel parameters, disables swap, and turns off the firewall; you may also want to set the hostname here.

#!/bin/bash

# 1. install common tools; these packages are not strictly required.
source /etc/profile
yum -y install chrony bridge-utils ipvsadm ipset sysstat conntrack libseccomp wget tcpdump screen vim nfs-utils bind-utils socat telnet sshpass net-tools lrzsz yum-utils device-mapper-persistent-data lvm2 tree nc lsof strace nmon iptraf iftop rpcbind mlocate

# 2. disable IPv6
if [ $(cat /etc/default/grub |grep 'ipv6.disable=1' |grep GRUB_CMDLINE_LINUX|wc -l) -eq 0 ];then
    sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="ipv6.disable=1 /' /etc/default/grub
    /usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
fi

# 3. disable NetworkManager
systemctl stop NetworkManager
systemctl disable NetworkManager

# 4. enable time synchronization with chronyd
systemctl enable chronyd.service
systemctl start chronyd.service
# 5. load the bridge and ipvs kernel modules; note: you may need to run '/usr/sbin/modprobe br_netfilter' again after a reboot.
cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF

cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
#!/bin/bash
modprobe br_netfilter
EOF

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

# 6. add route forwarding and other kernel parameters
[ $(cat /etc/sysctl.conf | grep "net.ipv4.ip_forward=1" |wc -l) -eq 0 ] && echo "net.ipv4.ip_forward=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "net.bridge.bridge-nf-call-iptables=1" |wc -l) -eq 0 ] && echo "net.bridge.bridge-nf-call-iptables=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "net.bridge.bridge-nf-call-ip6tables=1" |wc -l) -eq 0 ] && echo "net.bridge.bridge-nf-call-ip6tables=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.may_detach_mounts=1" |wc -l) -eq 0 ] && echo "fs.may_detach_mounts=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "vm.overcommit_memory=1" |wc -l) -eq 0 ] && echo "vm.overcommit_memory=1" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "vm.panic_on_oom=0" |wc -l) -eq 0 ] && echo "vm.panic_on_oom=0" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "vm.swappiness=0" |wc -l) -eq 0 ] && echo "vm.swappiness=0" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.inotify.max_user_watches=89100" |wc -l) -eq 0 ] && echo "fs.inotify.max_user_watches=89100" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.file-max=52706963" |wc -l) -eq 0 ] && echo "fs.file-max=52706963" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "fs.nr_open=52706963" |wc -l) -eq 0 ] && echo "fs.nr_open=52706963" >>/etc/sysctl.conf
[ $(cat /etc/sysctl.conf | grep "net.netfilter.nf_conntrack_max=2310720" |wc -l) -eq 0 ] && echo "net.netfilter.nf_conntrack_max=2310720" >>/etc/sysctl.conf
/usr/sbin/sysctl -p


# 7. modify limits file
[ $(cat /etc/security/limits.conf|grep '* soft nproc 10240000'|wc -l) -eq 0 ]&&echo '* soft nproc 10240000' >>/etc/security/limits.conf
[ $(cat /etc/security/limits.conf|grep '* hard nproc 10240000'|wc -l) -eq 0 ]&&echo '* hard nproc 10240000' >>/etc/security/limits.conf
[ $(cat /etc/security/limits.conf|grep '* soft nofile 10240000'|wc -l) -eq 0 ]&&echo '* soft nofile 10240000' >>/etc/security/limits.conf
[ $(cat /etc/security/limits.conf|grep '* hard nofile 10240000'|wc -l) -eq 0 ]&&echo '* hard nofile 10240000' >>/etc/security/limits.conf

# 8. disable selinux
sed -i '/SELINUX=/s/enforcing/disabled/' /etc/selinux/config

# 9. disable the swap partition
/usr/sbin/swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

# 10. disable firewalld
systemctl stop firewalld
systemctl disable firewalld

# 11. reset iptables
yum install -y iptables-services
/usr/sbin/iptables -P FORWARD ACCEPT
/usr/sbin/iptables -X
/usr/sbin/iptables -F -t nat
/usr/sbin/iptables -X -t nat

reboot
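After the servers come back up it is worth spot-checking what the script did; a minimal verification sketch (run as root on each node):

# modules loaded via /etc/sysconfig/modules/*.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4 -e br_netfilter

# bridge traffic handed to iptables, and IP forwarding enabled
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward

# swap should be empty and SELinux disabled
swapon -s
getenforce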

Install etcd

Create the CA root certificate

Because TLS authentication is enabled, a CA certificate and private key are needed for etcd access. For the principles behind certificate issuance, see: Part 3: PKI basics, the cfssl tools, and certificates in kubernetes.

#!/bin/bash

# 1. download cfssl related files.
while true;
do
        echo "Download cfssl, please wait a monment." &&\
        curl -L -C - -O https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && \
        curl -L -C - -O https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && \
        curl -L -C - -O https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
        if [ $? -eq 0 ];then
                echo "cfssl download success."
                break
        else
                echo "cfssl download failed."
                break
        fi
done

# 2. Create a binary directory to store kubernetes related files.
if [ ! -d /usr/kubernetes/bin/ ];then
        mkdir -p /usr/kubernetes/bin/
fi

# 3. copy the cfssl binaries into the directory created above.
mv cfssl_linux-amd64 /usr/kubernetes/bin/cfssl
mv cfssljson_linux-amd64 /usr/kubernetes/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/kubernetes/bin/cfssl-certinfo
chmod +x /usr/kubernetes/bin/{cfssl,cfssljson,cfssl-certinfo}

# 4. add environment variables
[ $(cat /etc/profile|grep 'PATH=/usr/kubernetes/bin'|wc -l ) -eq 0 ] && echo 'PATH=/usr/kubernetes/bin:$PATH' >>/etc/profile && source /etc/profile || source /etc/profile

# 5. create a CA certificate directory and access this directory
CA_SSL=/etc/kubernetes/ssl/ca
[ ! -d ${CA_SSL} ] && mkdir -p ${CA_SSL}
cd $CA_SSL

## cfssl print-defaults config > config.json
## cfssl print-defaults csr > csr.json
# the two commands above can generate template files, but they are not used here

# Multiple profiles can be defined, each with its own expiry time, usage scenarios and other parameters; a specific profile is selected later when signing certificates;
# signing: this certificate can be used to sign other certificates; CA=TRUE is set in the generated ca.pem;
# server auth: a client may use this CA to verify the certificate presented by a server;
# client auth: a server may use this CA to verify the certificate presented by a client.
cat > ${CA_SSL}/ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

# CN: Common Name. kube-apiserver extracts this field from a certificate as the requesting user name (User Name); browsers use it to check whether a site is legitimate;
# O: Organization. kube-apiserver extracts this field as the group (Group) the requesting user belongs to.

cat > ${CA_SSL}/ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
           "OU": "System"
        }
    ]
}
EOF

# 6. generate ca.pem, ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

[ $? -eq 0 ] && echo "CA certificate and private key generated successfully." || echo "CA certificate and private key generation failed."
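At this point ca.pem, ca-key.pem and ca.csr should exist under /etc/kubernetes/ssl/ca/. The certificate can be inspected with the cfssl-certinfo binary installed earlier, for example:

cd /etc/kubernetes/ssl/ca
cfssl-certinfo -cert ca.pem
# the subject should show "CN": "etcd CA" and the organization fields from ca-csr.json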

Issue the etcd certificate and key with the private CA

#!/bin/bash

# 1. create the csr file.
source /etc/profile

ETCD_SSL="/etc/kubernetes/ssl/etcd/"

[ ! -d ${ETCD_SSL} ] && mkdir ${ETCD_SSL}
cat >$ETCD_SSL/etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "172.17.173.15",
    "172.17.173.16",
    "172.17.173.17"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
           "O": "k8s",
           "OU": "System"
        }
    ]
}
EOF

# 2. Check that the required CA files exist.
[ ! -f /etc/kubernetes/ssl/ca/ca.pem ] && echo "no ca.pem file." && exit 1
[ ! -f /etc/kubernetes/ssl/ca/ca-key.pem ] && echo "no ca-key.pem file" && exit 1
[ ! -f /etc/kubernetes/ssl/ca/ca-config.json ] && echo "no ca-config.json file" && exit 1

# 3. generate the etcd certificate and private key.
cd $ETCD_SSL
cfssl gencert -ca=/etc/kubernetes/ssl/ca/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

[ $? -eq 0 ] && echo "Etcd certificate and private key generated successfully." || echo "Etcd certificate and private key generation failed."

Copy the private CA together with the etcd certificate and key to the etcd servers;

Note: the ssh-copy-id step is omitted here.
[root@ops ~]# cat ip2
172.17.173.15 etcd01
172.17.173.16 etcd02
172.17.173.17 etcd03
172.17.173.18 node02
172.17.173.19 master01
172.17.173.20 master02
172.17.173.21 master03
172.17.173.22 node01
[root@ops ~]#
[root@ops ~]# for i in `cat ip2|grep etcd|gawk '{print $1}'`
> do
> scp -r /etc/kubernetes $i:/etc/
> done
[root@ops ~]#

Install and start etcd

Run this script on each of the three etcd machines; it carries out the whole etcd installation. Note that the etcd release tarball can be downloaded in advance, since downloads from GitHub are slow from mainland China, as everyone knows.

[root@ops ~]# cat 3.sh
#!/bin/bash

# 1. env info
source /etc/profile
declare -A dict

dict=(['etcd01']=172.17.173.15 ['etcd02']=172.17.173.16 ['etcd03']=172.17.173.17)
#IP=`ip a |grep inet|grep -v 127.0.0.1|grep -v 172.17|gawk -F/ '{print $1}'|gawk '{print $NF}'`
IP=`ip a |grep inet|grep -v 127.0.0.1|gawk -F/ '{print $1}'|gawk '{print $NF}'`

for key in $(echo ${!dict[*]})
do
    if [[ "$IP" == "${dict[$key]}" ]];then
        LOCALIP=$IP
        LOCAL_ETCD_NAME=$key
    fi
done

if [[ "$LOCALIP" == "" || "$LOCAL_ETCD_NAME" == "" ]];then
    echo "Get localhost IP failed." && exit 1
fi

# 2. download the etcd release tarball and extract it.
CURRENT_DIR=`pwd`
cd $CURRENT_DIR
curl -L -C - -O https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
#( [ $? -eq 0 ] && echo "etcd source code download success." ) || ( echo "etcd source code download failed." && exit 1 )

/usr/bin/tar -zxf etcd-v3.3.18-linux-amd64.tar.gz
cp etcd-v3.3.18-linux-amd64/etc* /usr/local/bin/
#rm -rf etcd-v3.3.18-linux-amd64*

# 3. deploy etcd config and enable etcd.service.

ETCD_SSL="/etc/kubernetes/ssl/etcd/"
ETCD_CONF=/etc/etcd/etcd.conf
ETCD_SERVICE=/usr/lib/systemd/system/etcd.service

[ ! -d /data/etcd/ ] && mkdir -p /data/etcd/
[ ! -d /etc/etcd/ ] && mkdir -p /etc/etcd/

# 3.1 create /etc/etcd/etcd.conf configure file.
cat > $ETCD_CONF << EOF
#[Member]
ETCD_NAME="${LOCAL_ETCD_NAME}"
ETCD_DATA_DIR="/data/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${LOCALIP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${LOCALIP}:2379"
ETCD_LISTEN_CLIENT_URLS2="http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${LOCALIP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${LOCALIP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${dict['etcd01']}:2380,etcd02=https://${dict['etcd02']}:2380,etcd03=https://${dict['etcd03']}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# 3.2 create etcd.service
cat>$ETCD_SERVICE<<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target


[Service]
Type=notify
EnvironmentFile=$ETCD_CONF
ExecStart=/usr/local/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},\${ETCD_LISTEN_CLIENT_URLS2} \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# 4. enable etcd.service and start
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service
[root@ops ~]#

Verify the etcd installation

#!/bin/bash
declare -A dict
dict=(['etcd01']=172.17.173.15 ['etcd02']=172.17.173.16 ['etcd03']=172.17.173.17)

cd /usr/local/bin
etcdctl --ca-file=/etc/kubernetes/ssl/ca/ca.pem \
--cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
--endpoints="https://${dict['etcd01']}:2379,https://${dict['etcd02']}:2379,https://${dict['etcd03']}:2379" cluster-health

etcdctl --ca-file=/etc/kubernetes/ssl/ca/ca.pem \
--cert-file=/etc/kubernetes/ssl/etcd/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd/etcd-key.pem \
--endpoints="https://${dict['etcd01']}:2379,https://${dict['etcd02']}:2379,https://${dict['etcd03']}:2379" member list

The output looks like this:
member 1ad1e168a6f672a1 is healthy: got healthy result from https://172.17.173.16:2379
member 68b047a9be8ab72e is healthy: got healthy result from https://172.17.173.15:2379
member 85e6e69d2915ec95 is healthy: got healthy result from https://172.17.173.17:2379
cluster is healthy
1ad1e168a6f672a1: name=etcd02 peerURLs=https://172.17.173.16:2380 clientURLs=https://172.17.173.16:2379 isLeader=false
68b047a9be8ab72e: name=etcd01 peerURLs=https://172.17.173.15:2380 clientURLs=https://172.17.173.15:2379 isLeader=true
85e6e69d2915ec95: name=etcd03 peerURLs=https://172.17.173.17:2380 clientURLs=https://172.17.173.17:2379 isLeader=false

At this point the etcd installation is complete. For details on the configuration parameters, see the earlier article: Part 4: Highly available deployment of the etcd storage component.
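One more check may be useful: the etcdctl commands above use the v2 API, while kubernetes 1.17 stores its data through the v3 API. A hedged health check with etcdctl 3.3 against v3 (note that the TLS flag names differ from the v2 ones):

ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --cacert=/etc/kubernetes/ssl/ca/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd/etcd-key.pem \
  --endpoints="https://172.17.173.15:2379,https://172.17.173.16:2379,https://172.17.173.17:2379" \
  endpoint health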

Install the Docker engine on all nodes

Install the Docker runtime

[root@ops ~]# for i in `cat ip2|grep -v etcd|gawk '{print $1}'`
> do
> ssh $i "yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo &&yum makecache && yum -y install docker-ce"
> done
[root@ops ~]#

Start the runtime on all nodes

systemctl daemon-reload
systemctl enable docker.service
systemctl start docker.service
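The kubeadm preflight checks later warn that Docker uses the cgroupfs cgroup driver while systemd is recommended. The cluster works either way in this walkthrough, but if you want to switch, a possible /etc/docker/daemon.json (an addition not in the original article; apply it before kubeadm init/join so kubeadm can pick up the matching kubelet setting) is:

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker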

Initialize master01

Configure the package repository

Distribute the repo file to all nodes so that kubeadm, kubelet, kubectl and so on can be installed from the Aliyun mirror; the distribution step is omitted here.

[root@ops ~]# cat kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@ops ~]#

Install the packages

# install three components on the master nodes
yum -y install kubelet kubeadm kubectl

# install two components on the worker nodes
yum -y install kubelet kubeadm

Configure kubelet and enable it at boot

There is no need to start it now; it is started automatically when kubeadm initializes the cluster or a node joins.

# edit this configuration file on all nodes (swap related)
cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

# enable at boot
systemctl enable kubelet.service

Create the init configuration file

Generate the default init configuration file and modify it from there:

# print the default configuration
kubeadm config print init-defaults

# the defaults can also be printed per component
kubeadm config print init-defaults --component-configs KubeProxyConfiguration

Our configuration file is shown below. It uses an external etcd; also note that the pod subnet and the kube-proxy mode are customized. Copy this file to master01 and run the init there, and note that the etcd certificates must also be copied to the master node (a sketch of that copy step follows);
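A hedged sketch of that certificate copy, assuming the same paths used on the ops machine (the other masters need the same files before they join the control plane):

ssh 172.17.173.19 "mkdir -p /etc/kubernetes/ssl"
scp -r /etc/kubernetes/ssl/ca /etc/kubernetes/ssl/etcd 172.17.173.19:/etc/kubernetes/ssl/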

[root@ops ~]# cat kube-adm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.173.19
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: "172.17.173.19:6443"
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://172.17.173.15:2379
    - https://172.17.173.16:2379
    - https://172.17.173.17:2379
    caFile: /etc/kubernetes/ssl/ca/ca.pem
    certFile: /etc/kubernetes/ssl/etcd/etcd.pem
    keyFile: /etc/kubernetes/ssl/etcd/etcd-key.pem
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: "192.168.224.0/24"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
[root@ops ~]# mv kube-adm.yaml kubeadm-config.yaml
[root@ops ~]#

Run the initialization

Note: image pulls are very slow from mainland China; you will need to work around this yourself.

[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml

.......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78
[root@master01 ~]#
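The bootstrap token in the config has ttl: 24h0m0s; if you join nodes later than that, a fresh join command can be printed on master01 with the standard kubeadm command:

kubeadm token create --print-join-command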

Create the kubeconfig file

[root@master01 ~]# mkdir .kube
[root@master01 ~]# cd .kube/
[root@master01 .kube]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 .kube]# ls
config
[root@master01 .kube]# kubectl get node
NAME            STATUS     ROLES    AGE     VERSION
k8s-master-01   NotReady   master   2m12s   v1.17.3
[root@master01 .kube]#

Copy certificates

The certificates generated on master01 need to be copied to the other master nodes:

#!/bin/bash
ssh 172.17.173.20 "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/ca.* 172.17.173.20:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* 172.17.173.20:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* 172.17.173.20:/etc/kubernetes/pki/
scp /etc/kubernetes/admin.conf 172.17.173.20:/etc/kubernetes/

ssh 172.17.173.21 "mkdir -p /etc/kubernetes/pki"
scp /etc/kubernetes/pki/ca.* 172.17.173.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* 172.17.173.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* 172.17.173.21:/etc/kubernetes/pki/
scp /etc/kubernetes/admin.conf 172.17.173.21:/etc/kubernetes/

Join the other masters to the control plane

Run the following on both master02 and master03. Note that this command was generated by master01's init output; do not simply copy and paste the one shown here.

[root@master02 ~]# kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78 --control-plane --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
  [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at

......

[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] using external etcd - no local stacked instance added
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master02 ~]#

Join the worker nodes to the cluster

Do the same on node01 and node02; again, the command was generated by master01's init output, so do not simply copy and paste the one shown here.

[root@node02 ~]# kubeadm join 172.17.173.19:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:821462688751102d95bba01f74b5d6ae5c8a50b5a918f03903905fe05027ef78 --ignore-preflight-errors=Swap
W0220 18:12:44.122557   12912 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
  [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node02 ~]#

Verify on master01

At this point none of the nodes are Ready because no CNI network plugin has been deployed yet, and coredns is still Pending. This is expected; once the CNI plugin is deployed they change to Ready/Running automatically.

[root@master01 ~]# kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
k8s-master-01   NotReady   master   27m     v1.17.3
master02        NotReady   master   6m15s   v1.17.3
master03        NotReady   master   10s     v1.17.3
node01          NotReady   <none>   57s     v1.17.3
node02          NotReady   <none>   3m12s   v1.17.3
[root@master01 ~]# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-6955765f44-9lwgk                 0/1     Pending   0          27m
coredns-6955765f44-rhsps                 0/1     Pending   0          27m
kube-apiserver-k8s-master-01             1/1     Running   0          27m
kube-apiserver-master02                  1/1     Running   0          6m23s
kube-apiserver-master03                  1/1     Running   0          17s
kube-controller-manager-k8s-master-01    1/1     Running   0          27m
kube-controller-manager-master02         1/1     Running   0          6m23s
kube-controller-manager-master03         1/1     Running   0          18s
kube-proxy-2hlgz                         1/1     Running   0          6m23s
kube-proxy-8tptz                         1/1     Running   0          3m20s
kube-proxy-cj55k                         1/1     Running   0          18s
kube-proxy-f2lfv                         1/1     Running   0          27m
kube-proxy-jg4sp                         1/1     Running   0          65s
kube-scheduler-k8s-master-01             1/1     Running   0          27m
kube-scheduler-master02                  1/1     Running   0          6m23s
kube-scheduler-master03                  1/1     Running   0          17s
[root@master01 ~]#

Deploy the calico CNI plugin

The calico network plugin is used here; the download link is below. The images referenced in this yaml are also slow to pull from mainland China, so you will need to work around that yourself.

wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml

You can change CALICO_IPV4POOL_CIDR to match your custom pod subnet (the default is 192.168.0.0/16). Calico can also be backed by etcd, in which case the etcd cluster created earlier could be reused. No special modifications were made here, and since the file is long its contents are not pasted in full.
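If you do want to align the pool with the podSubnet from kubeadm-config.yaml, a hedged way to do it (assuming CALICO_IPV4POOL_CIDR is present with its default value in this version of the manifest) is:

sed -i 's#192.168.0.0/16#192.168.224.0/24#g' calico.yaml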

[root@master01 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@master01 ~]#

Verify basic cluster functionality

Cluster status

[root@master01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   37m   v1.17.3
master02        Ready    master   16m   v1.17.3
master03        Ready    master   10m   v1.17.3
node01          Ready    <none>   11m   v1.17.3
node02          Ready    <none>   13m   v1.17.3
[root@master01 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5b644bc49c-pvz58   1/1     Running   0          3m5s
calico-node-9bg8w                          1/1     Running   0          3m5s
calico-node-d2xnr                          1/1     Running   0          3m5s
calico-node-fjn7x                          1/1     Running   0          3m5s
calico-node-gs7zt                          1/1     Running   0          3m5s
calico-node-pt46g                          1/1     Running   0          3m5s
coredns-6955765f44-9lwgk                   1/1     Running   0          37m
coredns-6955765f44-rhsps                   1/1     Running   0          37m
kube-apiserver-k8s-master-01               1/1     Running   0          37m
kube-apiserver-master02                    1/1     Running   0          16m
kube-apiserver-master03                    1/1     Running   0          10m
kube-controller-manager-k8s-master-01      1/1     Running   0          37m
kube-controller-manager-master02           1/1     Running   0          16m
kube-controller-manager-master03           1/1     Running   0          10m
kube-proxy-2hlgz                           1/1     Running   0          16m
kube-proxy-8tptz                           1/1     Running   0          13m
kube-proxy-cj55k                           1/1     Running   0          10m
kube-proxy-f2lfv                           1/1     Running   0          37m
kube-proxy-jg4sp                           1/1     Running   0          11m
kube-scheduler-k8s-master-01               1/1     Running   0          37m
kube-scheduler-master02                    1/1     Running   0          16m
kube-scheduler-master03                    1/1     Running   0          10m
[root@master01 ~]#

Create a demo

[root@master01 ~]# cat demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-deployment-nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: default-deployment-nginx
  template:
    metadata:
      labels:
        run: default-deployment-nginx
    spec:
      containers:
      - name: default-deployment-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-svc-nginx
  namespace: default
spec:
  selector:
    run: default-deployment-nginx
  type: ClusterIP
  ports:
    - name: nginx-port
      port: 80
      targetPort: 80
[root@master01 ~]#
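The manifest is then applied on master01 (the apply step itself is implied in the original session):

kubectl apply -f demo.yaml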

Access the service

[root@master01 ~]# kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
default-deployment-nginx-54bbbcf9f5-4rq7f   1/1     Running   0          18m
[root@master01 ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default-svc-nginx   ClusterIP   10.96.15.216   <none>        80/TCP    18m
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP   59m
[root@master01 ~]# ping 10.96.15.216
PING 10.96.15.216 (10.96.15.216) 56(84) bytes of data.
64 bytes from 10.96.15.216: icmp_seq=1 ttl=64 time=0.136 ms
64 bytes from 10.96.15.216: icmp_seq=2 ttl=64 time=0.064 ms
^C
--- 10.96.15.216 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.064/0.100/0.136/0.036 ms
[root@master01 ~]# curl 10.96.15.216
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]#

Create another pod to verify DNS

[root@master01 ~]# kubectl get pods -n test
NAME                                     READY   STATUS    RESTARTS   AGE
test3-deployment-nginx-8ddffb97b-w576p   1/1     Running   0          22m
[root@master01 ~]# kubectl get svc -n test
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
test3-svc-nginx   ClusterIP   10.96.77.86   <none>        80/TCP    22m
[root@master01 ~]# kubectl exec -it test3-deployment-nginx-8ddffb97b-w576p -n test /bin/bash
[root@test3-deployment-nginx-8ddffb97b-w576p /]#
[root@test3-deployment-nginx-8ddffb97b-w576p /]# curl test3-svc-nginx
AAAAAAAAAAAAAAAAA[root@test3-deployment-nginx-8ddffb97b-w576p /]#
[root@test3-deployment-nginx-8ddffb97b-w576p /]# ping default-svc-nginx.default
PING default-svc-nginx.default.svc.cluster.local (10.96.15.216) 56(84) bytes of data.
64 bytes from default-svc-nginx.default.svc.cluster.local (10.96.15.216): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from default-svc-nginx.default.svc.cluster.local (10.96.15.216): icmp_seq=2 ttl=64 time=0.082 ms
^C
--- default-svc-nginx.default.svc.cluster.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.040/0.061/0.082/0.021 ms
[root@test3-deployment-nginx-8ddffb97b-w576p /]# curl default-svc-nginx
curl: (6) Could not resolve host: default-svc-nginx
[root@test3-deployment-nginx-8ddffb97b-w576p /]# curl default-svc-nginx.default
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@test3-deployment-nginx-8ddffb97b-w576p /]#

As shown above, a pod can not only resolve service names in its own namespace, it can also reach services in other namespaces via the serviceName.NAMESPACE form.

Summary

Installing kubernetes v1.17.3 with kubeadm is quite straightforward. One key point to watch: for a high availability cluster, the configuration file passed to kubeadm init should set controlPlaneEndpoint: "172.17.173.19:6443"; only then does the init output include the kubeadm join command with the --control-plane parameter for adding master nodes, in addition to the command for joining worker nodes. Another point to note: after master01 finishes initializing, the certificates and keys under pki must be copied to the other master nodes before running kubeadm join there, otherwise the join fails.