Detailed Deployment of a Kubernetes HA Cluster
Environment: server information and node roles
OS: CentOS Linux release 7.3.1611 (Core)
Hostname   IP address       Role
lc13       192.168.56.169   master and etcd
lc14       192.168.56.170   master and etcd
lc15       192.168.56.171   master and etcd
lc16       192.168.56.172   node
VIP        192.168.56.174
Software versions:
docker 17.03.2-ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.10.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.10.0-0.x86_64
kubeadm-1.10.0-0.x86_64
1. Environment Initialization
1. Set the hostname on each of the four hosts (run the matching command on each machine):
hostnamectl set-hostname lc13
hostnamectl set-hostname lc14
hostnamectl set-hostname lc15
hostnamectl set-hostname lc16
2. Configure host name mappings on all four hosts (run on each machine):
cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.169 lc13
192.168.56.170 lc14
192.168.56.171 lc15
192.168.56.172 lc16
EOF
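A quick way to confirm the mappings took effect on every host (a minimal check against the /etc/hosts entries above; each line should print the expected IP):
for h in lc13 lc14 lc15 lc16; do getent hosts "$h"; done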
3. Set up passwordless SSH login among lc13, lc14, lc15, and lc16:
ssh-keygen   # just press Enter at every prompt
ssh-copy-id lc14   # you will be asked to confirm the host key
Are you sure you want to continue connecting (yes/no)? yes
ssh-copy-id lc15
Are you sure you want to continue connecting (yes/no)? yes
root@lc15's password:   # enter the root password of lc15
ssh-copy-id lc16
Are you sure you want to continue connecting (yes/no)? yes
root@lc16's password:   # enter the root password of lc16
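If you prefer not to run the copies one host at a time, they can be scripted; this is a sketch (it still prompts for each root password unless sshpass or an agent is in place):
for h in lc14 lc15 lc16; do
  ssh-copy-id -o StrictHostKeyChecking=no root@"$h"   # accept the host key automatically
done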
4. On all four hosts: stop the firewall, disable swap, disable SELinux, set kernel parameters, add the Kubernetes yum repo, install dependencies, and configure NTP.
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab   # comment out the swap entry so swap stays off after reboot
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge
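To double-check that the bridge parameters actually loaded (both should print 1):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables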
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service
A local NTP server is used here instead, so set:
crontab -e
*/30 * * * * /usr/sbin/ntpdate 172.16.200.2
hwclock -w
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
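limits.conf only applies to new login sessions, so verify from a fresh shell; a minimal check:
ulimit -n   # expect 65536
ulimit -u   # expect 65536
ulimit -l   # expect unlimited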
2. Install and Configure keepalived (on the three master nodes)
1. Install keepalived
yum install -y keepalived
systemctl enable keepalived
keepalived.conf on lc13:
[root@lc13 ~]# cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
  router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
  script "curl -k https://192.168.56.174:6443"
  interval 3
  timeout 9
  fall 2
  rise 2
}
vrrp_instance VI_1 {
  state MASTER
  interface ens33
  virtual_router_id 61
  priority 100
  advert_int 1
  mcast_src_ip 192.168.56.169
  nopreempt
  authentication {
    auth_type PASS
    auth_pass sqP05dQgMSlzrxHj
  }
  unicast_peer {
    192.168.56.170
    192.168.56.171
  }
  virtual_ipaddress {
    192.168.56.174/24
  }
  track_script {
    CheckK8sMaster
  }
}
EOF
keepalived.conf on lc14:
[root@lc14 ~]# cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
  router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
  script "curl -k https://192.168.56.174:6443"
  interval 3
  timeout 9
  fall 2
  rise 2
}
vrrp_instance VI_1 {
  state BACKUP
  interface ens33
  virtual_router_id 61
  priority 90
  advert_int 1
  mcast_src_ip 192.168.56.170
  nopreempt
  authentication {
    auth_type PASS
    auth_pass sqP05dQgMSlzrxHj
  }
  unicast_peer {
    192.168.56.169
    192.168.56.171
  }
  virtual_ipaddress {
    192.168.56.174/24
  }
  track_script {
    CheckK8sMaster
  }
}
EOF
keepalived.conf on lc15:
[root@lc15 ~]# cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
  router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
  script "curl -k https://192.168.56.174:6443"
  interval 3
  timeout 9
  fall 2
  rise 2
}
vrrp_instance VI_1 {
  state BACKUP
  interface ens33
  virtual_router_id 61
  priority 80
  advert_int 1
  mcast_src_ip 192.168.56.171
  nopreempt
  authentication {
    auth_type PASS
    auth_pass sqP05dQgMSlzrxHj
  }
  unicast_peer {
    192.168.56.169
    192.168.56.170
  }
  virtual_ipaddress {
    192.168.56.174/24
  }
  track_script {
    CheckK8sMaster
  }
}
EOF
2. Start keepalived
systemctl restart keepalived
Verify that the VIP is bound on lc13:
[root@lc13 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:ec:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.56.169/24 brd 192.168.56.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.56.174/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::6dac:68b3:d51c:7934/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:60:6f:7a brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:60:6f:7a brd ff:ff:ff:ff:ff:ff
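To see at a glance which master currently holds the VIP, a small sketch that relies on the passwordless SSH configured earlier (ens33 and the VIP match this environment):
for h in lc13 lc14 lc15; do
  printf '%s: ' "$h"
  ssh "$h" "ip -4 addr show ens33 | grep -q 192.168.56.174 && echo VIP || echo no-VIP"
done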
3. Create etcd Certificates (run on lc13 only)
1. Set up the cfssl environment
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
2. Create the CA configuration files (the IPs below are the etcd node IPs)
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.56.169",
    "192.168.56.170",
    "192.168.56.171"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
3. Distribute the etcd certificates from lc13 to lc14 and lc15:
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n lc14 "mkdir -p /etc/etcd/ssl && exit"
ssh -n lc15 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem lc14:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem lc15:/etc/etcd/ssl/
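Optionally inspect the generated certificate with the cfssl-certinfo tool installed above; the SANs and expiry should match what etcd-csr.json requested:
cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem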
4. Install and Configure etcd (on the three master nodes: lc13, lc14, lc15)
1. Install etcd
yum install etcd -y
mkdir -p /var/lib/etcd
etcd.service on lc13:
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name lc13 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.56.169:2380 \
  --listen-peer-urls https://192.168.56.169:2380 \
  --listen-client-urls https://192.168.56.169:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.56.169:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster lc13=https://192.168.56.169:2380,lc14=https://192.168.56.170:2380,lc15=https://192.168.56.171:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
etcd.service on lc14:
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name lc14 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.56.170:2380 \
  --listen-peer-urls https://192.168.56.170:2380 \
  --listen-client-urls https://192.168.56.170:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.56.170:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster lc13=https://192.168.56.169:2380,lc14=https://192.168.56.170:2380,lc15=https://192.168.56.171:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
etcd.service on lc15:
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name lc15 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.56.171:2380 \
  --listen-peer-urls https://192.168.56.171:2380 \
  --listen-client-urls https://192.168.56.171:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.56.171:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster lc13=https://192.168.56.169:2380,lc14=https://192.168.56.170:2380,lc15=https://192.168.56.171:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
2. Enable autostart (the etcd cluster only comes up once at least 2 members are running; if startup fails, check the messages log)
cd /etc/systemd/system/
mv etcd.service /usr/lib/systemd/system/
systemctl enable etcd
systemctl start etcd
systemctl status etcd
3. Run the following check on the three etcd nodes:
etcdctl --endpoints=https://192.168.56.169:2379,https://192.168.56.170:2379,https://192.168.56.171:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
Or run the same certificate-authenticated check as a single line:
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.56.169:2379,https://192.168.56.170:2379,https://192.168.56.171:2379 cluster-health
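Beyond cluster-health, a simple write/read round trip confirms the cluster really serves requests; a sketch using the same v2 etcdctl flags (the /probe key is arbitrary):
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://192.168.56.169:2379 set /probe ok
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://192.168.56.170:2379 get /probe   # should print "ok" from another member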
5. Install Docker on All Nodes
curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl status docker
systemctl enable docker
kubeadm currently supports Docker up to version 17.03.x, so 18.06.1 cannot be used. Uninstall docker-ce with
rpm -e --nodeps docker-ce-18.06.1.ce-3.el7.x86_64
and then download and install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm and docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm:
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y   # this failed, so download the packages instead:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm   # into /app
wget https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm   # into /app
Installing them with rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm and rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm failed with:
error: Failed dependencies:
docker-selinux conflicts with docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch
The old Docker version and its related dependencies must be removed first:
yum remove docker docker-common container-selinux docker-selinux docker-engine
Then run:
rpm -ivh docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
Edit the configuration file: vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph /app/docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=https://ms3cfraz.mirror.aliyuncs.com
mkdir /app/docker
systemctl enable docker
systemctl start docker
systemctl status docker
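The same two customizations (graph directory and registry mirror) could instead go into /etc/docker/daemon.json; a sketch, assuming you then drop those flags from the unit file so the settings are not defined twice (Docker refuses to start if a setting appears in both places):
cat <<EOF > /etc/docker/daemon.json
{
  "graph": "/app/docker",
  "registry-mirrors": ["https://ms3cfraz.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker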
6. Install and Configure kubeadm
1. Install kubelet, kubeadm, and kubectl on all nodes
yum install -y kubelet kubeadm kubectl   # this installs version 1.11.2
This lab requires version 1.10.0-0, so install instead:
yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0
2. Modify the kubelet configuration file on all nodes:
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Modify this line (change systemd to cgroupfs). Newer versions no longer have it
# (absent in 1.11.2, present in 1.10.0-0):
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# Add this line:
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"
3. After editing the configuration file, be sure to reload it on all nodes:
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
4. Command completion
yum install -y bash-completion   # already present on CentOS 7.3.1611 (Core)
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
7. Initialize the Cluster
1. On lc13, lc14, and lc15, create the cluster init configuration file /root/config.yaml (identical on all three):
cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.56.169:2379
  - https://192.168.56.170:2379
  - https://192.168.56.171:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.56.174"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- lc13
- lc14
- lc15
- 192.168.56.169
- 192.168.56.170
- 192.168.56.171
- 192.168.56.172
- 192.168.56.174
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"
EOF
2. First initialize the cluster on lc13
Note: the config file defines the pod network as 10.244.0.0/16.
kubeadm init --help shows that the default service subnet is 10.96.0.0/12.
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf defaults the cluster DNS address to cluster-dns=10.96.0.10.
Run the following command to initialize:
kubeadm init --config config.yaml
If initialization fails, clean up with:
kubeadm reset
# or
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a | awk '{print $1}' | xargs docker rm -f
systemctl stop kubelet
Expected output on success:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.56.174:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:d90c7367df38583eed25ebc942a63e12844f2d9c9bf7d74f76dac8e7f4da520e
Join the lc14, lc15, and lc16 nodes to the cluster:
kubeadm join 192.168.56.174:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:d90c7367df38583eed25ebc942a63e12844f2d9c9bf7d74f76dac8e7f4da520e
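If this join command is needed later and the token or CA hash is no longer at hand, both can be regenerated on a master; a sketch using standard kubeadm and openssl commands:
kubeadm token create   # prints a fresh token
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'   # prints the discovery-token-ca-cert-hash value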
3. On lc13, run the following:
[root@lc13 ~]# mkdir -p $HOME/.kube
[root@lc13 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@lc13 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
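With the kubeconfig in place, the service CIDR and DNS address mentioned earlier can be confirmed (when the CoreDNS feature gate is on, the service keeps the kube-dns name; its ClusterIP should be 10.96.0.10):
kubectl -n kube-system get svc kube-dns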
4. Distribute the kubeadm-generated certificates and keys to lc14 and lc15:
scp -r /etc/kubernetes/pki lc14:/etc/kubernetes/
scp -r /etc/kubernetes/pki lc15:/etc/kubernetes/
5. Deploy the flannel network (only needs to be run on lc13):
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
## Image version: quay.io/coreos/flannel:v0.10.0-amd64
kubectl create -f kube-flannel.yml
Run:
[root@lc13 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
lc13 NotReady master 8m v1.10.0
lc14 NotReady <none> 4m v1.10.0
lc15 NotReady <none> 4m v1.10.0
# The status may be NotReady the first time; just re-run the command a few times
[root@lc13 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
lc13 Ready master 19m v1.10.0
lc14 Ready <none> 15m v1.10.0
lc15 Ready <none> 15m v1.10.0
[root@lc13 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7997f8864c-g5bft 1/1 Running 0 18m
kube-system coredns-7997f8864c-pllg9 1/1 Running 0 18m
kube-system kube-apiserver-lc13 1/1 Running 0 18m
kube-system kube-controller-manager-lc13 1/1 Running 0 19m
kube-system kube-flannel-ds-amd64-7gx5v 1/1 Running 0 11m
kube-system kube-flannel-ds-amd64-97t6g 1/1 Running 0 11m
kube-system kube-flannel-ds-amd64-mkvdd 1/1 Running 0 11m
kube-system kube-proxy-4rmzp 1/1 Running 0 15m
kube-system kube-proxy-cr6z6 1/1 Running 0 18m
kube-system kube-proxy-tgnmt 1/1 Running 0 15m
kube-system kube-scheduler-lc13 1/1 Running 0 18m
6. Deploy the dashboard
Create the file kubernetes-dashboard.yaml:
touch kubernetes-dashboard.yaml
vim kubernetes-dashboard.yaml
[root@lc13 ~]# cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@lc13 ~]# kubectl create -f kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
serviceaccount "admin-user" created
clusterrolebinding.rbac.authorization.k8s.io "admin-user" created
[root@lc13 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-6bgvm
Namespace: kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=03fe4021-a784-11e8-8ed2-000c2929ec75
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZiZ3ZtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwM2ZlNDAyMS1hNzg0LTExZTgtOGVkMi0wMDBjMjkyOWVjNzUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Nrmdywn5iH-fMleQnUU5K2NXzA5QfHJ4F1LEvkTFTnf_SQ9AYQwzksBUprXJq0LmJupbft1QQy7kPjESn0q3T9GezAhgMUoPXEoJBD_DtOmfdxRlaM2X_MZOLTfyO6VWX1ghXz21jgmTIPsPOURXlopAQJELrjp9Uox0fQrteof6gDH_HuR0q8IwPQF1WCQIgk2X85HnyED_25LfQ2kPd8iG8qSfQwWkucWAMQ7hrKJXT0gJqZBLZnML9Ly935kZyeVoY6CDAN2LyvyzSOb7MLh_wysNejgndmSNSQDEliYL_74h6tL00XBma50COWy7F50DDaAt9ZRBb6GZkOkngw
Open the dashboard in Firefox and log in with the token above:
https://192.168.56.169:30000/#!/login
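Before opening the browser, you can check that the NodePort answers at all; -k is needed because the dashboard serves a self-signed certificate (expect an HTTP status code such as 200):
curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.56.169:30000/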
7. Install heapster
First create the directory layout and configuration files:
mkdir kube-heapster
cd kube-heapster
mkdir influxdb
mkdir rbac
Under kube-heapster/influxdb create three files: grafana.yaml, heapster.yaml, and influxdb.yaml.
Under kube-heapster/rbac create heapster-rbac.yaml.
Create grafana.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-grafana-amd64:v4.4.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
Create heapster.yaml:
[root@lc13 influxdb]# cat heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
Create influxdb.yaml:
[root@lc13 influxdb]# cat influxdb.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-influxdb-amd64:v1.3.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
Create heapster-rbac.yaml:
[root@lc13 rbac]# cat heapster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
[root@lc13 ~]# tree kube-heapster/
kube-heapster/
├── influxdb
│   ├── grafana.yaml
│   ├── heapster.yaml
│   └── influxdb.yaml
└── rbac
    └── heapster-rbac.yaml
2 directories, 4 files
[root@lc13 ~]# kubectl create -f kube-heapster/influxdb/
deployment.extensions "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment.extensions "heapster" created
service "heapster" created
deployment.extensions "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@lc13 kube-heapster]# cd
[root@lc13 ~]# kubectl create -f kube-heapster/rbac/
clusterrolebinding.rbac.authorization.k8s.io "heapster" created
[root@lc13 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7997f8864c-g5bft 1/1 Running 1 2d
kube-system coredns-7997f8864c-pllg9 1/1 Running 1 2d
kube-system heapster-647b89cd4b-4x84v 0/1 Pending 0 14m
kube-system kube-apiserver-lc13 1/1 Running 30 2d
kube-system kube-controller-manager-lc13 1/1 Running 3 2d
kube-system kube-flannel-ds-amd64-7gx5v 1/1 Running 1 2d
kube-system kube-flannel-ds-amd64-97t6g 1/1 Running 1 2d
kube-system kube-flannel-ds-amd64-9lqqd 1/1 Running 1 2d
kube-system kube-flannel-ds-amd64-mkvdd 1/1 Running 1 2d
kube-system kube-proxy-4rmzp 1/1 Running 1 2d
kube-system kube-proxy-cr6z6 1/1 Running 1 2d
kube-system kube-proxy-jwtj2 1/1 Running 1 2d
kube-system kube-proxy-tgnmt 1/1 Running 1 2d
kube-system kube-scheduler-lc13 1/1 Running 3 2d
kube-system kubernetes-dashboard-7b44ff9b77-hsqqx 1/1 Running 1 2d
kube-system monitoring-grafana-74bdd98b7d-dkqk2 0/1 Pending 0 14m
kube-system monitoring-influxdb-55bbd4b96-5v797 0/1 Pending 0 14m
Visit https://192.168.56.169:30000/#!/login to see the monitoring data.
8. Run the initialization on lc14 and lc15
kubeadm init --config config.yaml
# The init output is exactly the same as on the first master (lc13)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Note: lc14 and lc15 were previously joined to the lc13 cluster as worker nodes, so stop kubelet.service before initializing them as masters
[root@lc14 ~]# systemctl stop kubelet.service
[root@lc14 ~]# kubeadm init --config config.yaml
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 0.016134 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node lc14 as master by adding a label and a taint
[markmaster] Master lc14 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: b99a00.a144ef80536d4344
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.56.174:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:d90c7367df38583eed25ebc942a63e12844f2d9c9bf7d74f76dac8e7f4da520e
[root@lc15 ~]# kubeadm init --config config.yaml
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use   # error because kubelet.service was not stopped here
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
[root@lc15 ~]# systemctl stop kubelet.service
[root@lc15 ~]# kubeadm init --config config.yaml
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 0.016196 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node lc15 as master by adding a label and a taint
[markmaster] Master lc15 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: b99a00.a144ef80536d4344
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.56.174:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:d90c7367df38583eed25ebc942a63e12844f2d9c9bf7d74f76dac8e7f4da520e
[root@lc13 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
lc13 Ready master 2d v1.10.0
lc14 Ready <none> 2d v1.10.0
lc15 Ready <none> 2d v1.10.0
lc16 Ready <none> 2d v1.10.0
Running kubectl get nodes again shows that lc13, lc14, and lc15 are all masters:
[root@lc13 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
lc13 Ready master 2d v1.10.0
lc14 Ready master 2d v1.10.0
lc15 Ready master 2d v1.10.0
lc16 Ready <none> 2d v1.10.0
[root@lc13 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system coredns-7997f8864c-g5bft 1/1 Running 1 2d 10.244.0.7 lc13
kube-system coredns-7997f8864c-pllg9 1/1 Running 1 2d 10.244.0.5 lc13
kube-system heapster-647b89cd4b-4x84v 0/1 Pending 0 32m
kube-system kube-apiserver-lc13 1/1 Running 30 2d 192.168.56.169 lc13
kube-system kube-apiserver-lc14 1/1 Running 0 4m 192.168.56.170 lc14
kube-system kube-apiserver-lc15 1/1 Running 0