Building a Kubernetes 1.8.10 Cluster
1. Environment
- OS -> CentOS 7.2
- kubernetes -> 1.8.10
- flannel -> 0.9.1
- docker -> 17.12.0-ce
- etcd -> 3.2.12
- dashboard -> 1.8.3
2. Environment configuration
On all nodes:
hosts file
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.50 k8s01
192.168.1.51 k8s02
192.168.1.52 k8s03
Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/enforcing/disabled/g' /etc/selinux/config
setenforce 0
Time synchronization
yum install -y ntp
vi /etc/ntp.conf
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 202.120.2.101 iburst
On the other two nodes, point ntpd at k8s01 instead:
server k8s01 iburst
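The configuration above only defines the time sources; ntpd still has to be started on every node. A minimal sketch:
systemctl enable ntpd && systemctl start ntpd
# Verify synchronization; the peer marked with '*' is the selected source
ntpq -p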
Configure kernel parameters
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
Apply the configuration
modprobe bridge
sysctl -p /etc/sysctl.d/k8s.conf
On CentOS 7.3 and later, load br_netfilter instead: modprobe br_netfilter
Disable swap
swapoff -a
Note: also comment out the swap entry in /etc/fstab so it stays disabled after a reboot.
Set the iptables FORWARD policy
Check the current policy with iptables -nL; if the FORWARD chain policy is already ACCEPT, the commands below can be skipped.
/sbin/iptables -P FORWARD ACCEPT
echo "sleep 60 && /sbin/iptables -P FORWARD ACCEPT" >> /etc/rc.local
Install dependencies
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget
3. Create the CA certificate and keys
(CloudFlare's PKI toolkit cfssl is used to generate the Certificate Authority (CA) certificate and key files.)
These steps run only once, on the master node k8s01; the results are then copied to the other nodes.
Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
mkdir ~/ssl
cd ~/ssl
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "8760h"
}
}
}
}
EOF
ca-config.json: multiple profiles can be defined, each with its own expiry time and usage scenarios; a specific profile is referenced later when signing certificates.
signing: the certificate can be used to sign other certificates; the generated ca.pem carries CA=TRUE.
server auth: a client may use this CA to verify certificates presented by servers.
client auth: a server may use this CA to verify certificates presented by clients.
Create the CA certificate signing request
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
"CN": Common Name. kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to check whether a site is legitimate.
"O": Organization. kube-apiserver extracts this field as the Group the requesting user belongs to.
Generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
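cfssljson -bare ca writes ca.pem, ca-key.pem, and ca.csr into the current directory. As a quick sanity check (using openssl, assumed available on CentOS), confirm the certificate really is a CA:
ls ca.pem ca-key.pem ca.csr
# Basic Constraints should show CA:TRUE
openssl x509 -in ca.pem -noout -text | grep -A1 'Basic Constraints'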
Create the kubernetes certificate signing request file
cat > kubernetes-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.1.50",
"192.168.1.51",
"192.168.1.52",
"10.254.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the kubernetes certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
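Optionally verify that the SANs baked into the certificate match the "hosts" list above:
# The Subject Alternative Name extension should list all the IPs and DNS names
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'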
Create the admin certificate
cat > admin-csr.json << EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
kube-apiserver uses RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods).
kube-apiserver predefines a number of RoleBindings for RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API.
O sets this certificate's Group to system:masters. When a client (e.g. kubectl) presents this certificate to kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the group system:masters is pre-authorized, the client is granted access to all APIs.
Generate the admin certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ll admin*
Create the kube-proxy certificate
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
CN sets this certificate's User to system:kube-proxy.
kube-apiserver's predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related kube-apiserver APIs.
Generate the kube-proxy client certificate and private key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine (the directory must exist on the remote nodes before scp):
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
ssh k8s02 "mkdir -p /etc/kubernetes/ssl" && scp *.pem k8s02:/etc/kubernetes/ssl
ssh k8s03 "mkdir -p /etc/kubernetes/ssl" && scp *.pem k8s03:/etc/kubernetes/ssl
4. Deploy the etcd cluster
wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar -zxf etcd-v3.2.12-linux-amd64.tar.gz
cp etcd-v3.2.12-linux-amd64/etcd* /usr/local/bin
scp etcd-v3.2.12-linux-amd64/etcd* k8s02:/usr/local/bin
scp etcd-v3.2.12-linux-amd64/etcd* k8s03:/usr/local/bin
Create the etcd working directory on all nodes (it stores the cluster state)
mkdir -p /var/lib/etcd
Create the systemd unit file
cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--name k8s01 \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls https://192.168.1.50:2380 \
--listen-peer-urls https://192.168.1.50:2380 \
--listen-client-urls https://192.168.1.50:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.1.50:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster k8s01=https://192.168.1.50:2380,k8s02=https://192.168.1.51:2380,k8s03=https://192.168.1.52:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
cp etcd.service /etc/systemd/system/
scp etcd.service k8s02:/etc/systemd/system/
scp etcd.service k8s03:/etc/systemd/system/
On k8s02 and k8s03, edit /etc/systemd/system/etcd.service and change the following flags to the node's own name and IP:
--name
--initial-advertise-peer-urls
--listen-peer-urls
--listen-client-urls
--advertise-client-urls
Start the service
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd
The node started first will appear to hang for a while, until the other two nodes join.
Verify the service
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem cluster-health
member b031925c340617d5 is healthy: got healthy result from https://192.168.1.51:2379
member c23156aeff22aa06 is healthy: got healthy result from https://192.168.1.52:2379
member d0fbccae5731ed67 is healthy: got healthy result from https://192.168.1.50:2379
cluster is healthy
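You can also list the members; one of them should be marked as the leader:
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem member list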
5. Install flannel
wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar -xzf flannel-v0.9.1-linux-amd64.tar.gz
cp {flanneld,mk-docker-opts.sh} /usr/local/bin
scp {flanneld,mk-docker-opts.sh} k8s02:/usr/local/bin
scp {flanneld,mk-docker-opts.sh} k8s03:/usr/local/bin
Write the pod network configuration into etcd
etcdctl --endpoints=https://192.168.1.50:2379,https://192.168.1.51:2379,https://192.168.1.52:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem mkdir /kubernetes/network
etcdctl --endpoints=https://192.168.1.50:2379,https://192.168.1.51:2379,https://192.168.1.52:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem mk /kubernetes/network/config '{"Network":"10.1.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
Create the systemd unit file
cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \
-etcd-cafile=/etc/kubernetes/ssl/ca.pem \
-etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
-etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
-etcd-endpoints=https://192.168.1.50:2379,https://192.168.1.51:2379,https://192.168.1.52:2379 \
-etcd-prefix=/kubernetes/network
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF
cp flanneld.service /etc/systemd/system/
scp flanneld.service k8s02:/etc/systemd/system/
scp flanneld.service k8s03:/etc/systemd/system/
Start flannel
systemctl daemon-reload && systemctl start flanneld && systemctl enable flanneld
Verify the service
etcdctl --endpoints=https://192.168.1.50:2379,https://192.168.1.51:2379,https://192.168.1.52:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem ls /kubernetes/network/subnets
Expected output:
/kubernetes/network/subnets/10.1.75.0-24
/kubernetes/network/subnets/10.1.22.0-24
/kubernetes/network/subnets/10.1.47.0-24
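On each node, flanneld also writes its subnet lease to /run/flannel/subnet.env and creates a flannel.1 VXLAN interface, both of which can be inspected directly:
# FLANNEL_SUBNET should match one of the subnets listed above
cat /run/flannel/subnet.env
ip addr show flannel.1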
6. Deploy kubectl and create the kubeconfig files
wget https://dl.k8s.io/v1.8.10/kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
cd kubernetes/client/bin/
chmod a+x kube*
cp kube* /usr/local/bin/
Create /root/.kube/config
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.50:6443
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --embed-certs=true --client-key=/etc/kubernetes/ssl/admin-key.pem
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
kubectl config use-context kubernetes
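A quick check that the kubeconfig was written as expected (the apiserver itself is not deployed until section 7, so only the local configuration can be verified at this point):
kubectl config current-context
# should print: kubernetes
kubectl config view --minify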
Create bootstrap.kubeconfig
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
mv token.csv /etc/kubernetes/
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.50:6443 --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mv bootstrap.kubeconfig /etc/kubernetes/
Create kube-proxy.kubeconfig
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.50:6443 --kubeconfig=kube-proxy.kubeconfig
# Set the client authentication parameters
kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
# Set the context parameters
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv kube-proxy.kubeconfig /etc/kubernetes/
Copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig files to /etc/kubernetes on the other nodes
scp /etc/kubernetes/kube-proxy.kubeconfig k8s02:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig k8s02:/etc/kubernetes/
scp /etc/kubernetes/kube-proxy.kubeconfig k8s03:/etc/kubernetes/
scp /etc/kubernetes/bootstrap.kubeconfig k8s03:/etc/kubernetes/
7. Deploy the master node
wget https://dl.k8s.io/v1.8.10/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
Configure and start kube-apiserver
cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--logtostderr=true \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--advertise-address=192.168.1.50 \
--bind-address=192.168.1.50 \
--insecure-bind-address=192.168.1.50 \
--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=8400-10000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://192.168.1.50:2379,https://192.168.1.51:2379,https://192.168.1.52:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--anonymous-auth=false \
--event-ttl=1h \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
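If the service came up cleanly, the health endpoint on the insecure port (8080 by default, bound to 192.168.1.50 above) should answer with "ok":
curl http://192.168.1.50:8080/healthz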
Configure and start kube-controller-manager
cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--logtostderr=true \
--address=0.0.0.0 \
--master=http://192.168.1.50:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-cidr=10.1.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--leader-elect=true \
--v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
Configure and start kube-scheduler
cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--logtostderr=true \
--address=0.0.0.0 \
--master=http://192.168.1.50:8080 \
--leader-elect=true \
--v=2
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
Verify the master
kubectl get componentstatuses
Expected output:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
8. Deploy the node components (all three machines act as nodes)
Configure and start docker
wget https://download.docker.com/linux/static/stable/x86_64/docker-17.12.0-ce.tgz
tar -xvf docker-17.12.0-ce.tgz
cp docker/docker* /usr/local/bin
cat > docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/subnet.env
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/local/bin/dockerd \
$DOCKER_NETWORK_OPTIONS \
--exec-opt native.cgroupdriver=cgroupfs \
--log-level=error \
--log-driver=json-file
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Start the service
cp docker.service /etc/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl status docker
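If dockerd picked up the --bip option that mk-docker-opts.sh generated (passed in via $DOCKER_NETWORK_OPTIONS above), the docker0 bridge should sit inside this node's flannel lease:
cat /run/flannel/docker
# docker0's address should fall within the FLANNEL_SUBNET shown earlier
ip addr show docker0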
Install and configure kubelet
At startup, kubelet sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be granted the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests.
Run on the master node:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
scp -r kubernetes/server/bin/{kube-proxy,kubelet} k8s02:/usr/local/bin/
scp -r kubernetes/server/bin/{kube-proxy,kubelet} k8s03:/usr/local/bin/
Create the kubelet working directory on all nodes
mkdir -p /var/lib/kubelet
Configure and start kubelet
cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=192.168.1.50 \
--hostname-override=192.168.1.50 \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--require-kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--container-runtime=docker \
--cluster-dns=10.254.0.2 \
--cluster-domain=cluster.local \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--serialize-image-pulls=false \
--register-node=true \
--logtostderr=true \
--cgroup-driver=cgroupfs \
--v=2
Restart=on-failure
KillMode=process
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cp kubelet.service /etc/systemd/system/kubelet.service
scp kubelet.service k8s02:/etc/systemd/system/kubelet.service
scp kubelet.service k8s03:/etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
# On k8s02 and k8s03, change --address and --hostname-override to each node's own IP
When kubelet starts for the first time, it sends a certificate signing request to kube-apiserver; a node joins the cluster only after the request has been approved.
Once kubelet has been deployed on all three nodes, perform the approval on the master node.
List the pending signing requests:
kubectl get csr
Approve them (replace xxx with the CSR names returned above):
kubectl certificate approve xxx xxx xxx
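If there are many pending requests, they can be approved in one pass (a convenience one-liner; it assumes every pending CSR belongs to one of your kubelets):
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve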
Verify:
kubectl get node
Configure and start kube-proxy
mkdir -p /var/lib/kube-proxy
cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--bind-address=192.168.1.50 \
--hostname-override=192.168.1.50 \
--cluster-cidr=10.1.0.0/16 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
cp kube-proxy.service /etc/systemd/system/
scp kube-proxy.service k8s02:/etc/systemd/system/
scp kube-proxy.service k8s03:/etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
# On k8s02 and k8s03, change --bind-address and --hostname-override to the matching IP
Configure the DNS add-on
wget https://github.com/kubernetes/kubernetes/releases/download/v1.8.10/kubernetes.tar.gz
tar xzvf kubernetes.tar.gz
cd /root/kubernetes/cluster/addons/dns
mv kubedns-svc.yaml.sed kubedns-svc.yaml
# Replace $DNS_SERVER_IP in the file with 10.254.0.2
sed -i 's/$DNS_SERVER_IP/10.254.0.2/g' ./kubedns-svc.yaml
mv ./kubedns-controller.yaml.sed ./kubedns-controller.yaml
# Replace $DNS_DOMAIN with cluster.local
sed -i 's/$DNS_DOMAIN/cluster.local/g' ./kubedns-controller.yaml
ls *.yaml
kubedns-cm.yaml kubedns-controller.yaml kubedns-sa.yaml kubedns-svc.yaml
kubectl create -f .
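To confirm kube-dns actually resolves service names, a throwaway pod can be used (the busybox image and pod name here are arbitrary choices for illustration):
kubectl run busybox --image=busybox --restart=Never -- sleep 3600
# The lookup should be answered by 10.254.0.2 and resolve to the 10.254.0.1 service IP
kubectl exec busybox -- nslookup kubernetes.default
kubectl delete pod busybox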
Configure the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Modify the Service definition as follows (adding type: NodePort):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
kubectl create -f kubernetes-dashboard.yaml
kubectl get pod -n kube-system -o wide
# Check which node each pod landed on
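Since the Service was switched to NodePort, the dashboard is reachable on a port from the 8400-10000 range configured on the apiserver. Look it up and open it in a browser:
# Find the NodePort mapped to port 443
kubectl get svc kubernetes-dashboard -n kube-system
# Then browse to https://<any-node-ip>:<node-port>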