
k8s + Docker in Practice (Long Form)

All files used in this article are in this archive:

Link: https://pan.baidu.com/s/1ib7pUGtEDp_DqsuO5jOrAA  Password: vvtx

This article is based on Hek_watermelon's blog post and fixes several problems encountered during deployment; see https://blog.csdn.net/hekanhyde/article/details/78595236

Let's begin.

Installing Docker

CentOS 6:

yum install docker-engine-1.7.1-1.el6.x86_64.rpm

CentOS 7:

First remove any earlier Docker packages:

yum remove docker docker-common docker-selinux docker-engine -y

Install the prerequisites for the official yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2

Since the official source is blocked without a proxy, use the Aliyun mirror instead:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum-config-manager --enable docker-ce-edge

yum-config-manager --enable docker-ce-testing

yum-config-manager --disable docker-ce-edge

yum erase docker-engine-selinux -y

yum makecache fast

Install docker-ce:

yum install docker-ce -y

Install whatever dependencies yum reports as missing. If the machine cannot reach the Internet directly, configure a yum proxy: copy docker-ce.repo into /etc/yum.repos.d and set its proxy to a server that has Internet access.
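For example, a sketch of such a proxy setting (the proxy host and port below are placeholders for your own Internet-facing server; the repo section mirrors the standard docker-ce.repo layout):

```ini
# /etc/yum.repos.d/docker-ce.repo (fragment)
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
# hypothetical proxy endpoint; point it at a host that can reach the Internet
proxy=http://proxy.example.com:3128
```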

The steps below are the same on CentOS 6 and 7.

Because of the same network restrictions, edit /etc/docker/daemon.json, otherwise images cannot be pulled.

Add:

{

  "registry-mirrors":["https://registry.docker-cn.com"]

}

service docker start

cfssl

Install the cfssl certificate tools:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

chmod +x cfssl_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

chmod +x cfssljson_linux-amd64

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH

Creating the etcd certificates and keys

Create the etcd root CA certificate signing request:

mkdir ~/etcd_ssl ~/kubernets_ssl

cd ~/etcd_ssl/

cat > etcd-root-ca-csr.json << EOF

{

 "key": {

   "algo": "rsa",

   "size": 4096

  },

 "names": [

    {

     "O": "etcd",

     "OU": "etcd Security",

     "L": "Beijing",

     "ST": "Beijing",

     "C": "CN"

    }

  ],

 "CN": "etcd-root-ca"

}

EOF

Create the etcd cluster certificate signing configuration:

cat > etcd-gencert.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "etcd": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

Generate the etcd certificate signing request (CSR); replace the three IPs in "hosts" with your own cluster's IP addresses:

cat > etcd-csr.json << EOF

{

 "key": {

   "algo": "rsa",

   "size": 4096

  },

 "names": [

    {

     "O": "etcd",

     "OU": "etcd Security",

     "L": "Beijing",

     "ST": "Beijing",

     "C": "CN"

    }

  ],

 "CN": "etcd",

 "hosts": [

   "127.0.0.1",

   "localhost",

   "172.16.68.83",  //此三行替換需安裝k8s叢集IP地址

   "172.16.68.85",

   "172.16.68.86"

  ]

}

EOF

"hosts":表明指定授權使用該證書的 etcd 節點 IP,如果只寫127.0.0.1,和本機網絡卡IP,則需要在3臺etcd節點上分別進行證書籤名請求,本次為了方便將所有節點的IP都寫入。後續只需要將證書進行復制即可

Create the root CA:

cfssl gencert --initca=true etcd-root-ca-csr.json | cfssljson --bare etcd-root-ca

Generate the etcd certificate:

cfssl gencert --ca etcd-root-ca.pem \
  --ca-key etcd-root-ca-key.pem \
  --config etcd-gencert.json \
  --profile=etcd etcd-csr.json | cfssljson --bare etcd

Remove the .csr and .json files, then copy the generated .pem files into /etc/etcd/ssl/ on every etcd node (the etcd.conf below expects them there):

rm *.csr *.json

Generating the Kubernetes certificates and keys

Create the Kubernetes root CA certificate signing request:

cd ~/kubernets_ssl/

cat > k8s-root-ca-csr.json << EOF

{

 "CN": "kubernetes",

 "key": {

   "algo": "rsa",

   "size": 4096

  },

 "names": [

    {

     "C": "CN",

     "ST": "BeiJing",

     "L": "BeiJing",

     "O": "k8s",

     "OU": "System"

    }

  ]

}

EOF

Create the certificate signing configuration used by kube-apiserver:

cat > k8s-gencert.json << EOF

{

 "signing": {

   "default": {

     "expiry": "87600h"

   },

   "profiles": {

     "kubernetes": {

       "usages": [

           "signing",

           "key encipherment",

           "server auth",

           "client auth"

       ],

       "expiry": "87600h"

     }

    }

  }

}

EOF

Generate the kube-apiserver certificate signing request (CSR); again, replace the three node IPs with your own:

cat > kubernetes-csr.json << EOF

{

   "CN": "kubernetes",

   "hosts": [

       "127.0.0.1",

       "10.254.0.1",

       "172.16.68.83",  //下三行換成自己IP

       "172.16.68.85",

       "172.18.68.86",

       "localhost",

       "kubernetes",

       "kubernetes.default",

       "kubernetes.default.svc",

       "kubernetes.default.svc.cluster",

       "kubernetes.default.svc.cluster.local"

   ],

   "key": {

       "algo": "rsa",

       "size": 2048

   },

   "names": [

       {

           "C": "CN",

           "ST": "BeiJing",

           "L": "BeiJing",

           "O": "k8s",

           "OU":"System"

       }

    ]

}

EOF

Create the Kubernetes root CA:

cfssl gencert --initca=true k8s-root-ca-csr.json \
| cfssljson --bare k8s-root-ca

This produces the root CA files (k8s-root-ca.csr, k8s-root-ca.pem, k8s-root-ca-key.pem). Next, generate the kube-apiserver certificate:

cfssl gencert --ca=k8s-root-ca.pem \
  --ca-key=k8s-root-ca-key.pem \
  --config k8s-gencert.json \
  --profile kubernetes kubernetes-csr.json | cfssljson --bare kubernetes

Generate the admin (kubectl) certificate signing request (CSR):

cat > admin-csr.json << EOF

{

 "CN": "admin",

 "hosts": [],

 "key": {

   "algo": "rsa",

   "size": 2048

  },

 "names": [

    {

     "C": "CN",

     "ST": "BeiJing",

     "L": "BeiJing",

     "O": "system:masters",

     "OU": "System"

    }

  ]

}

EOF

Generate the admin certificate:

cfssl gencert --ca=k8s-root-ca.pem \
  --ca-key=k8s-root-ca-key.pem \
  --config k8s-gencert.json \
  --profile kubernetes admin-csr.json | cfssljson --bare admin

Generate the kube-proxy certificate signing request (CSR):

cat > kube-proxy-csr.json << EOF

{

 "CN": "system:kube-proxy",

 "hosts": [],

 "key": {

   "algo": "rsa",

   "size": 2048

  },

 "names": [

    {

     "C": "CN",

     "ST": "BeiJing",

     "L": "BeiJing",

     "O": "k8s",

     "OU": "System"

    }

  ]

}

EOF

Generate the kube-proxy certificate:

cfssl gencert --ca=k8s-root-ca.pem \
  --ca-key=k8s-root-ca-key.pem \
  --config k8s-gencert.json \
  --profile kubernetes kube-proxy-csr.json | cfssljson --bare kube-proxy

rm *.csr *.json

Everything below assumes CentOS 7.

Building the etcd cluster

Install etcd:

wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz

tar -xvf etcd-v3.1.5-linux-amd64.tar.gz

mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin

cat > /usr/lib/systemd/system/etcd.service << 'EOF'

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

EnvironmentFile=-/etc/etcd/etcd.conf

User=etcd

# set GOMAXPROCS to number of processors

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/local/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

EOF

Alternatively, just run yum install etcd. Note that in that case the etcd binary lives under /usr/bin, so the ExecStart in the corresponding etcd.service points there too (the package also creates the etcd user that the unit runs as).
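If you take the yum route, only the binary path in ExecStart changes; a sketch of that line (assuming the packaged unit layout, everything else as in the unit above):

```ini
# /usr/lib/systemd/system/etcd.service (yum package layout)
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
```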

Configure the etcd environment file:

vi /etc/etcd/etcd.conf

Example: the configuration files of two servers are shown below; note what differs between them and adjust the IPs for your own machines.

172.16.68.83: etcd.conf

# [member]

ETCD_NAME=cluster1

ETCD_DATA_DIR="/var/lib/etcd/cluster1.etcd"

ETCD_WAL_DIR="/var/lib/etcd/wal"

ETCD_SNAPSHOT_COUNT="100"

ETCD_HEARTBEAT_INTERVAL="100"

ETCD_ELECTION_TIMEOUT="1000"

ETCD_LISTEN_PEER_URLS="https://172.16.68.83:2380"

ETCD_LISTEN_CLIENT_URLS="https://172.16.68.83:2379,http://127.0.0.1:2379"

ETCD_MAX_SNAPSHOTS="5"

ETCD_MAX_WALS="5"

#ETCD_CORS=""

# [cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.68.83:2380"

# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

ETCD_INITIAL_CLUSTER="cluster1=https://172.16.68.83:2380,cluster2=https://172.16.68.85:2380,cluster3=https://172.16.68.86:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="https://172.16.68.83:2379"

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""

#ETCD_STRICT_RECONFIG_CHECK="false"

#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]

#ETCD_PROXY="off"

#ETCD_PROXY_FAILURE_WAIT="5000"

#ETCD_PROXY_REFRESH_INTERVAL="30000"

#ETCD_PROXY_DIAL_TIMEOUT="1000"

#ETCD_PROXY_WRITE_TIMEOUT="5000"

#ETCD_PROXY_READ_TIMEOUT="0"

# [security]

ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"

ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"

ETCD_CLIENT_CERT_AUTH="true"

ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"

ETCD_AUTO_TLS="true"

ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"

ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"

ETCD_PEER_CLIENT_CERT_AUTH="true"

ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"

ETCD_PEER_AUTO_TLS="true"

# [logging]

#ETCD_DEBUG="false"

# examples for -log-package-levels etcdserver=WARNING,security=DEBUG

#ETCD_LOG_PACKAGE_LEVELS=""

172.16.68.85: etcd.conf

# [member]

ETCD_NAME=cluster2

ETCD_DATA_DIR="/var/lib/etcd/cluster2.etcd"

ETCD_WAL_DIR="/var/lib/etcd/wal"

ETCD_SNAPSHOT_COUNT="100"

ETCD_HEARTBEAT_INTERVAL="100"

ETCD_ELECTION_TIMEOUT="1000"

ETCD_LISTEN_PEER_URLS="https://172.16.68.85:2380"

ETCD_LISTEN_CLIENT_URLS="https://172.16.68.85:2379,http://127.0.0.1:2379"

ETCD_MAX_SNAPSHOTS="5"

ETCD_MAX_WALS="5"

#ETCD_CORS=""

# [cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.68.85:2380"

# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

ETCD_INITIAL_CLUSTER="cluster1=https://172.16.68.83:2380,cluster2=https://172.16.68.85:2380,cluster3=https://172.16.68.86:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="https://172.16.68.85:2379"

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""

#ETCD_STRICT_RECONFIG_CHECK="false"

#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]

#ETCD_PROXY="off"

#ETCD_PROXY_FAILURE_WAIT="5000"

#ETCD_PROXY_REFRESH_INTERVAL="30000"

#ETCD_PROXY_DIAL_TIMEOUT="1000"

#ETCD_PROXY_WRITE_TIMEOUT="5000"

#ETCD_PROXY_READ_TIMEOUT="0"

# [security]

ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"

ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"

ETCD_CLIENT_CERT_AUTH="true"

ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"

ETCD_AUTO_TLS="true"

ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"

ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"

ETCD_PEER_CLIENT_CERT_AUTH="true"

ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-root-ca.pem"

ETCD_PEER_AUTO_TLS="true"

# [logging]

#ETCD_DEBUG="false"

# examples for -log-package-levels etcdserver=WARNING,security=DEBUG

#ETCD_LOG_PACKAGE_LEVELS=""

Key settings:

ETCD_NAME: the etcd node name. For a static etcd cluster it must match the corresponding name in ETCD_INITIAL_CLUSTER.

ETCD_INITIAL_CLUSTER_STATE: new creates a new cluster; when joining an existing etcd cluster, change this to existing.

ETCD_DATA_DIR: where etcd member and DB data are stored.

ETCD_CLIENT_CERT_AUTH, ETCD_TRUSTED_CA_FILE, ETCD_CERT_FILE, ETCD_KEY_FILE, etc.: the certificates etcd needs for TLS; point them at the certificates created earlier.

This configuration file must be set up on every etcd master.

Then run on every etcd master:

systemctl daemon-reload

systemctl start etcd

systemctl enable etcd

Check node health:

export ETCDCTL_API=3

etcdctl --cacert=/etc/etcd/ssl/etcd-root-ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --endpoints=https://172.16.68.83:2379,https://172.16.68.85:2379,https://172.16.68.86:2379 \
  endpoint health

https://172.16.68.83:2379 is healthy: successfully committed proposal: took = 2.016793ms

https://172.16.68.85:2379 is healthy: successfully committed proposal: took = 2.005839ms

https://172.16.68.86:2379 is healthy: successfully committed proposal: took = 1.167565ms

Installing the kubectl management tool

Copy kubernetes-server-linux-amd64.tar.gz to one of the servers:

tar -zxvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

tar -xzvf kubernetes-src.tar.gz

cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

Distribute the Kubernetes certificates:

cd ~/kubernets_ssl/

for IP in 83 85 86; do
    ssh root@172.16.68.$IP mkdir -p /etc/kubernetes/ssl
    scp *.pem root@172.16.68.$IP:/etc/kubernetes/ssl
    ssh root@172.16.68.$IP chown -R kube:kube /etc/kubernetes/ssl
done

Just substitute your own IPs.

Generate the kubectl kubeconfig file

Run the following on each master:

# Set cluster parameters - embeds the CA cert into ~/.kube/config

kubectl config set-cluster kubernetes \

 --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \

 --embed-certs=true \

 --server=https://172.16.68.83:6443   # change this IP to the corresponding master's IP

# Set client credentials - use the admin certificate pair created earlier

kubectl config set-credentials admin \

 --client-certificate=/etc/kubernetes/ssl/admin.pem \

 --embed-certs=true \

 --client-key=/etc/kubernetes/ssl/admin-key.pem

# Set context parameters

kubectl config set-context kubernetes \

 --cluster=kubernetes \

 --user=admin

# Set the default context

kubectl config use-context kubernetes

Inspect the generated ~/.kube/config:

cat ~/.kube/config

apiVersion: v1

clusters:

- cluster:

   certificate-authority-data:...6VjV4dUFBZ3RQNVA0ZDVRY0wyVmF5KytJVm8rRGpPL2NxMlBCMDhEOWl2cHhvTlNDREhMVUpkMWMKSzVzV1ptY21CbTZVejdNTkxLZHBQNTNpR1ZqSFg3ZFpRbzVZd1R4cEZHNHMrdHpEYWRUTnVyeXpJa2d5cStDYgpxdWUzdmVpR0tGU0IxKzZkMmZCT2ZuRko3K0hxRWZaZDl5VitucTF2TlFOT042SXRIclJSUlBMTkljUWFPTmorCjI0dzZIdGpQeFA0b2wxeC8wcG1BNGJUSkd1aXBIUTAvbGJrZkcyRVpnK2UzcFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==

   server: 172.16.68.83

 name: kubernetes

contexts:

- context:

   cluster: kubernetes

   user: admin

 name: kubernetes

current-context: kubernetes

kind: Config

preferences: {}

users:

- name: admin

 user:

as-user-extra:{}

kubectl is now set up.

Master setup

The kube-apiserver systemd unit file, /usr/lib/systemd/system/kube-apiserver.service:

[Unit]

Description=Kubernetes API Service

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

After=etcd.service

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/apiserver

ExecStart=/usr/local/bin/kube-apiserver \

        $KUBE_LOGTOSTDERR \

        $KUBE_LOG_LEVEL \

        $KUBE_ETCD_SERVERS \

        $KUBE_API_ADDRESS \

        $KUBE_API_PORT \

        $KUBELET_PORT \

        $KUBE_ALLOW_PRIV \

        $KUBE_SERVICE_ADDRESSES \

        $KUBE_ADMISSION_CONTROL \

        $KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

/etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#  kube-apiserver.service

#  kube-controller-manager.service

#  kube-scheduler.service

#  kubelet.service

#  kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to runprivileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://127.0.0.1:8080"

Two example apiserver files again, for comparison.

Node 1: /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

# The address on the local server to listen to.

KUBE_API_ADDRESS="--advertise-address=172.16.68.83 --insecure-bind-address=127.0.0.1 --bind-address=172.16.68.83"

# The port on the local server to listen on.

KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on

#KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.68.83:2379,https://172.16.68.85:2379,https://172.16.68.86:2379"

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!

KUBE_API_ARGS="--authorization-mode=RBAC,Node \
               --runtime-config=batch/v2alpha1=true \
               --anonymous-auth=false \
               --kubelet-https=true \
               --enable-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/token.csv \
               --service-node-port-range=30000-50000 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --etcd-quorum-read=true \
               --storage-backend=etcd3 \
               --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
               --enable-swagger-ui=true \
               --apiserver-count=3 \
               --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h"

Node 2: /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

# The address on the local server to listen to.

KUBE_API_ADDRESS="--advertise-address=172.16.68.85 --insecure-bind-address=127.0.0.1 --bind-address=172.16.68.85"

# The port on the local server to listen on.

KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on

#KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.68.83:2379,https://172.16.68.85:2379,https://172.16.68.86:2379"

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!

KUBE_API_ARGS="--authorization-mode=RBAC,Node \
               --runtime-config=batch/v2alpha1=true \
               --anonymous-auth=false \
               --kubelet-https=true \
               --enable-bootstrap-token-auth \
               --token-auth-file=/etc/kubernetes/token.csv \
               --service-node-port-range=30000-50000 \
               --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
               --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
               --client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
               --etcd-quorum-read=true \
               --storage-backend=etcd3 \
               --etcd-cafile=/etc/etcd/ssl/etcd-root-ca.pem \
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
               --enable-swagger-ui=true \
               --apiserver-count=3 \
               --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
               --audit-log-maxage=30 \
               --audit-log-maxbackup=3 \
               --audit-log-maxsize=100 \
               --audit-log-path=/var/log/kube-audit/audit.log \
               --event-ttl=1h"

Notes:

KUBE_API_ADDRESS: sets the addresses the apiserver listens on; HTTP is bound to 127.0.0.1 (not exposed externally), HTTPS to the host NIC address.

--authorization-mode=RBAC,Node: the Node authorizer is added because, since 1.8, the system:node role is no longer automatically granted to the system:nodes group.

For the same reason, --admission-control also gains the NodeRestriction plugin.

--enable-bootstrap-token-auth: enables token authentication in the apiserver, so kubelets can register using a token.

--token-auth-file=/etc/kubernetes/token.csv: the file recording that token, created below.

--audit-policy-file is added to specify the advanced audit configuration.

--runtime-config=batch/v2alpha1=true is added for CronJob support.

Create the token file, the kubelet TLS config file, the kube-proxy TLS config file, and audit-policy.yaml:

## Set environment variables and generate a random token

export KUBE_APISERVER="https://127.0.0.1:6443"

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

echo "Token: ${BOOTSTRAP_TOKEN}"

## Create the token file

cat > /etc/kubernetes/token.csv << EOF

${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"

EOF

## Create the kubelet and kube-proxy config files

## kubelet config file

kubectl config set-cluster kubernetes \

 --certificate-authority=k8s-root-ca.pem \

 --embed-certs=true \

 --server=${KUBE_APISERVER} \

 --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \

 --token=${BOOTSTRAP_TOKEN} \

 --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \

 --cluster=kubernetes \

 --user=kubelet-bootstrap \

 --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

## kube-proxy config file

kubectl config set-cluster kubernetes \

 --certificate-authority=k8s-root-ca.pem \

 --embed-certs=true \

 --server=${KUBE_APISERVER} \

 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \

 --client-certificate=kube-proxy.pem \

 --client-key=kube-proxy-key.pem \

  --embed-certs=true \

 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \

 --cluster=kubernetes \

 --user=kube-proxy \

 --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

## Generate the advanced audit configuration

cat > audit-policy.yaml << EOF

# Log all requests at the Metadata level.

apiVersion: audit.k8s.io/v1beta1

kind: Policy

rules:

- level: Metadata

EOF

Distribute the token file, the kubelet and kube-proxy kubeconfig files, and audit-policy.yaml to the matching directories on the three masters:

for IP in 83 85 86; do
    scp *.kubeconfig /etc/kubernetes/token.csv audit-policy.yaml root@172.16.68.$IP:/etc/kubernetes
    ssh root@172.16.68.$IP chown -R kube:kube /etc/kubernetes/ssl
done

Set log directory permissions:

for IP in 83 85 86; do
    ssh root@172.16.68.$IP mkdir -p /var/log/kube-audit /usr/libexec/kubernetes
    ssh root@172.16.68.$IP chown -R kube:kube /var/log/kube-audit /usr/libexec/kubernetes
    ssh root@172.16.68.$IP chmod -R 755 /var/log/kube-audit /usr/libexec/kubernetes
done

The kube-controller-manager unit file:

/usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/local/bin/kube-controller-manager \

       $KUBE_LOGTOSTDERR \

       $KUBE_LOG_LEVEL \

       $KUBE_MASTER \

       $KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

/etc/kubernetes/controller-manager

###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \
                              --service-cluster-ip-range=10.254.0.0/16 \
                              --cluster-name=kubernetes \
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
                              --root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
                              --leader-elect=true \
                              --node-monitor-grace-period=40s \
                              --node-monitor-period=5s \
                              --pod-eviction-timeout=5m0s"

The kube-scheduler unit file:

/usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler Plugin

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/local/bin/kube-scheduler \

           $KUBE_LOGTOSTDERR \

           $KUBE_LOG_LEVEL \

           $KUBE_MASTER \

           $KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

/etc/kubernetes/scheduler

###

# kubernetes scheduler config

# default config should be adequate

# Add your own!

KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"

Note that every master needs these unit files and config files created, and every master must start kube-apiserver, kube-controller-manager, and kube-scheduler.

Start the services and check the cluster component status:

sudo systemctl daemon-reload

sudo systemctl start kube-apiserver

sudo systemctl start kube-controller-manager

sudo systemctl start kube-scheduler

sudo systemctl enable kube-apiserver

sudo systemctl enable kube-controller-manager

sudo systemctl enable kube-scheduler

sudo kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

scheduler            Healthy   ok                  

controller-manager   Healthy  ok                  

etcd-1               Healthy   {"health": "true"}  

etcd-2              Healthy   {"health": "true"}  

etcd-0               Healthy   {"health": "true"}

The master nodes are now essentially deployed.

Node setup

In this example, masters 83 and 85 also serve as nodes. Read this whole section before starting to deploy.

Since in real-world scenarios nodes and masters do not share servers, the normal separated procedure is listed here.

A node needs the following installed:

Docker

kubelet

kube-proxy

kubectl

Remember the tarball from the master deployment? It contains these binaries; following the same procedure as before, put kubelet and kube-proxy into /usr/local/bin.

/usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/local/bin/kubelet \

           $KUBE_LOGTOSTDERR \

           $KUBE_LOG_LEVEL \

           $KUBELET_API_SERVER \

           $KUBELET_ADDRESS \

           $KUBELET_PORT \

           $KUBELET_HOSTNAME \

           $KUBE_ALLOW_PRIV \

           $KUBELET_POD_INFRA_CONTAINER \

           $KUBELET_ARGS

Restart=on-failure

[Install]

WantedBy=multi-user.target

/usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/local/bin/kube-proxy \

       $KUBE_LOGTOSTDERR \

       $KUBE_LOG_LEVEL \

       $KUBE_MASTER \

       $KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

## Distribute the Kubernetes certificate (k8s-root-ca.pem):

cd /etc/kubernetes/ssl/

ssh root@172.16.68.87 mkdir /etc/kubernetes/ssl

scp k8s-root-ca.pem root@172.16.68.87:/etc/kubernetes/ssl

## Distribute the bootstrap.kubeconfig and kube-proxy.kubeconfig files, or regenerate the two files on the node

## Option 1: distribute

$ cd /etc/kubernetes/

$ scp *.kubeconfig root@172.16.68.87:/etc/kubernetes

## Option 2: generate the corresponding config files on the node itself

## kubelet config file

$ # Set cluster parameters

$ kubectl config set-cluster kubernetes \

 --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \

 --embed-certs=true \

 --server=${KUBE_APISERVER} \

 --kubeconfig=bootstrap.kubeconfig

$ # Set client credentials

$ kubectl config set-credentials kubelet-bootstrap \

 --token=${BOOTSTRAP_TOKEN} \

 --kubeconfig=bootstrap.kubeconfig

$ # Set context parameters

$ kubectl config set-context default \

  --cluster=kubernetes \

 --user=kubelet-bootstrap \

 --kubeconfig=bootstrap.kubeconfig

$ # Set the default context

$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

$ mv bootstrap.kubeconfig /etc/kubernetes/

#### Note: ${BOOTSTRAP_TOKEN} must be set to the token field from the apiserver's token.csv created earlier

## kube-proxy config file

$ # Set cluster parameters

$ kubectl config set-cluster kubernetes \

 --certificate-authority=/etc/kubernetes/ssl/k8s-root-ca.pem \

 --embed-certs=true \

 --server=${KUBE_APISERVER} \

 --kubeconfig=kube-proxy.kubeconfig

$ # Set client credentials

$ kubectl config set-credentials kube-proxy \

 --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \

 --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \

 --embed-certs=true \

 --kubeconfig=kube-proxy.kubeconfig

$ # Set context parameters

$ kubectl config set-context default \

 --cluster=kubernetes \

 --user=kube-proxy \

 --kubeconfig=kube-proxy.kubeconfig

$ # Set the default context

$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

$ mv kube-proxy.kubeconfig /etc/kubernetes/

### Set owner and group

$ ssh root@172.16.68.87 chown -R kube:kube /etc/kubernetes/ssl

Modify the shared config file

/etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#  kube-apiserver.service

#  kube-controller-manager.service

#  kube-scheduler.service

#  kubelet.service

#  kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to runprivileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver

#KUBE_MASTER="--master=http://127.0.0.1:8080"

/etc/kubernetes/kubelet (note: change the address and hostname to this machine's own):

###

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=172.16.68.87"

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=cluster4"

# location of the api-server

# KUBELET_API_SERVER=""

# Add your own!

KUBELET_ARGS="--cgroup-driver=cgroupfs \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

The pause-amd64 image is blocked by the firewall, so load it into Docker first.

Copy gcr.io_google_containers_pause-amd64_3.0.tar to the server:

docker load -i gcr.io_google_containers_pause-amd64_3.0.tar

/etc/kubernetes/proxy

###

# kubernetes proxy config

# default config should be adequate

# Add your own!

KUBE_PROXY_ARGS="--bind-address=172.16.68.87 \
                 --hostname-override=cluster4 \
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
                 --cluster-cidr=10.254.0.0/16"

Because the HA scheme is implemented with an Nginx reverse proxy, each node runs an Nginx instance that load-balances across the masters.

# Create the config directory

mkdir -p /etc/nginx

# Write the proxy config

cat << EOF > /etc/nginx/nginx.conf

error_log stderr notice;

worker_processes auto;

events {

 multi_accept on;

  use epoll;

 worker_connections 1024;

}

stream {

   upstream kube_apiserver {

       least_conn;

       server 172.16.68.83:6443;

       server 172.16.68.85:6443;

       server 172.16.68.86:6443;

    }

   server {

       listen        0.0.0.0:6443;

       proxy_pass    kube_apiserver;

       proxy_timeout 10m;

       proxy_connect_timeout 1s;

    }

}

EOF

# Fix permissions

chmod +r /etc/nginx/nginx.conf

nginx-proxy.service:

cat << EOF > /etc/systemd/system/nginx-proxy.service

[Unit]

Description=kubernetes apiserver docker wrapper

Wants=docker.socket

After=docker.service

[Service]

User=root

PermissionsStartOnly=true

ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:1.13.5-alpine

ExecStartPre=-/usr/bin/docker rm -f nginx-proxy

ExecStop=/usr/bin/docker stop nginx-proxy

Restart=always

RestartSec=15s

TimeoutStartSec=30s

[Install]

WantedBy=multi-user.target

EOF

Finally, start the Nginx proxy:

systemctl daemon-reload

systemctl start nginx-proxy

systemctl enable nginx-proxy

Adding a Node

# Run this on any master

kubectl create clusterrolebinding kubelet-bootstrap \

 --clusterrole=system:node-bootstrapper \

 --user=kubelet-bootstrap

Then start kubelet:

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet

On any master node, check the certificate signing requests:

kubectl get csr

NAME                                                   AGE       REQUESTOR           CONDITION

node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog   1m       kubelet-bootstrap   Pending

Approve it:

kubectl certificate approve node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog

certificatesigningrequest "node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog" approved

kubectl get csr

NAME                                                  AGE       REQUESTOR           CONDITION

node-csr-NzOwTOc5VkR7vFQyctMb99iKuUX69ls536k39aJLSog   2m       kubelet-bootstrap  Approved,Issued

kubectl get nodes

NAME      STATUS    ROLES     AGE      VERSION

cluster4  Ready     <none>    31s       v1.8.0

Inspect the client certificate pair automatically generated on the node (kubelet-client.crt, kubelet-client.key, kubelet.crt, kubelet.key):

ls /etc/kubernetes/ssl/

k8s-root-ca.pem  kubelet-client.crt  kubelet-client.key  kubelet.crt kubelet.key

Finally, start kube-proxy:

systemctl start kube-proxy

systemctl enable kube-proxy

If you are setting up a node on a master host, you only need to edit bootstrap.kubeconfig and kube-proxy.kubeconfig, changing the server entry from 127.0.0.1 to the master's IP, e.g. 172.16.68.83; masters also do not need the nginx load balancer.
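That server-address change can be scripted with sed; a small sketch on a throwaway kubeconfig fragment (the file layout here is simplified, and 172.16.68.83 is this guide's first master):

```shell
# Point a kubeconfig's server entry at the local master instead of the
# 127.0.0.1 nginx proxy used on worker nodes.
cfg=$(mktemp)
cat << EOF > "$cfg"
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: kubernetes
EOF
sed -i 's@server: https://127.0.0.1:6443@server: https://172.16.68.83:6443@' "$cfg"
line=$(grep 'server:' "$cfg")
echo "$line"
rm -f "$cfg"
```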

Check:

kubectl get nodes

NAME      STATUS    ROLES     AGE      VERSION

cluster1  Ready     <none>    3s       v1.8.0

cluster2  Ready     <none>    8s       v1.8.0

cluster3  Ready     <none>    8s       v1.8.0

cluster4  Ready     <none>    9m       v1.8.0

At this point, node deployment is complete.

Deploying the Calico plugin

Overview:

Calico is a pure layer-3 data center networking solution that integrates seamlessly with IaaS platforms such as OpenStack, providing controllable IP connectivity between VMs, containers, and bare metal. Unlike overlay networks such as flannel or the libnetwork overlay driver, Calico takes a pure layer-3 approach, replacing virtual switching with virtual routing: each virtual router propagates reachability information (routes) to the rest of the data center via BGP.

On every compute node, Calico uses the Linux kernel to implement an efficient vRouter for data forwarding, and each vRouter announces the routes of the workloads running on it to the whole Calico network via BGP. Small deployments can peer directly; large deployments can use designated BGP route reflectors.

Calico can run directly over the existing data center fabric (whether L2 or L3) without extra NAT, tunnels, or overlay networks.

Built on iptables, Calico also provides a rich and flexible network policy model, using ACLs on each node to enforce multi-tenant isolation, security groups, and other reachability restrictions for workloads.

The above is quoted; my own understanding is that Calico gives each container (on the same or different hosts) a unique IP and lets them communicate with one another.

Fetch the latest calico.yaml:

sudo mkdir ~/calico/

cd ~/calico/

Edit the calico.yaml file:

# Replace the Etcd endpoints

sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://172.16.68.83:2379,https://172.16.68.85:2379,https://172.16.68.86:2379\"@gi' calico.yaml

# Replace the Etcd certificates

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`

export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`

export ETCD_CA=`cat /etc/etcd/ssl/etcd-root-ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml

sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml

sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml

sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml

sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml
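The `base64 | tr -d '\n'` pipeline above matters because the value pasted into the manifest must be a single-line base64 string, while base64 wraps its output at 76 columns by default. A quick local demonstration on a dummy file (no real certs involved):

```shell
# Create a 200-byte stand-in for a PEM file.
dummy=$(mktemp)
head -c 200 /dev/zero | tr '\0' 'A' > "$dummy"
wrapped=$(base64 "$dummy" | wc -l)     # several wrapped lines
flat=$(base64 "$dummy" | tr -d '\n')   # one continuous token
rm -f "$dummy"
echo "wrapped lines: $wrapped"
echo "flat length: ${#flat}"
```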

Alternatively, use the calico.yaml from the file bundle; you still need to edit its etcd_endpoints: "…" to your own IPs.

Modify the kubelet configuration

/etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=172.16.68.83"

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=cluster1"

# location of the api-server

# KUBELET_API_SERVER=""

# Add your own!

KUBELET_ARGS="--cgroup-driver=cgroupfs \

             --network-plugin=cni \

              --cluster-dns=10.254.0.2 \

              --resolv-conf=/etc/resolv.conf \

              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \

             --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \

              --fail-swap-on=false \

              --cert-dir=/etc/kubernetes/ssl \

              --cluster-domain=cluster.local. \

              --hairpin-mode=promiscuous-bridge \

             --serialize-image-pulls=false \

             --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

The official documentation requires that the kubelet configuration include the --network-plugin=cni option, hence the change above.

Every node needs this configuration change and a kubelet restart:

systemctl daemon-reload

systemctl restart kubelet

Create the Calico DaemonSet

Create the RBAC resources (you can use the rbac file from the configuration bundle),

then create the Calico DaemonSet:

kubectl create -f calico.yaml

Check the DaemonSet and its pods:

kubectl get pods -n kube-system

NAME                                     READY     STATUS    RESTARTS  AGE

calico-kube-controllers-94b7cb897-krckw   1/1      Running   0          29m

calico-node-5dc8z                         2/2       Running  0          29m

calico-node-gm9k8                         2/2       Running  0          29m

calico-node-kt5fk                         2/2       Running  0          29m

calico-node-xds45                         2/2       Running  0          29m

kubectl get ds -n kube-system

NAME          DESIRED   CURRENT  READY     UP-TO-DATE   AVAILABLE  NODE SELECTOR   AGE

calico-node   4        4         4         4            4           <none>          29m

Restart kubelet and docker:

systemctl restart kubelet

systemctl restart docker

Testing cross-host communication

Create a test deployment:

mkdir ~/demo

cd ~/demo

cat << EOF > demo.deploy.yml

apiVersion: apps/v1beta2

kind: Deployment

metadata:

 name: demo-deployment

spec:

 replicas: 4

 selector:

   matchLabels:

     app: demo

 template:

   metadata:

     labels:

       app: demo

   spec:

     containers:

     - name: demo

       image: mritd/demo

       imagePullPolicy: IfNotPresent

       ports:

       - containerPort: 80

EOF

kubectl create -f demo.deploy.yml

Verify connectivity:

kubectl get pod -o wide

NAME                               READY     STATUS   RESTARTS   AGE       IP               NODE

demo-deployment-5fc9c54fb4-5pgfk   1/1      Running   0          2m        192.168.177.65   cluster4

demo-deployment-5fc9c54fb4-5svgl   1/1      Running   0          2m        192.168.33.193   cluster1

demo-deployment-5fc9c54fb4-dfcfd   1/1      Running   0          2m        192.168.188.1    cluster2

demo-deployment-5fc9c54fb4-dttvb   1/1      Running   0          2m       192.168.56.65    cluster3

kubectl exec -ti demo-deployment-5fc9c54fb4-5svgl bash

bash-4.3# ping 192.168.56.66

PING 192.168.56.66 (192.168.56.66): 56 data bytes

64 bytes from 192.168.56.66: seq=0 ttl=62 time=0.407 ms

^C

--- 192.168.56.66 ping statistics ---

1 packets transmitted, 1 packets received, 0% packet loss

round-trip min/avg/max = 0.407/0.407/0.407 ms

At this point, the Calico cluster networking component is set up.

Deploying the kube-dns plugin

Overview:

kube-dns assigns subdomains to Kubernetes services so that they can be reached by name within the cluster. Typically kube-dns gives each service an A record of the form "<service name>.<namespace>.svc.cluster.local", which resolves to the service's cluster IP.

In practice, a service in the default namespace can be reached simply by its service name; a service in another namespace can be reached as "<service name>.<namespace>".

The above is quoted; my own understanding: the previous plugin, Calico, gives every pod a unique IP, but pods cannot rely on those IPs to reach each other, since the IPs are generated dynamically and not known across nodes. kube-dns creates a domain for each service (and records for the pods behind it), so components can reach each other by service name.
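The naming rule can be illustrated by assembling the record a pod would look up (pure string assembly, no cluster required; nginx-service is the service created later in this guide):

```shell
# In-cluster A record: <service>.<namespace>.svc.<cluster-domain>
svc=nginx-service
ns=default
domain=cluster.local   # matches --cluster-domain in the kubelet config
fqdn="${svc}.${ns}.svc.${domain}"
echo "$fqdn"
# From a pod in the same namespace, the short name "nginx-service" also
# resolves, because resolv.conf search domains expand it to this FQDN.
```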

Copy the kubedns folder from the file bundle to the server.

Deploy the service

If external network access is difficult, you can load the required images first; they are in the images folder.

kubectl create -f kube-dns.yaml

Testing kube-dns

Create two sets of Pods and Services, enter a Pod, and curl the other Service by name to see whether it resolves; also test whether external domains resolve.

# Create a test deployment

cat > test.deploy.yml << EOF

apiVersion: apps/v1beta2

kind: Deployment

metadata:

 name: nginx-deployment

spec:

 replicas: 3

 selector:

   matchLabels:

     app: nginx

 template:

   metadata:

     labels:

       app: nginx

   spec:

     containers:

     - name: nginx

       image: nginx:1.13.5-alpine

       imagePullPolicy: IfNotPresent

       ports:

       - containerPort: 80

EOF

# Create the service for test.deploy

cat > test.service.yml << EOF

kind: Service

apiVersion: v1

metadata:

 name: nginx-service

spec:

 selector:

   app: nginx

 ports:

    - protocol: TCP

     port: 80

     targetPort: 80

     nodePort: 31000

 type: NodePort

EOF

# Create a service for the earlier demo deployment

cat > demo.service.yml << EOF

kind: Service

apiVersion: v1

metadata:

 name: demo-service

spec:

 selector:

   app: demo

 ports:

    - protocol: TCP

     port: 80

     targetPort: 80

     nodePort: 31001

 type: NodePort

EOF
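The nodePort values above (31000, 31001) must fall inside the apiserver's NodePort range, which defaults to 30000-32767; a quick check you can reuse when picking ports:

```shell
# Default kube-apiserver --service-node-port-range is 30000-32767.
check_nodeport() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "$1 ok"
  else
    echo "$1 out of range"
  fi
}
check_nodeport 31000
check_nodeport 31001
```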

# Create them:

kubectl create -f test.deploy.yml

kubectl create -f test.service.yml

kubectl create -f demo.service.yml

Check:

kubectl get pods -o wide

NAME                                READY     STATUS   RESTARTS   AGE       IP               NODE

demo-deployment-5fc9c54fb4-5pgfk    1/1      Running   1          5h        192.168.177.66   node.132

demo-deployment-5fc9c54fb4-5svgl    1/1      Running   1          5h       192.168.33.194   node.131

demo-deployment-5fc9c54fb4-dfcfd    1/1      Running   1          5h       192.168.188.2    node.133

demo-deployment-5fc9c54fb4-dttvb    1/1      Running   1          5h       192.168.56.66    node.134

nginx-deployment-5d56d45798-24ptc   1/1      Running   0          1m       192.168.33.195   node.131

nginx-deployment-5d56d45798-gjr6s   1/1      Running   0          1m        192.168.188.3    node.133

nginx-deployment-5d56d45798-wtfcg   1/1      Running   0          1m       192.168.177.68   node.132

kubectl get service -o wide

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE       SELECTOR

demo-service    NodePort   10.254.23.220   <none>        80:31001/TCP   1m       app=demo

kubernetes      ClusterIP   10.254.0.1      <none>        443/TCP        22h       <none>

nginx-service   NodePort   10.254.197.49   <none>        80:31000/TCP   1m       app=nginx

# Test DNS resolution from inside a pod

kubectl exec -ti demo-deployment-5fc9c54fb4-5svgl bash

bash-4.3# curl http://nginx-service

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

   body {

       width: 35em;

       margin: 0 auto;

       font-family: Tahoma, Verdana, Arial, sans-serif;

    }

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

# Test DNS resolution for external domains

bash-4.3# curl https://www.baidu.com

<!DOCTYPE html>

<!--STATUS OK--><html><head>...使用百度前必讀</a>&nbsp; <ahref=http://jianyi.baidu.com/ class=cp-feedback>意見反饋</a>&nbsp;京ICP證030173號&nbsp;<img src=//www.baidu.com/img/gs.gif> </p> </div> </div></div> </body> </html>

Deploying DNS horizontal autoscaling

The two remaining yaml files in the file bundle are for this; they automatically scale the number of DNS replicas according to the number of nodes and pods.

kubectl create -f dns-horizontal-autoscaler-rbac.yaml

kubectl create -f dns-horizontal-autoscaler.yaml

Deploying the Traefik plugin

Overview:

My understanding: external clients need to reach services inside the k8s cluster. Normally nginx could map request paths to internal IPs and paths, but once containerized the IPs float, and re-editing the nginx config on every redeploy is not feasible. Traefik performs the translation from external request paths to the corresponding internal services and paths.

Deploy Traefik

Copy the traefik folder from the file bundle to the server:

kubectl apply -f traefik-rbac.yaml

kubectl apply -f traefik-deployment.yaml

type: NodePort: the corresponding service uses NodePort mode and listens on the given port on every node.

Because of a port conflict with the local environment, the web port was changed to 30080 (default 80).

Because of a port conflict with the local environment, the admin port was changed to 38080 (default 8080).

Testing ingress

First set up an ingress for Traefik's own UI:

In ui.yaml, add the domain configured under host to /etc/hosts.
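For example, the /etc/hosts entry maps the ingress host to any node where Traefik's NodePort is listening (demo.bs.com and 172.16.68.83 are the host and node IP used elsewhere in this guide; substitute the host actually configured in your ui.yaml):

```
# /etc/hosts — resolve the ingress host to a cluster node
172.16.68.83 demo.bs.com
```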

Visit

172.16.68.83:38080

Seeing the page means it works.

Test:

kubectl get svc

NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE

...

demo-service           NodePort    10.254.23.220    <none>        80:31001/TCP   7d

...

nginx-service          NodePort    10.254.197.49    <none>        80:31000/TCP   7d

Using the two services created earlier, apply the manifest from the file bundle:

kubectl create -f demo-path-ingress.yaml

demo.bs.com:30080/demo   demo.bs.com:30080/nginx

Both pages should load.

At this point, Traefik deployment is complete.

Deploying Harbor:

Overview:

An enterprise-grade private image registry. That's it.

Install docker-compose:

curl -L https://github.com/docker/compose/releases/download/1.17.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose

Since HA has not been set up yet, this example runs the Harbor registry on a single server.

Copy harbor-offline-installer-v1.3.0-rc1.tgz from the file bundle to the server,

then extract it and load the images into docker:

tar -zxvf harbor-offline-installer-v1.3.0-rc1.tgz

cd harbor

docker load -i harbor.v1.3.0-rc1.tar.gz

Create the certificates Harbor needs

cat > harbor-csr.json <<EOF

{

 "CN": "harbor",

 "hosts": [

   "127.0.0.1",

   "172.16.68.90"

  ],

 "key": {

   "algo": "rsa",

   "size": 2048

  },

 "names": [

    {

     "C": "CN",

      "ST": "BeiJing",

     "L": "BeiJing",

     "O": "k8s",

     "OU": "System"

    }

  ]

}

EOF

Generate the certificate and key:

cat >/etc/kubernetes/ssl/k8s-gencert.json << EOF

{

 "signing": {

   "default": {

     "expiry": "87600h"

   },

   "profiles": {

     "kubernetes": {

       "usages": [

           "signing",

           "key encipherment",

           "server auth",

            "client auth"

       ],

       "expiry": "87600h"

     }

    }

  }

}

EOF

cfssl gencert -ca=/etc/kubernetes/ssl/k8s-root-ca.pem \

  -ca-key=/etc/kubernetes/ssl/k8s-root-ca-key.pem \

  -config=/etc/kubernetes/ssl/k8s-gencert.json \

  -profile=kubernetes harbor-csr.json | cfssljson -bare harbor

ls harbor*

harbor.csr  harbor-csr.json  harbor-key.pem  harbor.pem

sudo mkdir -p /etc/harbor/ssl

sudo mv harbor*.pem /etc/harbor/ssl

rm /etc/kubernetes/ssl/k8s-root-ca-key.pem

rm harbor.csr  harbor-csr.json

Edit the harbor.cfg file

These parameters need to change:

hostname = 172.16.68.90

ui_url_protocol = https

ssl_cert = /etc/harbor/ssl/harbor.pem

ssl_cert_key = /etc/harbor/ssl/harbor-key.pem

Start it:

./install.sh

Log in with the account admin and the default password Harbor12345 from the harbor.cfg configuration file.

Docker client access:

Copy the CA certificate that signed the harbor certificate into the /etc/docker/certs.d/172.16.68.90 directory:

sudo mkdir -p/etc/docker/certs.d/172.16.68.90

sudo cp /etc/kubernetes/ssl/k8s-root-ca.pem/etc/docker/certs.d/172.16.68.90/ca.crt

Test login:

docker login 172.16.68.90

Username: admin

Password:

Login Succeeded

Test pushing an image:

Create a project named elk in the web UI:

Check the ELK-related images:

# docker images

REPOSITORY