
Kubernetes Part 2 -- Deploying a k8s Cluster from Binaries



Cluster architecture

Services and ports
etcd                     127.0.0.1:2379, 2380
kubelet                  10250, 10255
kube-proxy               10256
kube-apiserver           6443, 127.0.0.1:8080
kube-scheduler           10251, 10259
kube-controller-manager  10252, 10257

Environment preparation

Host        IP         Memory  Software
k8s-master  10.0.0.11  1G      etcd, api-server, controller-manager, scheduler
k8s-node1   10.0.0.12  2G      etcd, kubelet, kube-proxy, docker, flannel
k8s-node2   10.0.0.13  2G      etcd, kubelet, kube-proxy, docker, flannel
k8s-node3   10.0.0.14  2G      kubelet, kube-proxy, docker, flannel
  • Disable selinux and firewalld

    Also stop NetworkManager and postfix (optional)

  • Set the IP address and hostname

hostnamectl set-hostname <hostname>
sed -i 's/200/IP/g' /etc/sysconfig/network-scripts/ifcfg-eth0
  • Add hosts entries
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3
EOF
  • Create the k8s configuration directory
mkdir /etc/kubernetes
  • On k8s-node3, set up SSH key-based passwordless login to all nodes
ssh-keygen
ssh-copy-id k8s-master
ssh-copy-id k8s-node1
ssh-copy-id k8s-node2
ssh-copy-id k8s-node3

Note: if SSH does not use the default port 22, set the port in ~/.ssh/config:

cat > ~/.ssh/config <<EOF
Port 12345
EOF

Issuing HTTPS certificates

Certificates can be divided into three categories according to what they authenticate:

  • Server certificate (server cert): used by the server side; clients use it to verify the server's identity, e.g. the docker daemon, kube-apiserver
  • Client certificate (client cert): used by the server side to authenticate clients, e.g. etcdctl, etcd proxy, fleetctl, the docker client
  • Peer certificate (peer cert, i.e. both a server cert and a client cert): a two-way certificate used for communication between etcd cluster members

A kubernetes cluster needs the following certificates:

  • etcd nodes need a server cert to identify their own service plus a client cert to talk to the other etcd members, so they use a peer cert
  • the master node needs a server cert to identify the apiserver service and a client cert to connect to the etcd cluster; two separate certificates are specified here.
  • kubectl, calico and kube-proxy only need a client cert, so the hosts field in their certificate request can be empty.
  • the kubelet certificate is special: it is not generated by hand; the node requests it from the apiserver via TLS BootStrap and the controller-manager on the master signs it automatically, producing one client cert and one server cert

Certificates used in this architecture (see the reference documentation):

  • One set of peer certs (etcd-peer): etcd<-->etcd<-->etcd
  • Client cert (client): api-server-->etcd and flanneld-->etcd
  • Server cert (apiserver): clients-->api-server
  • Server cert (kubelet): api-server-->kubelet
  • Client cert (kube-proxy-client): kube-proxy-->api-server

No certificate is used for:

  • If certificates were required, every access to etcd would have to present them; for convenience, etcd also listens on 127.0.0.1 and local access skips certificates.

  • api-server-->controller-manager

  • api-server-->scheduler


On k8s-node3, use the CFSSL toolchain to create the CA certificate, the server certificates and the client certificates.

CFSSL is an open-source PKI/TLS toolkit from CloudFlare. It includes a command-line tool and an HTTP API service for signing, verifying and bundling TLS certificates, and is written in Go.

Github: https://github.com/cloudflare/cfssl
Official site: https://pkg.cfssl.org/
Reference: http://blog.51cto.com/liuzhengwei521/2120535?utm_source=oschina-app


  1. Prepare the certificate-issuing tool CFSSL
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson
  2. Create the CA certificate config file
mkdir /opt/certs && cd /opt/certs
cat > /opt/certs/ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF
  3. Create the CA certificate signing request (CSR) config file
cat > /opt/certs/ca-csr.json <<EOF
{
    "CN": "kubernetes-ca",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
EOF
  4. Generate the CA certificate and private key
[root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json|cfssljson -bare ca - 
2020/12/14 09:59:31 [INFO] generating a new CA key and certificate from CSR
2020/12/14 09:59:31 [INFO] generate received request
2020/12/14 09:59:31 [INFO] received CSR
2020/12/14 09:59:31 [INFO] generating key: rsa-2048
2020/12/14 09:59:31 [INFO] encoded CSR
2020/12/14 09:59:31 [INFO] signed certificate with serial number 541033833394022225124150924404905984331621873569
[root@k8s-node3 certs]# ls 
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
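
The CA is now in place. As an optional sanity check (a sketch, using the cfssl-certinfo binary installed earlier), you can inspect the generated certificate; the validity period should match the 175200h (20 years) configured above:

# Inspect the CA certificate
cfssl-certinfo -cert ca.pem
# Or with openssl, if available:
openssl x509 -in ca.pem -noout -subject -dates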

Deploying the etcd cluster

Hostname    IP         Role
k8s-master  10.0.0.11  etcd leader
k8s-node1   10.0.0.12  etcd follower
k8s-node2   10.0.0.13  etcd follower

  1. On k8s-node3, issue the certificate for communication between etcd nodes
cat > /opt/certs/etcd-peer-csr.json <<EOF
{
    "CN": "etcd-peer",
    "hosts": [
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssljson -bare etcd-peer
2020/12/14 10:05:22 [INFO] generate received request
2020/12/14 10:05:22 [INFO] received CSR
2020/12/14 10:05:22 [INFO] generating key: rsa-2048
2020/12/14 10:05:23 [INFO] encoded CSR
2020/12/14 10:05:23 [INFO] signed certificate with serial number 300469497136552423377618640775350926134698270185
2020/12/14 10:05:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls etcd-peer*
etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem
  2. Install the etcd service on k8s-master, k8s-node1 and k8s-node2
yum -y install etcd
  3. From k8s-node3, copy the certificates to the /etc/etcd directory on k8s-master, k8s-node1 and k8s-node2
cd /opt/certs
scp -rp *.pem [email protected]:/etc/etcd/
scp -rp *.pem [email protected]:/etc/etcd/
scp -rp *.pem [email protected]:/etc/etcd/
  4. On k8s-master, k8s-node1 and k8s-node2, change the owner and group of the certificates
chown -R etcd:etcd /etc/etcd/*.pem
  5. Configure etcd on k8s-master
cat > /etc/etcd/etcd.conf <<EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF
  6. Configure etcd on k8s-node1
cat > /etc/etcd/etcd.conf <<EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_NAME="node2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF
  7. Configure etcd on k8s-node2
cat > /etc/etcd/etcd.conf <<EOF
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_NAME="node3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
EOF
  8. On k8s-master, k8s-node1 and k8s-node2, start the etcd service at the same time and enable it at boot
systemctl start etcd
systemctl enable etcd
  9. Verify the etcd cluster from k8s-master
[root@k8s-master ~]# etcdctl member list
55fcbe0adaa45350: name=node3 peerURLs=https://10.0.0.13:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.13:2379 isLeader=true
cebdf10928a06f3c: name=node1 peerURLs=https://10.0.0.11:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.11:2379 isLeader=false
f7a9c20602b8532e: name=node2 peerURLs=https://10.0.0.12:2380 clientURLs=http://127.0.0.1:2379,https://10.0.0.12:2379 isLeader=false
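
Besides the member list, a quick health check can be run the same way (a sketch; etcdctl here is the v2 client shipped with the yum-installed etcd and uses the local http endpoint):

etcdctl cluster-health
# expected: each member reported as healthy, ending with "cluster is healthy"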

Installing the master node

  1. On k8s-node3, download the binary package, extract it, and push the services needed on the master to k8s-master

    This setup uses the v1.15.4 kubernetes-server binary package

mkdir /opt/softs && cd /opt/softs
wget https://storage.googleapis.com/kubernetes-release/release/v1.15.4/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cd /opt/softs/kubernetes/server/bin/
scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl [email protected]:/usr/sbin/
  2. Issue the client certificate on k8s-node3
cd /opt/certs/
cat > /opt/certs/client-csr.json <<EOF
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json|cfssljson -bare client
2020/12/14 11:24:13 [INFO] generate received request
2020/12/14 11:24:13 [INFO] received CSR
2020/12/14 11:24:13 [INFO] generating key: rsa-2048
2020/12/14 11:24:13 [INFO] encoded CSR
2020/12/14 11:24:13 [INFO] signed certificate with serial number 558115824565037436109754375250535796590542635717
2020/12/14 11:24:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls client*
client.csr  client-csr.json  client-key.pem  client.pem
  3. Issue the kube-apiserver certificate on k8s-node3
cat > /opt/certs/apiserver-csr.json <<EOF
{
    "CN": "apiserver",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF

Note: when a pod is created, the first IP of the clusterIP range (10.254.0.1) is injected into the pod through environment variables as the internal address for reaching the api-server, which enables service discovery.
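
As an optional check once the cluster is fully up (the pod name below is a placeholder), you can confirm that the kubernetes Service owns this first clusterIP and that pods receive it via environment variables:

kubectl get svc kubernetes                      # CLUSTER-IP should be 10.254.0.1
kubectl exec <pod-name> -- env | grep KUBERNETES_SERVICE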

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssljson -bare apiserver
2020/12/14 11:31:42 [INFO] generate received request
2020/12/14 11:31:42 [INFO] received CSR
2020/12/14 11:31:42 [INFO] generating key: rsa-2048
2020/12/14 11:31:42 [INFO] encoded CSR
2020/12/14 11:31:42 [INFO] signed certificate with serial number 418646719184970675117735868438071556604394393673
2020/12/14 11:31:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls apiserver*
apiserver.csr  apiserver-csr.json  apiserver-key.pem  apiserver.pem
  4. Push the certificates from k8s-node3 to k8s-master
scp -rp ca*pem apiserver*pem client*pem [email protected]:/etc/kubernetes

Installing the api-server service

  1. Check the certificates on the master node
[root@k8s-master kubernetes]# ls /etc/kubernetes
apiserver-key.pem  apiserver.pem  ca-key.pem  ca.pem  client-key.pem  client.pem
  2. Configure the api-server audit policy on the master node
cat > /etc/kubernetes/audit.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF
  3. Configure kube-apiserver.service on the master node
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \\
  --audit-log-path /var/log/kubernetes/audit-log \\
  --audit-policy-file /etc/kubernetes/audit.yaml \\
  --authorization-mode RBAC \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \\
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --etcd-cafile /etc/kubernetes/ca.pem \\
  --etcd-certfile /etc/kubernetes/client.pem \\
  --etcd-keyfile /etc/kubernetes/client-key.pem \\
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \\
  --service-account-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --service-node-port-range 30000-59999 \\
  --kubelet-client-certificate /etc/kubernetes/client.pem \\
  --kubelet-client-key /etc/kubernetes/client-key.pem \\
  --log-dir  /var/log/kubernetes/ \\
  --logtostderr=false \\
  --tls-cert-file /etc/kubernetes/apiserver.pem \\
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

For simplicity, apiserver-to-etcd and apiserver-to-kubelet communication share the same client cert.

--audit-log-path /var/log/kubernetes/audit-log \ # audit log path
--audit-policy-file /etc/kubernetes/audit.yaml \ # audit policy file
--authorization-mode RBAC \                      # authorization mode: RBAC
--client-ca-file /etc/kubernetes/ca.pem \        # client CA certificate
--requestheader-client-ca-file /etc/kubernetes/ca.pem \ # request-header CA certificate
--enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \ # admission plugins to enable
--etcd-cafile /etc/kubernetes/ca.pem \          # CA cert for talking to etcd
--etcd-certfile /etc/kubernetes/client.pem \    # client cert for talking to etcd
--etcd-keyfile /etc/kubernetes/client-key.pem \ # client key for talking to etcd
--etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
--service-account-key-file /etc/kubernetes/ca-key.pem \ # CA private key
--service-cluster-ip-range 10.254.0.0/16 \              # clusterIP (VIP) range
--service-node-port-range 30000-59999 \          # NodePort range
--kubelet-client-certificate /etc/kubernetes/client.pem \ # client cert for talking to kubelet
--kubelet-client-key /etc/kubernetes/client-key.pem \ # client key for talking to kubelet
--log-dir  /var/log/kubernetes/ \  # log directory
--logtostderr=false \ # disable logging to stderr so logs go to files
--tls-cert-file /etc/kubernetes/apiserver.pem \            # apiserver serving certificate
--tls-private-key-file /etc/kubernetes/apiserver-key.pem \ # apiserver serving private key
--v 2  # log level 2
Restart=on-failure
  4. On the master node, create the log directory, then start apiserver and enable it at boot
mkdir /var/log/kubernetes
systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
  5. Verify on the master node (scheduler and controller-manager report Unhealthy here because they have not been deployed yet)
[root@k8s-master kubernetes]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-1               Healthy     {"health":"true"}                                                                           
etcd-2               Healthy     {"health":"true"}                                                                           
etcd-0               Healthy     {"health":"true"}

Installing the controller-manager service

  1. Configure kube-controller-manager.service on the master node
cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-controller-manager \\
  --cluster-cidr 172.18.0.0/16 \\
  --log-dir /var/log/kubernetes/ \\
  --master http://127.0.0.1:8080 \\
  --service-account-private-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --root-ca-file /etc/kubernetes/ca.pem \\
  --logtostderr=false \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
  2. Start controller-manager on the master node and enable it at boot
systemctl daemon-reload 
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service

Installing the scheduler service

  1. Configure kube-scheduler.service on the master node
cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-scheduler \\
  --log-dir /var/log/kubernetes/ \\
  --master http://127.0.0.1:8080 \\
  --logtostderr=false \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
  2. Start scheduler on the master node and enable it at boot
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
  3. Verify on the master node
[root@k8s-master kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

Installing the node components

Installing the kubelet service

  1. Issue the kubelet certificate on k8s-node3
cd /opt/certs/
cat > kubelet-csr.json <<EOF
{
    "CN": "kubelet-node",
    "hosts": [
    "127.0.0.1",
    "10.0.0.11",
    "10.0.0.12",
    "10.0.0.13",
    "10.0.0.14",
    "10.0.0.15"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssljson -bare kubelet
2020/12/14 14:55:00 [INFO] generate received request
2020/12/14 14:55:00 [INFO] received CSR
2020/12/14 14:55:00 [INFO] generating key: rsa-2048
2020/12/14 14:55:00 [INFO] encoded CSR
2020/12/14 14:55:00 [INFO] signed certificate with serial number 110678673830256746819664644693971611232380342377
2020/12/14 14:55:00 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls kubelet*
kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem
  2. On k8s-node3, generate the kubelet client credential kubelet.kubeconfig
ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/
# set the cluster parameters
kubectl config set-cluster myk8s \
   --certificate-authority=/opt/certs/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.11:6443 \
   --kubeconfig=kubelet.kubeconfig
# set the client authentication parameters
kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig
# create the context
kubectl config set-context myk8s-context \
   --cluster=myk8s \
   --user=k8s-node \
   --kubeconfig=kubelet.kubeconfig
# switch to the new context
kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
# cat kubelet.kubeconfig 
apiVersion: v1
clusters:
- cluster:    # cluster
    certificate-authority-data:  ... ... # CA certificate
    server: https://10.0.0.11:6443       # apiserver address
  name: myk8s # cluster name
contexts:
- context:     # context
    cluster: myk8s
    user: k8s-node
  name: myk8s-context
current-context: myk8s-context # current context
kind: Config
preferences: {}
users:
- name: k8s-node # user name
  user:
    client-certificate-data: ... ... # client certificate
    client-key-data:         ... ... # client private key
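
To double-check the generated credential before distributing it (an optional sketch), view it with kubectl; the embedded certificate data is hidden in the output:

kubectl config view --kubeconfig=kubelet.kubeconfig
kubectl config current-context --kubeconfig=kubelet.kubeconfig   # should print myk8s-context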
  3. On the master node, create the RBAC authorization resource (a ClusterRoleBinding for the k8s-node user; only needs to be created once)
cat > k8s-node.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
EOF
kubectl create -f k8s-node.yaml
  4. On the node nodes, install docker-ce, start it and enable it at boot; configure a registry mirror and the systemd cgroup driver
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce -y
systemctl enable docker
systemctl start docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker.service
docker info
  5. From k8s-node3, push the kubelet binary, the client credential and the required certificates to the node nodes
cd /opt/certs/
scp -rp kubelet.kubeconfig ca*pem kubelet*pem [email protected]:/etc/kubernetes
scp -rp /opt/softs/kubernetes/server/bin/kubelet [email protected]:/usr/bin/
scp -rp kubelet.kubeconfig ca*pem kubelet*pem [email protected]:/etc/kubernetes
scp -rp /opt/softs/kubernetes/server/bin/kubelet [email protected]:/usr/bin/
  6. On the node nodes, configure kubelet.service, start it and enable it at boot
mkdir /var/log/kubernetes
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service multi-user.target
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \\
  --anonymous-auth=false \\
  --cgroup-driver systemd \\
  --cluster-dns 10.254.230.254 \\
  --cluster-domain cluster.local \\
  --runtime-cgroups=/systemd/system.slice \\
  --kubelet-cgroups=/systemd/system.slice \\
  --fail-swap-on=false \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --tls-cert-file /etc/kubernetes/kubelet.pem \\
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \\
  --hostname-override 10.0.0.12 \\
  --image-gc-high-threshold 90 \\
  --image-gc-low-threshold 70 \\
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \\
  --log-dir /var/log/kubernetes/ \\
  --pod-infra-container-image t29617342/pause-amd64:3.0 \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
Requires=docker.service # dependency on docker
[Service]
ExecStart=/usr/bin/kubelet \
--anonymous-auth=false \         # disable anonymous authentication
--cgroup-driver systemd \        # use systemd as the cgroup driver
--cluster-dns 10.254.230.254 \   # cluster DNS address
--cluster-domain cluster.local \ # cluster DNS domain, must match the DNS service configuration
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on=false \           # do not fail when swap is enabled
--client-ca-file /etc/kubernetes/ca.pem \                # CA certificate
--tls-cert-file /etc/kubernetes/kubelet.pem \            # kubelet certificate
--tls-private-key-file /etc/kubernetes/kubelet-key.pem \ # kubelet private key
--hostname-override 10.0.0.13 \  # kubelet hostname, different on each node
--image-gc-high-threshold 90 \   # image garbage collection always runs above this disk usage (%)
--image-gc-low-threshold 70 \    # image garbage collection never runs below this disk usage (%)
--kubeconfig /etc/kubernetes/kubelet.kubeconfig \ # client credential (kubeconfig)
--pod-infra-container-image t29617342/pause-amd64:3.0 \ # pod infra (pause) container image

Note: the pod infra container image used here is a public image from the user t29617342 on the official registry (Docker Hub)!
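
Optionally, the pause image can be pre-pulled on each node so the first pod does not wait for the download (assumes the node can reach Docker Hub):

docker pull t29617342/pause-amd64:3.0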

  7. Repeat steps 4, 5 and 6 on the other node nodes (note: change the scp target IP and the hostname-override)

  8. Verify on the master node

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
10.0.0.12   Ready    <none>   4m19s   v1.15.4
10.0.0.13   Ready    <none>   13s     v1.15.4

Installing the kube-proxy service

  1. Issue the kube-proxy-client certificate on k8s-node3
cd /opt/certs/
cat > /opt/certs/kube-proxy-csr.json <<EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
EOF
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssljson -bare kube-proxy-client
2020/12/14 16:20:46 [INFO] generate received request
2020/12/14 16:20:46 [INFO] received CSR
2020/12/14 16:20:46 [INFO] generating key: rsa-2048
2020/12/14 16:20:46 [INFO] encoded CSR
2020/12/14 16:20:46 [INFO] signed certificate with serial number 364147028440857189661095322729307531340019233888
2020/12/14 16:20:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls kube-proxy-c*
kube-proxy-client.csr  kube-proxy-client-key.pem  kube-proxy-client.pem  kube-proxy-csr.json
  2. On k8s-node3, generate the kube-proxy client credential kube-proxy.kubeconfig
kubectl config set-cluster myk8s \
   --certificate-authority=/opt/certs/ca.pem \
   --embed-certs=true \
   --server=https://10.0.0.11:6443 \
   --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
   --client-certificate=/opt/certs/kube-proxy-client.pem \
   --client-key=/opt/certs/kube-proxy-client-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context myk8s-context \
   --cluster=myk8s \
   --user=kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
  3. From k8s-node3, push the kube-proxy binary and client credential to the node nodes
scp -rp /opt/certs/kube-proxy.kubeconfig [email protected]:/etc/kubernetes/
scp -rp /opt/certs/kube-proxy.kubeconfig [email protected]:/etc/kubernetes/
scp -rp /opt/softs/kubernetes/server/bin/kube-proxy [email protected]:/usr/bin/
scp -rp /opt/softs/kubernetes/server/bin/kube-proxy [email protected]:/usr/bin/
  4. On the node nodes, configure kube-proxy.service, start it and enable it at boot (note: adjust hostname-override per node)
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \\
  --cluster-cidr 172.18.0.0/16 \\
  --hostname-override 10.0.0.12 \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
--cluster-cidr 172.18.0.0/16 \ # pod IP range

Configuring the flannel network

  1. Install flannel on all nodes (installing it on the master makes testing easier)
yum install flannel -y
mkdir /opt/certs/
  2. On k8s-node3, issue certificates (reusing the client cert) and push them to all other nodes
cd /opt/certs/
scp -rp ca.pem client*pem [email protected]:/opt/certs/
scp -rp ca.pem client*pem [email protected]:/opt/certs/
scp -rp ca.pem client*pem [email protected]:/opt/certs/
  3. Create the flannel key in etcd
# this key defines the pod IP address range
etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'

Note: this may fail with

Error: x509: certificate signed by unknown authority

Retrying a few times usually works.
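
If the error keeps coming back, an alternative sketch is to point the v2 etcdctl at the TLS endpoint explicitly, using the client certificate pushed in step 2:

etcdctl --endpoints=https://10.0.0.11:2379 \
  --ca-file=/opt/certs/ca.pem \
  --cert-file=/opt/certs/client.pem \
  --key-file=/opt/certs/client-key.pem \
  mk /atomic.io/network/config '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'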

  4. On all nodes, configure flanneld, start it and enable it at boot
cat > /etc/sysconfig/flanneld <<EOF
FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"
EOF
systemctl enable flanneld.service
systemctl start flanneld.service
  5. On k8s-node1 and k8s-node2, modify docker.service: add the flannel parameters and enable iptables forwarding
sed -i '/ExecStart/c ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock' /usr/lib/systemd/system/docker.service
sed -i '/ExecStart/i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
systemctl daemon-reload 
systemctl restart docker

When docker starts, it must use the DOCKER_NETWORK_OPTIONS parameters generated by flannel so that both use the same subnet.

[root@k8s-node1 ~]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=172.18.28.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.18.28.1/24 --ip-masq=true --mtu=1450"
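
A quick way to confirm docker picked up the flannel parameters (a sketch; the subnet values differ per node):

cat /run/flannel/docker        # the --bip value is the flannel subnet assigned to this node
ip -4 addr show docker0        # docker0 should carry that --bip address
ip -4 addr show flannel.1      # flannel.1 sits on the same 172.18.0.0/16 network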
  6. Verify connectivity between nodes from the master
# docker0 and flannel.1 are on the same 172.18.x.x network
ifconfig
# start a container on each node node
docker run -it alpine
# check the container IP
ifconfig
# from the master, ping the containers started on the node nodes to verify connectivity
  7. Verify the k8s cluster from the master

① Create a pod resource

kubectl run nginx --image=nginx:1.13 --replicas=2
kubectl get pod -o wide -A

kubectl run will be removed in a future release; use the following instead:

kubectl create deployment test --image=nginx:1.13

Newer k8s versions support the -A flag:

-A, --all-namespaces # if present, list the requested objects across all namespaces

② Create the svc resource

kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
kubectl get svc

③ Verify access

[root@k8s-master ~]# curl -I 10.0.0.12:55531
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 14 Dec 2020 09:27:20 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

[root@k8s-master ~]# curl -I 10.0.0.13:55531
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 14 Dec 2020 09:27:23 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

Enable TAB completion for the kubectl command line:

echo "source <(kubectl completion bash)" >> ~/.bashrc

Taints and tolerations

Node/Pod affinity is used to attract Pods to a group of nodes (by topology domain), either as a preference or as a hard requirement.

Taints are the opposite: applied to a node, they let the node repel a set of Pods.

A taint is a key=value:effect entry defined on a node; it makes the node refuse to run Pods unless a Pod declares a toleration for that taint.

Tolerations are applied to pods and allow (but do not require) pods to be scheduled onto nodes with matching taints.

A toleration is a key/value attribute defined on a Pod object describing which node taints it can tolerate; the scheduler only places a Pod onto a node whose taints the Pod tolerates.

Taints and tolerations work together to ensure pods are not scheduled onto inappropriate nodes. One or more taints applied to a node mark that the node should not accept any Pod that does not tolerate them.

Note: in everyday use pods are never scheduled to the k8s master node; that is because the master node carries a taint.

Evaluating multiple taints and multiple tolerations:

Several taints can be set on one node, and several tolerations on one pod.

Kubernetes processes them like a filter: start from all taints on the node, ignore those matched by the Pod's tolerations, and the remaining taints act on the Pod according to their effect (see the effect types below).


Taints

Taints: an attribute of a node, applied in a label-like way

Taint effect types:

  • NoSchedule: stop scheduling new pods onto this node; pods already running there are not affected.
  • PreferNoSchedule: soft version; prefer scheduling onto other nodes.
  • NoExecute: clear the node. New pods are not admitted and existing pods are evicted. Suitable for taking a node out of service.

A taint with effect NoExecute also affects pods already running on the node:

  • If a pod does not tolerate the NoExecute taint, it is evicted immediately
  • If a pod tolerates the NoExecute taint and its toleration does not specify tolerationSeconds, it keeps running on the node.
  • If a pod tolerates the NoExecute taint but its toleration specifies tolerationSeconds, that value is how long the pod may keep running on the node.

  1. View the node labels
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS     ROLES    AGE   VERSION   LABELS
10.0.0.12   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux
10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
  2. Add a label: node role
kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node=
  3. View the node labels again: the ROLES column of 10.0.0.12 becomes node
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS     ROLES    AGE   VERSION   LABELS
10.0.0.12   NotReady   node     17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux,node-role.kubernetes.io/node=
10.0.0.13   NotReady   <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
  4. Remove the label
kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node-
  5. Add labels: disk type
kubectl label nodes 10.0.0.12 disk=ssd
kubectl label nodes 10.0.0.13 disk=sata
  6. Clean up the other pods
kubectl delete deployments --all
  7. View the current pods: 2 of them
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nginx-6459cd46fd-dl2ct   1/1     Running   1          16h   172.18.28.3   10.0.0.12   <none>           <none>
nginx-6459cd46fd-zfwbg   1/1     Running   0          16h   172.18.98.4   10.0.0.13   <none>           <none>

NoSchedule

  1. Add a taint: NoSchedule based on disk type
kubectl taint node 10.0.0.12 disk=ssd:NoSchedule
  2. View the taint
kubectl describe nodes 10.0.0.12|grep Taint
  3. Scale the deployment
kubectl scale deployment nginx --replicas=5
  4. Check the pods: all new pods are created on 10.0.0.13
kubectl get pod -o wide
  5. Remove the taint
kubectl taint node 10.0.0.12 disk-

NoExecute

  1. Add a taint: NoExecute based on disk type
kubectl taint node 10.0.0.12 disk=ssd:NoExecute
  2. Check the pods: all pods are now on 10.0.0.13; the pods previously on 10.0.0.12 were evicted and recreated on 10.0.0.13
kubectl get pod -o wide
  3. Remove the taint
kubectl taint node 10.0.0.12 disk-

PreferNoSchedule

  1. Add a taint: PreferNoSchedule based on disk type
kubectl taint node 10.0.0.12 disk=ssd:PreferNoSchedule
  2. Scale the deployment down and back up
kubectl scale deployment nginx --replicas=2
kubectl scale deployment nginx --replicas=5
  3. Check the pods: some pods are still created on 10.0.0.12
kubectl get pod -o wide
  4. Remove the taint
kubectl taint node 10.0.0.12 disk-

Tolerations

Tolerations: a pod.spec attribute; a Pod with a matching toleration tolerates the taint and can be scheduled onto a Node that carries it.


  1. View the field documentation
kubectl explain pod.spec.tolerations
  2. Write a deploy resource yaml that tolerates the NoExecute taint
mkdir -p /root/k8s_yaml/deploy && cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "disk"
        operator: "Equal"
        value: "ssd"
        effect: "NoExecute"
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
  3. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy.yaml
  4. View the current pods
kubectl get pod -o wide
  5. Add a taint: NoExecute based on disk type
kubectl taint node 10.0.0.12 disk=ssd:NoExecute
  6. Scale the deployment
kubectl scale deployment nginx --replicas=5
  7. Check the pods: some pods are created on 10.0.0.12 because the taint is tolerated
kubectl get pod -o wide
  8. Remove the taint
kubectl taint node 10.0.0.12 disk-

pod.spec.tolerations examples

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"
---
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoExecute"
  tolerationSeconds: 3600

Notes:

  • key, value and effect must match the taint set on the Node
  • when operator is Exists, value is ignored; only key and effect are needed
  • tolerationSeconds: means the pod tolerates a taint with effect NoExecute; when tolerationSeconds (the toleration time) is specified, it is how long the pod may keep running on the node.

When neither key nor effect is specified and operator is Exists, the toleration matches every taint (all keys, values and effects):

tolerations:
- operator: "Exists"

When effect is not specified, the toleration matches all effects of taints with that key:

tolerations:
- key: "key"
  operator: "Exists"

When there are multiple Masters, to avoid wasting resources you can set:

kubectl taint nodes Node-name node-role.kubernetes.io/master=:PreferNoSchedule

Common resources

The pod resource

A pod consists of at least two containers: the infra (pause) container plus the business container(s).

  • Dynamic pod: its yaml definition is obtained from etcd.

  • Static pod: the kubelet reads the yaml file from a local directory.


  1. On k8s-node1, modify kubelet.service to specify the static pod path; that directory must contain only static pod yaml files
sed -i '22a \ \ --pod-manifest-path /etc/kubernetes/manifest \\' /usr/lib/systemd/system/kubelet.service
mkdir /etc/kubernetes/manifest
systemctl daemon-reload
systemctl restart kubelet.service
  2. On k8s-node1, create the static pod yaml file: the static pod is created immediately, with the local IP appended to its name
cat > /etc/kubernetes/manifest/k8s_pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
EOF
  3. View the pods from the master
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6459cd46fd-dl2ct   1/1     Running   0          51m
nginx-6459cd46fd-zfwbg   1/1     Running   0          51m
test-8c7c68d6d-x79hf     1/1     Running   0          51m
static-pod-10.0.0.12     1/1     Running   0          3s

kubeadm-based k8s deployments are built on static pods.

Static pods:

  • Creating the yaml file immediately creates the pod automatically.

  • Removing the yaml file immediately removes the pod automatically (see the example below).
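
A minimal demonstration of the second point (a sketch, run on k8s-node1 and assuming the file from step 2 is still in place):

# moving the manifest out of the directory removes the static pod
mv /etc/kubernetes/manifest/k8s_pod.yaml /tmp/
kubectl get pod                               # check from the master: static-pod-10.0.0.12 is gone
# moving it back recreates the pod
mv /tmp/k8s_pod.yaml /etc/kubernetes/manifest/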


The secret resource

A secret is a namespace-scoped resource that holds encoded sensitive data such as passwords, keys and certificates.
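
Besides the registry credentials used below, a generic secret can be created from literals; the name db-pass and the key are made up for illustration. Note the values are base64-encoded rather than encrypted:

kubectl create secret generic db-pass --from-literal=password=123456
kubectl get secret db-pass -o yaml        # data.password holds the base64-encoded value
echo MTIzNDU2 | base64 -d                 # decodes back to 123456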


Integrating k8s with Harbor

First set up a Harbor docker image registry with https enabled and create a private project.

Then use a secret resource to hold the credentials used to authenticate when pulling images.


Option 1: the deploy references the secret directly when pulling images

  1. Create the secret resource regcred
kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 [email protected]
  2. View the secret resources
[root@k8s-master ~]# kubectl get secrets 
NAME                       TYPE                                  DATA   AGE
default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
regcred                    kubernetes.io/dockerconfigjson        1      114s
  3. The deploy resource uses the secret's credentials to pull the image
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_secrets.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: nginx
        image: blog.oldqiang.com/oldboy/nginx:1.13
        ports:
        - containerPort: 80
EOF
  4. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy_secrets.yaml
  5. View the current pods: the resource is created successfully
kubectl get pod -o wide

Option 2 (RBAC-style): the deploy pulls images through a ServiceAccount that references the secret

  1. Create the secret resource harbor-secret
kubectl create secret docker-registry harbor-secret --namespace=default --docker-username=admin --docker-password=a123456 --docker-server=blog.oldqiang.com
  2. Create the yaml files for the ServiceAccount and the pod
cd /root/k8s_yaml/deploy
# create the service account
cat > /root/k8s_yaml/deploy/k8s_sa_harbor.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret
EOF
# create the pod
cat > /root/k8s_yaml/deploy/k8s_pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80
EOF
  3. Create the resources
kubectl delete deployments nginx
kubectl create -f k8s_sa_harbor.yaml
kubectl create -f k8s_pod.yaml
  4. View the current pods: the resource is created successfully
kubectl get pod -o wide

The configmap resource

A configmap stores configuration files and can be mounted into pod containers.


  1. Create the configuration file
cat > /root/k8s_yaml/deploy/81.conf <<EOF
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index      index.html index.htm;
        location / {
        }
    }
EOF
  2. Create the configmap resource (multiple --from-file options can be given)
kubectl create configmap 81.conf --from-file=/root/k8s_yaml/deploy/81.conf
  3. View the configmap resource
kubectl get cm
kubectl get cm 81.conf -o yaml
  4. Mount the configmap into a deploy resource
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_cm.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf  # select one of the files in the configmap
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
EOF
  5. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy_cm.yaml
  6. View the current pods
kubectl get pod -o wide
  7. However, volumeMounts can only mount a directory here, so the existing files in /etc/nginx/conf.d are overridden and port 80 becomes inaccessible.

The initContainers resource

Before the pod's main containers start, the initContainers run first to perform initialization.


  1. View the field documentation
kubectl explain pod.spec.initContainers
  2. Mount the configmap into the deploy resource via init containers

Initialization steps:

  • Init container 1: mounts the hostPath volume and the configmap, and copies 81.conf into the hostPath directory
  • Init container 2: mounts the hostPath volume and copies default.conf into it

Finally the Deployment's main container starts and mounts that directory.

cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy_init.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: config
          hostPath:
            path: /mnt
        - name: tmp
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      initContainers:
      - name: cp1
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /nginx_config
          - name: tmp
            mountPath: /tmp
        command: ["cp","/tmp/81.conf","/nginx_config/"]
      - name: cp2
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /nginx_config
        command: ["cp","/etc/nginx/conf.d/default.conf","/nginx_config/"]
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
EOF
  3. Create the deploy resource
kubectl delete deployments nginx
kubectl create -f k8s_deploy_init.yaml
  4. View the current pods
kubectl get pod -o wide -l app=nginx
  5. Verify that both config files exist: 81.conf and default.conf
kubectl exec -ti nginx-7879567f94-25g5s /bin/bash
ls /etc/nginx/conf.d

Common services

RBAC

RBAC: Role-Based Access Control

RBAC is kubernetes' authentication/authorization mechanism; it is enabled by setting --authorization-mode=RBAC on the apiserver.

RBAC authorization takes two steps:

1) Define a role: the role definition specifies the access-control rules for resources;

2) Bind the role: bind a subject to the role to grant it access.


User: sa (ServiceAccount)

Roles: role

  • Namespace-scoped role: Role
    • Role binding (grant): RoleBinding
  • Cluster-wide role: ClusterRole
    • Role binding (grant): ClusterRoleBinding


Usage flow

  • For human users: if a user needs permissions, bind the Role to the User (or Group) (this requires creating the User/Group);

  • For programs: if a program needs permissions, bind the Role to a ServiceAccount (this requires creating the ServiceAccount and specifying it in the deployment). A minimal sketch of both steps follows below.
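
As an illustration of the two steps (the names pod-reader, read-pods and the ServiceAccount test-sa are made up for this sketch; the ServiceAccount would need to exist or be created separately), written in the document's cat > file style:

cat > /root/k8s_yaml/deploy/rbac_example.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                      # step 1: define the role and its access rules
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding               # step 2: bind the role to a subject
metadata:
  namespace: default
  name: read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: test-sa
  namespace: default
EOF
kubectl create -f /root/k8s_yaml/deploy/rbac_example.yaml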


Deploying the DNS service

Deploy coredns (see the official documentation).

  1. On the master node, create the config file coredns.yaml (pinned to node2 via nodeName)
mkdir -p /root/k8s_yaml/dns && cd /root/k8s_yaml/dns
cat > /root/k8s_yaml/dns/coredns.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
  2. Create the resource on the master node (prepare the image coredns/coredns:1.3.1)
kubectl create -f coredns.yaml
  3. View the pod's service account on the master node
kubectl get pod -n kube-system
kubectl get pod -n kube-system coredns-6cf5d7fdcf-dvp8r -o yaml | grep -i ServiceAccount
  4. On the master node, view the coredns service account's cluster role and its binding
kubectl get clusterrole | grep coredns
kubectl get clusterrolebindings | grep coredns
kubectl get sa -n kube-system | grep coredns
  5. On the master node, create the tomcat + mysql deploy resource yaml files
mkdir -p /root/k8s_yaml/tomcat_deploy && cd /root/k8s_yaml/tomcat_deploy
cat > /root/k8s_yaml/tomcat_deploy/mysql-deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: tomcat
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'
EOF
cat > /root/k8s_yaml/tomcat_deploy/mysql-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
EOF
cat > /root/k8s_yaml/tomcat_deploy/tomcat-deploy.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: tomcat
  name: myweb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: kubeguide/tomcat-app:v2
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: 'mysql'
          - name: MYSQL_SERVICE_PORT
            value: '3306'
EOF
cat > /root/k8s_yaml/tomcat_deploy/tomcat-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: tomcat
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30008
  selector:
    app: myweb
EOF
  6. Create the resources on the master node (prepare the images mysql:5.7 and kubeguide/tomcat-app:v2)
kubectl create namespace tomcat
kubectl create -f .
  7. Verify on the master node (the ping below shows 100% packet loss because service clusterIPs do not answer ICMP; what matters is that the name mysql resolves)
[root@k8s-master tomcat_demo]# kubectl get pod -n tomcat
NAME                     READY   STATUS    RESTARTS   AGE
mysql-94f6bbcfd-6nng8    1/1     Running   0          5s
myweb-5c8956ff96-fnhjh   1/1     Running   0          5s
[root@k8s-master tomcat_deploy]# kubectl -n tomcat exec -ti myweb-5c8956ff96-fnhjh /bin/bash
root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# ping mysql
PING mysql.tomcat.svc.cluster.local (10.254.94.77): 56 data bytes
^C--- mysql.tomcat.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@myweb-5c8956ff96-fnhjh:/usr/local/tomcat# exit
exit
  8. Verify DNS
  • On the master node
[root@k8s-master deploy]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
coredns-6cf5d7fdcf-dvp8r   1/1     Running   0          177m   172.18.98.2   10.0.0.13   <none>           <none>
yum install bind-utils -y
dig @172.18.98.2 kubernetes.default.svc.cluster.local +short
  • On a node node (kube-proxy)
yum install bind-utils -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short

Deploying the dashboard service

  1. Use the official config file, slightly modified

For k8s 1.15, instead of dashboard-controller.yaml it is recommended to use the kubernetes-dashboard.yaml from dashboard 1.10.1.

mkdir -p /root/k8s_yaml/dashboard && cd /root/k8s_yaml/dashboard
cat > /root/k8s_yaml/dashboard/kubernetes-dashboard.yaml <<EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF
# Image switched to a domestic (Aliyun) mirror
image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# Service type changed to NodePort to expose a fixed host port
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
  1. Create the resources (pre-pull the image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1)
kubectl create -f kubernetes-dashboard.yaml
  1. Check the existing admin cluster roles
kubectl get clusterrole | grep admin
  1. Create a user (ServiceAccount) and bind it to the existing cluster-admin role (a default user only gets minimal permissions)
cat > /root/k8s_yaml/dashboard/dashboard_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-admin
  namespace: kube-system
EOF
  1. Create the resources
kubectl create -f dashboard_rbac.yaml
  1. View the token of the admin-role user
[root@k8s-master dashboard]# kubectl describe secrets -n kube-system kubernetes-admin-token-tpqs6 
Name:         kubernetes-admin-token-tpqs6
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-admin
              kubernetes.io/service-account.uid: 17f1f684-588a-4639-8ec6-a39c02361d0e

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1354 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA
  1. Open https://10.0.0.12:30001 in Firefox and log in with the token
  2. Generate a certificate to fix Chrome refusing to open the kubernetes dashboard
mkdir /root/k8s_yaml/dashboard/key && cd /root/k8s_yaml/dashboard/key
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=10.0.0.11'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  1. Delete the original certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kube-system
  1. Create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
  1. Delete the pod; a new pod is created automatically and picks up the new certificate
[root@k8s-master key]# kubectl get pod -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6cf5d7fdcf-dvp8r                1/1     Running   0          4h19m
kubernetes-dashboard-5dc4c54b55-sn8sv   1/1     Running   0          41m
kubectl delete pod -n kube-system kubernetes-dashboard-5dc4c54b55-sn8sv
  1. Open https://10.0.0.12:30001 in Chrome and log in with the token
  2. Generate a kubeconfig from the token to work around the quick token-login timeout
DASH_TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLXRwcXM2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxN2YxZjY4NC01ODhhLTQ2MzktOGVjNi1hMzljMDIzNjFkMGUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.JMvv-W50Zala4I0uxe488qjzDZ2m05KN0HMX-RCHFg87jHq49JGyqQJQDFgujKCyecAQSYRFm4uZWnKiWR81Xd7IZr16pu5exMpFaAryNDeAgTAsvpJhaAuumopjiXXYgip-7pNKxJSthmboQkQ4OOmzSHRv7N6vOsyDQOhwGcgZ01862dsjowP3cCPL6GSQCeXT0TX968MyeKZ-2JV4I2XdbkPoZYCRNvwf9F3u74xxPlC9vVLYWdNP8rXRBXi3W_DdQyXntN-jtMXHaN47TWuqKIgyWmT3ZzTIKhKART9_7YeiOAA6LVGtYq3kOvPqyGHvQulx6W2ADjCTAAPovA'
kubectl config set-cluster kubernetes --server=10.0.0.11:6443 --kubeconfig=/root/dashbord-admin.conf
kubectl config set-credentials admin --token=$DASH_TOKEN --kubeconfig=/root/dashbord-admin.conf
kubectl config set-context admin --cluster=kubernetes --user=admin --kubeconfig=/root/dashbord-admin.conf
kubectl config use-context admin --kubeconfig=/root/dashbord-admin.conf
  1. Download it to your workstation for future logins
cd ~
sz dashbord-admin.conf
  1. Open https://10.0.0.12:30001 in Chrome and log in with the kubeconfig file; exec into containers now works

Network

Mapping (Endpoints resource)

  1. View the endpoints resources on the master node
[root@k8s-master ~]# kubectl get endpoints 
NAME         ENDPOINTS        AGE
kubernetes   10.0.0.11:6443   28h
... ...

Endpoints can be used to map an external service into the cluster. Every Service is automatically associated with an Endpoints resource, matched first by label selector and otherwise by the same name.

  1. Prepare an external database on k8s-node2
yum install mariadb-server -y
systemctl start mariadb
mysql_secure_installation
# Interactive prompt answers: n (do not set a root password), then y to the remaining prompts
n
y
y
y
y
mysql -e "grant all on *.* to root@'%' identified by '123456';"

The demo project hard-codes the database connection in tomcat's index.html: username root, password 123456.
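As a quick optional check (assuming the grant above succeeded), the database can be queried over the network from another node:

# From any other node, e.g. k8s-node1 (the mariadb package provides the mysql client)
yum install -y mariadb
mysql -h 10.0.0.13 -uroot -p123456 -e 'show databases;'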

  1. Create the Endpoints and Service yaml file on the master node
cd /root/k8s_yaml/tomcat_deploy
cat > /root/k8s_yaml/tomcat_deploy/mysql_endpoint_svc.yaml <<EOF
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
  namespace: tomcat
subsets:
- addresses:
  - ip: 10.0.0.13
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
--- 
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: tomcat
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
EOF
# For reference, look at how the system's default objects are defined
kubectl get endpoints kubernetes -o yaml
kubectl get svc kubernetes -o yaml

Note: the Service must not use a label selector here!

  1. Create the resources on the master node
kubectl delete deployment mysql -n tomcat
kubectl delete svc mysql -n tomcat
kubectl create -f mysql_endpoint_svc.yaml
  1. On the master node, check the endpoints resource and its association with the svc
kubectl get endpoints -n tomcat
kubectl describe svc -n tomcat
  1. Open http://10.0.0.12:30008/demo/ in a browser

  2. Verify the data in the database on k8s-node2

[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+
[root@k8s-node2 ~]# mysql -e 'use HPE_APP;select * from T_USERS;'
+----+-----------+-------+
| ID | USER_NAME | LEVEL |
+----+-----------+-------+
|  1 | me        | 100   |
|  2 | our team  | 100   |
|  3 | HPE       | 100   |
|  4 | teacher   | 100   |
|  5 | docker    | 100   |
|  6 | google    | 100   |
+----+-----------+-------+

kube-proxy in IPVS mode

  1. Install the required dependencies on the node nodes
yum install ipvsadm conntrack-tools -y
  1. On each node, modify kube-proxy.service to add the parameter
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \\
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \\
  --cluster-cidr 172.18.0.0/16 \\
  --hostname-override 10.0.0.12 \\
  --proxy-mode ipvs \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
--proxy-mode ipvs  # enable IPVS mode

IPVS (LVS) defaults to NAT mode. If the prerequisites for IPVS are not met, kube-proxy automatically falls back to iptables mode.

  1. Restart kube-proxy on the node nodes and check the LVS rules
systemctl daemon-reload 
systemctl restart kube-proxy.service 
ipvsadm -L -n 
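To confirm which mode kube-proxy actually selected, its local status endpoint can also be queried (a quick check, assuming the default metrics bind address 127.0.0.1:10249):

# Prints "ipvs" when IPVS is active, or "iptables" if kube-proxy fell back
curl -s http://127.0.0.1:10249/proxyMode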

Layer 7 load balancing (ingress-traefik)

Ingress involves two components: the Ingress Controller and the Ingress resource.

  • ingress-controller (traefik): the controller service; it uses the host network directly.
  • Ingress resource: forwarding rules that route requests to a given Service based on DNS name (host) or URL path.


Ingress-Traefik

Traefik is an open-source reverse proxy and load balancer. Its biggest strength is direct integration with common microservice systems, enabling automatic, dynamic configuration. Supported backends include Docker, Swarm, Mesos/Marathon, Kubernetes, Consul, Etcd, Zookeeper, BoltDB, the REST API, and more.

Traefik observability options


Create the RBAC objects

  1. Create the RBAC yaml file
mkdir -p /root/k8s_yaml/ingress && cd /root/k8s_yaml/ingress
cat > /root/k8s_yaml/ingress/ingress_rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
EOF
  1. Create the resources
kubectl create -f ingress_rbac.yaml
  1. View the resources
kubectl get serviceaccounts -n kube-system | grep traefik-ingress-controller
kubectl get clusterrole -n kube-system | grep traefik-ingress-controller
kubectl get clusterrolebindings.rbac.authorization.k8s.io -n kube-system | grep traefik-ingress-controller

Deploy the traefik service

  1. Create the traefik DaemonSet yaml file
cat > /root/k8s_yaml/ingress/ingress_traefik.yaml <<EOF
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - operator: "Exists"
      #nodeSelector:
        #kubernetes.io/hostname: master
      # Allow use of the host network and expose fixed host ports via hostPort
      hostNetwork: true
      containers:
      - image: traefik:v1.7.2
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=DEBUG
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
EOF
  1. Create the resources (pre-pull the image: traefik:v1.7.2)
kubectl create -f ingress_traefik.yaml
  1. Open the traefik dashboard at http://10.0.0.12:8080; there are no backend servers yet.

Create the Ingress resource

  1. Check the NAME and PORT of the svc resource to be proxied
[root@k8s-master ingress]# kubectl get svc -n tomcat 
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
mysql   ClusterIP   10.254.71.221    <none>        3306/TCP         4h2m
myweb   NodePort    10.254.130.141   <none>        8080:30008/TCP   8h
  1. Create the Ingress yaml file
cat > /root/k8s_yaml/ingress/ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-myweb
  namespace: tomcat
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: tomcat.oldqiang.com
    http:
      paths:
      - backend:
          serviceName: myweb
          servicePort: 8080
EOF
  1. Create the resource
kubectl create -f ingress.yaml
  1. View the resource
kubectl get ingress -n tomcat

Test access

  1. On Windows, add 10.0.0.12 tomcat.oldqiang.com to C:\Windows\System32\drivers\etc\hosts

  2. Open tomcat directly in a browser: http://tomcat.oldqiang.com/demo/

  3. Open http://10.0.0.12:8080 again; the BACKENDS section now lists servers
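If editing the Windows hosts file is inconvenient, the same Ingress rule can be exercised from any Linux host with curl by setting the Host header (assuming traefik is bound to the host network on 10.0.0.12, as above):

curl -H 'Host: tomcat.oldqiang.com' http://10.0.0.12/demo/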


Layer 7 load balancing (ingress-nginx)

Six base yaml files:

  • Namespace
  • ConfigMap
  • RBAC
  • Service: adds NodePort ports
  • Deployment: default 404 backend, switched to a domestic Aliyun image
  • Deployment: ingress-controller, switched to a domestic Aliyun image
  1. Prepare the configuration files
mkdir /root/k8s_yaml/ingress-nginx && cd /root/k8s_yaml/ingress-nginx
# Create the ingress-nginx namespace
cat > /root/k8s_yaml/ingress-nginx/namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
EOF
# Create the ConfigMap
cat > /root/k8s_yaml/ingress-nginx/configmap.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF
# Requests for domains with no matching rule are forwarded to the default-http-backend Service, which simply returns 404:
cat > /root/k8s_yaml/ingress-nginx/default-backend.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          # switched to a domestic Aliyun image
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
EOF
# Create the RBAC objects for Ingress, including:
# ServiceAccount, ClusterRole, Role, RoleBinding, ClusterRoleBinding
cat > /root/k8s_yaml/ingress-nginx/rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
EOF
# Create the ingress-controller, which translates newly added Ingress resources into Nginx configuration.
cat > /root/k8s_yaml/ingress-nginx/with-rbac.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          # switched to a domestic Aliyun image
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=\$(POD_NAMESPACE)/default-http-backend
            - --configmap=\$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=\$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=\$(POD_NAMESPACE)/udp-services
            - --publish-service=\$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
EOF
# Create the Service resource that exposes the controller externally
cat > /root/k8s_yaml/ingress-nginx/service-nodeport.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 32080  # http
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 32443  # https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
EOF
  1. Pre-pull the images on all node nodes
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5
docker pull registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/nginx-ingress-controller:0.20.0
docker images
  1. Create the resources
kubectl create -f namespace.yaml
kubectl create -f configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f default-backend.yaml
kubectl create -f with-rbac.yaml
kubectl create -f service-nodeport.yaml
  1. Check the status of the ingress-nginx components
kubectl get all -n ingress-nginx
  1. Access http://10.0.0.12:32080/
[root@k8s-master ingress-nginx]# curl 10.0.0.12:32080
default backend - 404
  1. Prepare a backend Service and create a Deployment (nginx)
cat > /root/k8s_yaml/ingress-nginx/deploy-demon.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-nginx
spec:
  selector:
    app: myapp-nginx
    release: canary
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nginx-deploy
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: myapp-nginx
      release: canary
  template:
    metadata:
      labels:
        app: myapp-nginx
        release: canary
    spec:
      containers:
      - name: myapp-nginx
        image: nginx:1.13
        ports:
        - name: httpd
          containerPort: 80
EOF
  1. Create the resources (pre-pull the image: nginx:1.13)
kubectl apply -f deploy-demon.yaml
  1. View the resources
kubectl get all
  1. Create the Ingress resource: put nginx behind ingress-nginx
cat > /root/k8s_yaml/ingress-nginx/ingress-myapp.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.oldqiang.com
    http:
      paths:
      - path: 
        backend:
          serviceName: myapp-nginx
          servicePort: 80
EOF
  1. Create the resource
kubectl apply -f ingress-myapp.yaml
  1. View the resource
kubectl get ingresses
  1. On Windows, add 10.0.0.12 myapp.oldqiang.com to C:\Windows\System32\drivers\etc\hosts
  2. Open http://myapp.oldqiang.com:32080/ in a browser; the nginx welcome page is shown
  3. Modify the nginx pages so the two pods can be told apart
[root@k8s-master ingress-nginx]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-6b4c84588-crgvr   1/1     Running   0          22m
nginx-deploy-6b4c84588-krvwz   1/1     Running   0          22m
kubectl exec -ti nginx-deploy-6b4c84588-crgvr /bin/bash
echo web1 > /usr/share/nginx/html/index.html
exit
kubectl exec -ti nginx-deploy-6b4c84588-krvwz /bin/bash
echo web2 > /usr/share/nginx/html/index.html
exit
  4. Open http://myapp.oldqiang.com:32080/ and refresh to verify the load balancing
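The round-robin behaviour can also be checked without a browser by hitting the NodePort repeatedly with the Host header set (assuming the web1/web2 pages created above):

# Expect the responses to alternate between web1 and web2
for i in $(seq 4); do curl -s -H 'Host: myapp.oldqiang.com' http://10.0.0.12:32080/; done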


Autoscaling

heapster monitoring

Based on the official heapster 1.5.4 configuration files

  1. Check the existing default heapster cluster role
kubectl get clusterrole | grep heapster
  1. Create the yaml file with the RBAC, Service, and Deployment objects heapster needs
mkdir /root/k8s_yaml/heapster/ && cd /root/k8s_yaml/heapster/
cat > /root/k8s_yaml/heapster/heapster.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.aliyuncs.com/google_containers/heapster-amd64:v1.5.3
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: registry.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
EOF
  1. Create the resources
kubectl create -f heapster.yaml
  1. Newer k8s versions no longer recommend heapster-based autoscaling; force-enable it in the controller-manager configuration:
kube-controller-manager \
--horizontal-pod-autoscaler-use-rest-clients=false
sed -i '8a \ \ --horizontal-pod-autoscaler-use-rest-clients=false \\' /usr/lib/systemd/system/kube-controller-manager.service
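After editing the unit file, reload systemd and restart kube-controller-manager so the flag takes effect:

systemctl daemon-reload
systemctl restart kube-controller-manager.service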
  1. Create the workload resources
cd /root/k8s_yaml/deploy
cat > /root/k8s_yaml/deploy/k8s_deploy3.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
EOF
kubectl create -f k8s_deploy3.yaml
  1. Create the HPA rule
kubectl autoscale deploy nginx --max=6 --min=1 --cpu-percent=5
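The kubectl autoscale command above is equivalent to creating an HPA object; a sketch of the same rule as yaml (autoscaling/v1) if you prefer to keep it in a file (the file path is only an example):

cat > /root/k8s_yaml/deploy/k8s_hpa.yaml <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 6
  targetCPUUtilizationPercentage: 5
EOF
kubectl create -f k8s_hpa.yaml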
  1. View the resources
kubectl get pod
kubectl get hpa
  1. Remove the heapster resources; they are not compatible with metrics-server
kubectl delete -f heapster.yaml
kubectl delete hpa nginx
# Restore the original kube-controller-manager.service configuration
  1. If a node goes NotReady, force-delete its pods
kubectl delete -n kube-system pod Pod_Name --force --grace-period 0

metrics-server

metrics-server Github 1.15

  1. Prepare the yaml files: switch to domestic image addresses (2 images) and adjust a few other parameters
mkdir -p /root/k8s_yaml/metrics/ && cd /root/k8s_yaml/metrics/
cat <<EOF > /root/k8s_yaml/metrics/auth-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
EOF
cat <<EOF > /root/k8s_yaml/metrics/metrics-apiservice.yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
EOF
cat <<EOF > /root/k8s_yaml/metrics/metrics-server.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server-v0.3.3
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.3
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.3
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.3
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
        command:
        - /metrics-server
        - --metric-resolution=30s
        # These are needed for GKE, which doesn't support secure communication yet.
        # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
        #- --kubelet-port=10255
        #- --deprecated-kubelet-completely-insecure=true
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          #- --cpu=80m
          - --extra-cpu=0.5m
          #- --memory=80Mi
          #- --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          #- --minClusterSize={{ metrics_server_min_cluster_size }}
      volumes:
        - name: metrics-server-config-volume
          configMap:
            name: metrics-server-config
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
EOF

Download the upstream configuration files:

for file in auth-delegator.yaml auth-reader.yaml metrics-apiservice.yaml metrics-server-deployment.yaml metrics-server-service.yaml resource-reader.yaml;do wget https://raw.githubusercontent.com/kubernetes/kubernetes/v1.15.0/cluster/addons/metrics-server/$file;done
# use a domestic image mirror
image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3
command:
        - /metrics-server
        - --metric-resolution=30s
# skip TLS verification of the kubelet certificates
        - --kubelet-insecure-tls
# metrics-server resolves node hostnames by default; coredns has no records for the physical hosts, so prefer the node IP
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
... ...
# use a domestic image mirror
        image: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          #- --cpu=80m
          - --extra-cpu=0.5m
          #- --memory=80Mi
          #- --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.3
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          - --minClusterSize=2
# add the nodes/stats permission
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats

Without the parameters above, you may see errors like:

unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-node02: unable to fetch metrics from Kubelet k8s-node02 (10.10.0.13): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:k8s-node01: unable to fetch metrics from Kubelet k8s-node01 (10.10.0.12): request failed - "401 Unauthorized", response: "Unauthorized"]
  1. Create the resources (pre-pull the images: registry.aliyuncs.com/google_containers/addon-resizer:1.8.5 and registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.3)
kubectl create -f .
  1. View the resources, using -l to filter by label
kubectl get pod -n kube-system -l k8s-app=metrics-server
  1. Check resource metrics: this fails at first
kubectl top nodes
  1. Note: with a binary installation, kubelet, kube-proxy, and docker-ce must also be installed on the master node, and the master must join the cluster as a worker node; otherwise connections to metrics-server may fail with a timeout.
kubectl get apiservices v1beta1.metrics.k8s.io -o yaml
# Error message: metrics-server cannot communicate with the apiserver
"metrics-server error "Client.Timeout exceeded while awaiting headers"
  1. For other errors, inspect the APIService and the logs
kubectl describe apiservice v1beta1.metrics.k8s.io
kubectl get pods -n kube-system | grep 'metrics'
kubectl logs metrics-server-v0.3.3-6b7c586ffd-7b4n4 metrics-server -n kube-system
  1. Modify kube-apiserver.service to enable the aggregation layer and use the client certificates
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \\
  --audit-log-path /var/log/kubernetes/audit-log \\
  --audit-policy-file /etc/kubernetes/audit.yaml \\
  --authorization-mode RBAC \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \\
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
  --etcd-cafile /etc/kubernetes/ca.pem \\
  --etcd-certfile /etc/kubernetes/client.pem \\
  --etcd-keyfile /etc/kubernetes/client-key.pem \\
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \\
  --service-account-key-file /etc/kubernetes/ca-key.pem \\
  --service-cluster-ip-range 10.254.0.0/16 \\
  --service-node-port-range 30000-59999 \\
  --kubelet-client-certificate /etc/kubernetes/client.pem \\
  --kubelet-client-key /etc/kubernetes/client-key.pem \\
  --proxy-client-cert-file=/etc/kubernetes/client.pem \\
  --proxy-client-key-file=/etc/kubernetes/client-key.pem \\
  --requestheader-allowed-names= \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --log-dir /var/log/kubernetes/ \\
  --logtostderr=false \\
  --tls-cert-file /etc/kubernetes/apiserver.pem \\
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \\
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart kube-apiserver.service
# Enable the aggregation layer and use the client certificates
--requestheader-client-ca-file /etc/kubernetes/ca.pem \\ # already configured
--proxy-client-cert-file=/etc/kubernetes/client.pem \\
--proxy-client-key-file=/etc/kubernetes/client-key.pem \\
--requestheader-allowed-names= \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\

Note: if --requestheader-allowed-names is not empty, the CN of the --proxy-client-cert-file certificate must appear in allowed-names; the default is aggregator.

If the kube-apiserver host does not run kube-proxy, you also need to add the --enable-aggregator-routing=true parameter.

Note: if the aggregation layer is not enabled on kube-apiserver, metrics-server reports:

I0109 05:55:43.708300       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
Error: cluster doesn't provide requestheader-client-ca-file
  1. Check and modify kubelet.service on every node; otherwise node and pod resource usage cannot be collected:
  • Remove --read-only-port=0
  • Add --authentication-token-webhook=true
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service multi-user.target
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \\
  --anonymous-auth=false \\
  --cgroup-driver systemd \\
  --cluster-dns 10.254.230.254 \\
  --cluster-domain cluster.local \\
  --runtime-cgroups=/systemd/system.slice \\
  --kubelet-cgroups=/systemd/system.slice \\
  --fail-swap-on=false \\
  --client-ca-file /etc/kubernetes/ca.pem \\
  --tls-cert-file /etc/kubernetes/kubelet.pem \\
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \\
  --hostname-override 10.0.0.12 \\
  --image-gc-high-threshold 90 \\
  --image-gc-low-threshold 70 \\
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \\
  --authentication-token-webhook=true \\
  --log-dir /var/log/kubernetes/ \\
  --pod-infra-container-image t29617342/pause-amd64:3.0 \\
  --logtostderr=false \\
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart kubelet.service
  1. Redeploy (self-signed certificates are generated)
cd /root/k8s_yaml/metrics/
kubectl delete -f .
kubectl create -f .
  1. Check resource metrics
[root@k8s-master metrics]# kubectl top nodes
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
10.0.0.11   99m          9%     644Mi           73%       
10.0.0.12   56m          5%     1294Mi          68%       
10.0.0.13   44m          4%     622Mi           33%
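Pod-level metrics should now be available through the same Metrics API:

kubectl top pods -n kube-system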

Dynamic storage

Set up NFS to provide static storage

  1. Install nfs-utils on all nodes
yum -y install nfs-utils
  1. Deploy the NFS service on the master node
mkdir -p /data/tomcat-db
cat > /etc/exports <<EOF
/data    10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
systemctl start nfs
  1. Verify the export from all node nodes
showmount -e 10.0.0.11
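Optionally, a manual mount from a node verifies that NFS itself works before the provisioner depends on it (the mount point /mnt here is just an example):

mount -t nfs 10.0.0.11:/data /mnt
df -h /mnt
umount /mnt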

Configure dynamic storage

When a PVC is created, the matching PV is created automatically.

1. Prepare the yaml files for the StorageClass and the Deployment and RBAC it depends on

mkdir /root/k8s_yaml/storageclass/ && cd /root/k8s_yaml/storageclass/
# Provisioner that automatically creates PVs, backing the StorageClass
cat > /root/k8s_yaml/storageclass/nfs-client.yaml <<EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.11
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.11    # must match the NFS server deployed on the master (see NFS_SERVER above)
            path: /data
EOF
# RBAC
cat > /root/k8s_yaml/storageclass/nfs-client-rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
EOF
# Create the StorageClass, backed by the nfs-client-provisioner
cat > /root/k8s_yaml/storageclass/nfs-client-class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
EOF
  1. Create the resources (pre-pull the image: quay.io/external_storage/nfs-client-provisioner:latest)
kubectl create -f .
  1. Create a PVC: the yaml adds the storage-class annotation (the class can also be made the default, as shown after the yaml below)
cat > /root/k8s_yaml/storageclass/test_pvc1.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
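Create the PVC and confirm that a PV is provisioned automatically; if you want this class to be the cluster default so the annotation becomes unnecessary, it can be patched (a sketch):

kubectl create -f test_pvc1.yaml
kubectl get pvc,pv
# Optional: mark the class as the cluster default
kubectl patch storageclass course-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'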

Integrating Jenkins with k8s

Jenkins is deployed on a physical host (it changes frequently), and k8s now enforces authentication:

  • Option 1: install a k8s authentication plugin in Jenkins
  • Option 2: control k8s remotely: use a kubectl of the same version and point it at client credentials (a kubeconfig), for example
kubectl --kubeconfig='kubelet.kubeconfig' get nodes

With kubeadm, the credentials are located at /etc/kubernetes/admin.conf.
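A minimal sketch of what a Jenkins build step could run under option 2 (the kubeconfig path and manifest name here are only illustrative):

# On the Jenkins host: install a kubectl matching the cluster version,
# copy a kubeconfig (e.g. kubelet.kubeconfig or admin.conf), then call kubectl from the job
kubectl --kubeconfig=/var/lib/jenkins/.kube/config get nodes
kubectl --kubeconfig=/var/lib/jenkins/.kube/config apply -f k8s_deploy.yaml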