
istio kiali Affinity Scheduling

1. Node Scheduling

Before diving into kiali affinity scheduling, let's start with a simple example of scheduling a pod onto a specific node:

 

Labeling the node

List all current k8s nodes:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5h11m   v1.18.1
k8s-node01   Ready    <none>   5h8m    v1.18.1

 

Now give the k8s-node01 node a label with the content name: xiao, using the following command:

kubectl label node k8s-node01 name=xiao
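To confirm the label was applied, you can list nodes filtered by that label (a quick sanity check; only k8s-node01 should appear):

```
kubectl get nodes -l name=xiao
```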

 

Writing the pod

Write the resource file flaskapp-deployment.yaml, which uses nodeSelector to schedule the pod onto the k8s-node01 node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  selector:
    matchLabels:
      run: flaskapp-1
  replicas: 1
  template:
    metadata:
      labels:
        run: flaskapp-1
    spec:
      containers:
      - name: flaskapp-1
        image: jcdemo/flaskapp
        ports:
        - containerPort: 5000
      nodeSelector:
        name: xiao

 

Deploy flaskapp-deployment.yaml; as expected, the pod is scheduled onto the k8s-node01 node:

[root@k8s-master ~]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
flaskapp-1-58b69c66f9-hv498      1/1     Running   0          7m30s   10.244.1.30   k8s-node01   <none>           <none>

 

2. kiali Affinity Scheduling

The nodeSelector example above is the simplest form of k8s scheduling.

An example node-affinity scheduling policy:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0

 

An analogy

Here is an everyday analogy. In the past, when you went to the hospital, a patient (pod) could not choose a doctor (node): you got whoever the queue assigned you, with no affinity at all. Nowadays you can register online and pick the doctor you prefer, which gives you affinity.

 

When choosing a doctor, a patient may act in one of two ways: a hard requirement (required), insisting on a particular doctor and waiting indefinitely even if that doctor is busy; or a soft preference (preferred), favoring one doctor but accepting another if the first is truly unavailable.

 

Node Affinity Scheduling (NodeAffinity)

The theory below maps onto the analogy above.

Node affinity, i.e. NodeAffinity, controls which nodes a pod can or cannot be deployed on.

Node-affinity scheduling policies come in two flavors: hard and soft. With a hard policy, if no node satisfies the conditions, the scheduler keeps retrying until one does; with a soft policy, if no node satisfies the conditions, the pod simply ignores the rule and scheduling proceeds anyway.

 

The syntax for the hard and soft node-affinity policies is introduced below.

Hard policy (keyword: required)

requiredDuringSchedulingIgnoredDuringExecution:

The pod must be placed on a node that satisfies the conditions; if no node qualifies, the scheduler retries indefinitely.

 

Soft policy (keyword: preferred)

preferredDuringSchedulingIgnoredDuringExecution:

The pod is preferentially placed on a node that satisfies the conditions; if no node qualifies, the conditions are ignored and the pod is scheduled onto another node.
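The weight field under preferredDuringSchedulingIgnoredDuringExecution ranges from 1 to 100; for every node that satisfies a preference, the scheduler adds that preference's weight to the node's score, so higher-weight terms dominate the choice. A sketch with two weighted preferences (the label keys disktype and zone here are illustrative, not from the examples above):

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 80              # strongly prefer SSD-labeled nodes (example label)
      preference:
        matchExpressions:
        - key: disktype
          operator: In
          values: ["ssd"]
    - weight: 20              # mildly prefer a particular zone (example label)
      preference:
        matchExpressions:
        - key: zone
          operator: In
          values: ["az1"]
```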

 

kiali Node Affinity Scheduling

Example 1

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine

This creates a Deployment with 3 replicas and the pod anti-affinity rule shown above. The pods carry the label app: store, so the scheduler will not place one of them on any node that is already running a pod labeled app: store. As a result, the three replicas end up on three different host nodes.
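After applying this manifest, the spread can be verified with kubectl; each replica should land on a distinct node (assuming the cluster has at least 3 schedulable nodes, since a required rule leaves any surplus pods Pending):

```
kubectl get pods -l app=store -o wide
```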

 

Example 2

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.12-alpine

Building on the previous example, this adds a pod-affinity requirement under requiredDuringSchedulingIgnoredDuringExecution with topologyKey: "kubernetes.io/hostname": the pod must also be scheduled onto a node that is already running a pod with the label app=store.

 

Example 3

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-server
  replicas: 3
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: hub.easystack.io/library/nginx:1.9.0

Some applications need their pod replicas to share a cache, so the pods must run on the same node; the pod-affinity rule above co-locates each web-server pod with a pod labeled app=web-store.
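Note that required anti-affinity rules like the one in Example 1 leave surplus replicas Pending when there are fewer nodes than replicas. If spreading pods is desirable but not mandatory, the rule can be softened to the preferred form, a sketch:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:          # in the preferred form, the term is nested here
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["store"]
        topologyKey: "kubernetes.io/hostname"
```

With this variant the scheduler spreads pods across hosts when it can, but still places extra replicas on already-used nodes rather than leaving them unscheduled.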

 

 

References:

https://blog.51cto.com/14268033/2487240

https://blog.csdn.net/jettery/article/details/79003562
