How k8s pod resource quotas affect scheduling
阿新 • Published: 2021-10-29
1. How pod resource quotas affect scheduling
- Container resource limits (the maximum a container may consume):
  - resources.limits.cpu
  - resources.limits.memory
- Minimum resources the container needs; the scheduler uses these requests when deciding which node can host the pod:
  - resources.requests.cpu
  - resources.requests.memory

Note: CPU can be written in millicores (m) or as a decimal number of cores, e.g. 0.5 = 500m, 1 = 1000m.
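The unit conversion in the note above can be sketched as a tiny shell helper (illustrative only; the `cores_to_milli` name is made up for this example, not a kubectl feature):

```shell
# Convert a (possibly fractional) core count to millicores: 1 core = 1000m.
cores_to_milli() {
  awk -v c="$1" 'BEGIN { printf "%dm\n", c * 1000 }'
}

cores_to_milli 0.5   # prints 500m
cores_to_milli 2     # prints 2000m
```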
Reference example:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:          # minimum resources needed to schedule the container
        memory: "64Mi"
        cpu: "250m"
      limits:            # maximum resources the container may use
        memory: "128Mi"  # memory capped at 128Mi
        cpu: "500m"      # CPU capped at 500m = 0.5 cores
2. Worked example
Write the example manifest:

[root@k8s-master yaml]# vim request-pod.yaml
[root@k8s-master yaml]# cat request-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:          # minimum resources needed to schedule the container
        memory: "64Mi"
        cpu: "250m"
      limits:            # maximum resources the container may use
        memory: "128Mi"  # memory capped at 128Mi
        cpu: "500m"      # CPU capped at 500m = 0.5 cores
Create the pod:

[root@k8s-master yaml]# kubectl apply -f request-pod.yaml
pod/web created
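As an aside, the manifest can be checked before anything is created; `--dry-run` is a standard kubectl flag, and on the v1.19 cluster shown here it takes a value:

```shell
# Client-side check: parse and print the object without sending a create request
kubectl apply -f request-pod.yaml --dry-run=client

# Server-side check: the API server runs admission/validation but creates nothing
kubectl apply -f request-pod.yaml --dry-run=server
```

These commands assume a reachable cluster and the request-pod.yaml from the step above.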
Check the pod:

[root@k8s-master yaml]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
init-demo    1/1     Running   0          21h
pod-envars   1/1     Running   2          22h
probe-demo   1/1     Running   0          24h
web          1/1     Running   0          14s
Verify the resource requests and limits the pod was started with:

[root@k8s-master yaml]# kubectl describe pods web
Name:         web
Namespace:    default
Priority:     0
Node:         k8s-node2/192.168.0.203
Start Time:   Mon, 30 Nov 2020 15:04:13 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 10.244.169.158/32
              cni.projectcalico.org/podIPs: 10.244.169.158/32
Status:       Running
IP:           10.244.169.158
IPs:
  IP:  10.244.169.158
Containers:
  web:
    Container ID:   docker://f895befae98eedb7fe8fc0b26852afd4023a793b8a7f1abbab7d58851f314651
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:6b1daa9462046581ac15be20277a7c75476283f969cb3a61c8725ec38d3b01c3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 30 Nov 2020 15:04:23 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     250m
      memory:  64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8pppk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-8pppk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8pppk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From                Message
  ----    ------     ----   ----                -------
  Normal  Scheduled  4m28s  default-scheduler   Successfully assigned default/web to k8s-node2
  Normal  Pulling    4m23s  kubelet, k8s-node2  Pulling image "nginx"
  Normal  Pulled     4m18s  kubelet, k8s-node2  Successfully pulled image "nginx" in 4.71128787s
  Normal  Created    4m18s  kubelet, k8s-node2  Created container web
  Normal  Started    4m18s  kubelet, k8s-node2  Started container web
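Instead of scanning the full describe output, the same fields can be queried directly with kubectl's standard jsonpath output option (the pod name web matches the example above):

```shell
# Requests/limits as recorded in the pod spec
kubectl get pod web -o jsonpath='{.spec.containers[0].resources}{"\n"}'

# QoS class the cluster derived from the spec (Burstable here, because
# requests are set lower than limits)
kubectl get pod web -o jsonpath='{.status.qosClass}{"\n"}'
```

These commands assume a reachable cluster with the pod from the steps above.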
How to check a node's resource quota:

[root@k8s-master pod]# kubectl describe node k8s-node01
Name:               k8s-node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node01
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 172.17.0.13/20
                    projectcalico.org/IPv4IPIPTunnelAddr: 10.244.85.192
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 04 Aug 2021 14:31:44 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-node01
  AcquireTime:     <unset>
  RenewTime:       Mon, 16 Aug 2021 22:33:39 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 04 Aug 2021 14:35:31 +0800   Wed, 04 Aug 2021 14:35:31 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 16 Aug 2021 22:28:53 +0800   Wed, 04 Aug 2021 14:31:44 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 16 Aug 2021 22:28:53 +0800   Wed, 04 Aug 2021 14:31:44 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 16 Aug 2021 22:28:53 +0800   Wed, 04 Aug 2021 14:31:44 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 16 Aug 2021 22:28:53 +0800   Wed, 04 Aug 2021 14:34:45 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.13
  Hostname:    k8s-node01
Capacity:
  cpu:                2
  ephemeral-storage:  51473868Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3880180Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  47438316671
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3777780Ki
  pods:               110
System Info:
  Machine ID:                 96285cf085224a608d1d6ad0cbf21e97
  System UUID:                96285CF0-8522-4A60-8D1D-6AD0CBF21E97
  Boot ID:                    c84c93c7-eb6f-4e92-884b-01c8eddfb4f8
  Kernel Version:             3.10.0-1160.11.1.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.19.0
  Kube-Proxy Version:         v1.19.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (4 in total)
  Namespace    Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system  calico-kube-controllers-5f6cfd688c-pg27v  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
  kube-system  calico-node-xdc7h                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         12d
  kube-system  coredns-6d56c8448f-msj5b                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11d
  kube-system  kube-proxy-mcnn7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                350m (17%)  0 (0%)
  memory             70Mi (1%)   170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
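The scheduler places pods based on requests alone, not live usage. Plugging in the numbers from the describe output above (a sketch with the values hard-coded; on a real cluster they come from the Allocatable and Allocated resources sections):

```shell
# Node k8s-node01: Allocatable cpu = 2 cores = 2000m, and the non-terminated
# pods already request 350m, so roughly 1650m of CPU requests remain.
allocatable_m=2000
requested_m=350
free_m=$((allocatable_m - requested_m))
echo "${free_m}m of CPU requests still schedulable on this node"   # 1650m
```

A new pod whose resources.requests.cpu exceeds this headroom on every node stays Pending with a FailedScheduling event, no matter how idle the CPUs actually are.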