X. Docker Containers: Disk, Memory, and CPU Resource Limits in Practice
By 阿新 • Published 2020-12-24
node1 192.168.31.101 ----- docker version: Docker version 1.13.1, build cccb291/1.13.1
node2 192.168.31.102 ----- docker version: Docker version 19.03.8, build afacb8b (docker-ce)
[root@node1 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND         CREATED        STATUS          PORTS   NAMES
aca49b0226ad   web:v1   "sleep 9999d"   25 hours ago   Up 19 seconds           web01
Disk on the host
[root@node1 ~]# df -Th
Filesystem     Type      Size  Used  Avail Use% Mounted on
/dev/sda2      xfs        20G  3.0G   17G  15% /
devtmpfs       devtmpfs  233M     0  233M   0% /dev
tmpfs          tmpfs     243M     0  243M   0% /dev/shm
tmpfs          tmpfs     243M  5.2M  238M   3% /run
tmpfs          tmpfs     243M     0  243M   0% /sys/fs/cgroup
/dev/sda1      xfs       497M  117M  380M  24% /boot
tmpfs          tmpfs      49M     0   49M   0% /run/user/0
Disk inside a Docker container on the host
[root@node1 ~]# docker exec web01 df -Th
Filesystem  Type     Size  Used  Avail Use% Mounted on
overlay     overlay   20G  3.0G   17G  15% /
tmpfs       tmpfs    243M     0  243M   0% /dev
tmpfs       tmpfs    243M     0  243M   0% /sys/fs/cgroup
/dev/sda2   xfs       20G  3.0G   17G  15% /etc/hosts
shm         tmpfs     64M     0   64M   0% /dev/shm
tmpfs       tmpfs    243M     0  243M   0% /proc/acpi
tmpfs       tmpfs    243M     0  243M   0% /proc/scsi
tmpfs       tmpfs    243M     0  243M   0% /sys/firmware
Memory on the host
[root@node1 ~]# free -mh
              total  used  free  shared  buff/cache  available
Mem:           485M   98M  184M    5.2M        203M       345M
Swap:          2.0M    0B  2.0M
Memory inside a Docker container on the host

[root@node1 ~]# docker exec web01 free -mh
              total  used  free  shared  buff/cache  available
Mem:           485M  105M  173M    5.2M        206M       338M
Swap:          2.0M    0B  2.0M
CPU on the host
# Number of physical CPUs (sockets)
[root@node1 ~]# grep 'physical id' /proc/cpuinfo | sort | uniq | wc -l
1
# Number of cores per CPU
[root@node1 ~]# grep 'cpu cores' /proc/cpuinfo | uniq | awk -F ':' '{print $2}'
1
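The two checks above can be folded into one helper. A sketch; `cpu_summary` is a hypothetical name, and the patterns match the usual /proc/cpuinfo field names (some virtualized guests omit "physical id", in which case sockets reports 0):

```shell
# Summarize CPU topology from cpuinfo-formatted input
# (a file argument, or stdin when none is given).
cpu_summary() {
    awk -F': *' '
        /^processor/   { logical++ }          # one line per logical CPU
        /^physical id/ { sockets[$2] = 1 }    # collect distinct socket ids
        /^cpu cores/   { cores = $2 }         # cores per socket
        END {
            n = 0; for (s in sockets) n++
            printf "sockets=%d cores_per_socket=%d logical=%d\n", n, cores, logical
        }' "$@"
}
```

On the host, run `cpu_summary /proc/cpuinfo`; inside a container the same file reflects the host's CPUs, which is why the docker exec output that follows matches the host's.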
CPU inside a Docker container on the host
[root@node1 ~]# docker exec web01 grep 'physical id' /proc/cpuinfo | sort | uniq | wc -l
1
[root@node1 ~]# docker exec web01 grep 'cpu cores' /proc/cpuinfo | uniq | awk -F ':' '{print $2}'
1
4. Viewing a single container's memory and CPU usage
# Watch container web01's memory and CPU usage in real time
[root@node1 ~]# docker stats web01
CONTAINER  CPU %  MEM USAGE / LIMIT    MEM %  NET I/O            BLOCK I/O      PIDS
web01      0.00%  88 KiB / 485.7 MiB   0.02%  4.29 kB / 1.34 kB  6.64 MB / 0 B  1
(the table keeps refreshing until interrupted)

# Take a single snapshot of web01's memory and CPU usage
[root@node1 ~]# docker stats --no-stream web01
CONTAINER  CPU %  MEM USAGE / LIMIT    MEM %  NET I/O            BLOCK I/O      PIDS
web01      0.00%  88 KiB / 485.7 MiB   0.02%  4.42 kB / 1.42 kB  6.64 MB / 0 B  1
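When only one column matters, the snapshot can be trimmed after the fact. The sample line below is hardcoded from the web01 output above, with the spaces inside "88 KiB" collapsed so plain whitespace field splitting works; this is an illustration, not docker output parsing you should rely on:

```shell
# Extract the MEM % column from a docker-stats-style table row.
# Field layout: NAME CPU% MEM_USAGE / MEM_LIMIT MEM% ...
line='web01 0.00% 88KiB / 485.7MiB 0.02% 4.42kB / 1.42kB 6.64MB / 0B 1'
mem_pct=$(echo "$line" | awk '{print $6}')
echo "$mem_pct"    # prints 0.02%
```

Newer docker releases can select columns natively, e.g. `docker stats --no-stream --format '{{.Name}} {{.MemPerc}}' web01`.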
CPU and memory resource limits
docker run -itd --cpuset-cpus=0-0 -m 4MB --name=test web:v1 /bin/bash

--cpuset-cpus: which logical CPUs the container may run on. Forms like 0-0, 1-1, 2-2... pin the container to a single logical CPU; forms like 0-1, 0-2, 0-3 or 0,1, 0,2, 0,3 grant several logical CPUs, which the scheduler shares among the container's processes.
# Note: pinning a container to one logical CPU makes its CPU usage easy to monitor, while sharing CPUs makes fuller use of the hardware; if you share, choose the CPU scheduling policy carefully!
-m: memory limit for the container

[root@node1 ~]# docker run -itd --cpuset-cpus=0-0 -m 4MB --name=test web:v1 /bin/bash
de30929be801fe3d0262b7a8f2de15234c53bc07b7c8d05d27ea4845b3c5f479
[root@node1 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS         PORTS   NAMES
de30929be801   web:v1   "/bin/bash"   3 seconds ago   Up 2 seconds           test
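The range and comma forms of --cpuset-cpus can be expanded mechanically to see how many CPUs a spec actually grants; `cpuset_count` is a hypothetical helper for illustration, not a docker command:

```shell
# Count how many logical CPUs a --cpuset-cpus spec such as
# "0", "0-1", or "0-2,4" grants a container.
cpuset_count() {
    echo "$1" | tr ',' '\n' | awk -F- '
        NF == 1 { total += 1 }           # single CPU, e.g. "3"
        NF == 2 { total += $2 - $1 + 1 } # range, e.g. "0-2"
        END { print total }'
}

cpuset_count 0-0      # prints 1 (pinned to one logical CPU)
cpuset_count 0-1,3    # prints 3 (shared across three logical CPUs)
```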
Checking the memory limit
[root@node1 ~]# docker stats --no-stream test
CONTAINER  CPU %  MEM USAGE / LIMIT  MEM %  NET I/O          BLOCK I/O      PIDS
test       0.00%  372 KiB / 4 MiB    9.08%  1.34 kB / 734 B  1.53 MB / 0 B  1
# The memory limit is now 4 MiB
[root@node1 ~]# docker run -itd --cpuset-cpus=0-1 -m 4MB --name=test2 web:v1 /bin/bash
1944e0f432d57d4ad48015a74d4b537f6fa76bda09e32d204a4d20a38fa6594a
/usr/bin/docker-current: Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "process_linux.go:327: setting cgroup config for procHooks process caused \"failed to write 0-1 to cpuset.cpus: write /sys/fs/cgroup/cpuset/system.slice/docker-1944e0f432d57d4ad48015a74d4b537f6fa76bda09e32d204a4d20a38fa6594a.scope/cpuset.cpus: permission denied\"".
# Error: the host has only one logical CPU, so assigning the container two (0-1) fails
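This failure can be predicted before invoking docker by comparing the highest index in the cpuset spec with the host's online CPU count; `max_index` is a hypothetical helper, not part of docker:

```shell
# Highest CPU index named in a --cpuset-cpus spec like "0-1" or "0-2,4".
max_index() { echo "$1" | tr ',-' '\n\n' | sort -n | tail -1; }

want=$(max_index 0-1)                # highest index requested
have=$(getconf _NPROCESSORS_ONLN)    # logical CPUs actually online
if [ "$want" -ge "$have" ]; then
    echo "cpuset exceeds this host's $have CPU(s); docker run would fail"
fi
```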
The output above confirms that the container's CPU and memory really are limited.
IV. Limiting a container's disk resources
1. Disk limits require editing the daemon configuration
Docker version 1.13.1 (node1)
In the docker config file /etc/sysconfig/docker (note: not the docker-storage file), append the following to the OPTIONS parameter:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --storage-opt overlay2.size=10G'
On Docker version 19.03.8 (node2), edit the docker config file /usr/lib/systemd/system/docker.service and append the storage option to the ExecStart line:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --storage-opt overlay2.size=10G
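On docker-ce it is usually cleaner to put the option in /etc/docker/daemon.json rather than edit the unit file; a config sketch using the same 10G value (restart the docker service afterwards):

```json
{
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.size=10G"
  ]
}
```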
Restart the docker service
[root@node1 ~]# systemctl restart docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@node1 ~]# tail -fn 50 /var/log/messages
......
Apr 1 06:29:21 node1 dockerd-current: Error starting daemon: error initializing graphdriver: Storage option overlay2.size not supported. Filesystem does not support Project Quota: Failed to set quota limit for projid 1 on /var/lib/docker/overlay2/backingFsBlockDev: function not implemented
......
[root@node2 ~]# systemctl restart docker.service
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@node2 ~]# tail -fn 50 /var/log/messages
......
Apr 1 06:34:29 node2 dockerd: time="2020-04-01T06:34:29.701688085+08:00" level=error msg="[graphdriver] prior storage driver overlay2 failed: Storage Option overlay2.size only supported for backingFS XFS. Found <unknown>"
......
The reason:
overlay2 is Docker's storage driver here; to cap its size, the underlying Linux filesystem must be XFS with directory-level disk quota (project quota) support enabled. By default, a freshly installed system has no disk quotas configured.
What does directory-level disk quota support mean?
It means the filesystem can cap how much disk space a directory and everything under it may consume. How can a directory have a size? Mount a fixed-size disk on the directory and the directory's capacity equals the disk's; with quotas enabled, the filesystem can then hand out fixed-size allocations to the files beneath that directory.
Preparation: back up your docker images
[root@node1 ~]# docker image save busybox > /tmp/busybox.tar
[root@node1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0004ff38

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048    41938943    20456448   83  Linux
/dev/sda3        41938944    41943039        2048   82  Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@node1 ~]# mkfs.xfs -f /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mkdir /data/ -p
[root@node1 ~]# mount -o uquota,prjquota /dev/sdb /data/
[root@node1 ~]# xfs_quota -x -c 'report' /data/
User quota on /data (/dev/sdb)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root                0          0          0     00 [--------]

Project quota on /data (/dev/sdb)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0                  0          0          0     00 [--------]
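Before moving docker onto the new mount, it is worth confirming that prjquota actually took effect; a sketch in which `has_prjquota` is a hypothetical helper and /data is the mount point from the example above:

```shell
# Return success if the given mount point appears in /proc/mounts
# with the prjquota option; without it, dockerd rejects overlay2.size.
has_prjquota() {
    awk -v mp="$1" '$2 == mp && $4 ~ /prjquota/ { found = 1 }
                    END { exit !found }' /proc/mounts 2>/dev/null
}

if has_prjquota /data; then
    echo "/data is mounted with project quotas"
else
    echo "/data is NOT quota-enabled"
fi
```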
Step 6: symlink /data/docker/ into /var/lib
Back up the existing docker directory under /var/lib, then symlink /data/docker into /var/lib. The old /var/lib/docker sits on a filesystem without directory-level quota support, so it is moved aside; the symlink makes /var/lib/docker point at the quota-enabled /data/docker/ directory instead.
cd /var/lib
mv docker docker.bak
mkdir -p /data/docker
ln -s /data/docker/ /var/lib/
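The move-and-symlink step can be rehearsed safely in scratch directories before touching the real /var/lib; a sketch in which $base and $data stand in for /var/lib and /data:

```shell
# Rehearse the mv + symlink migration in throwaway directories.
base=$(mktemp -d)
data=$(mktemp -d)
mkdir -p "$base/docker" "$data/docker"

mv "$base/docker" "$base/docker.bak"   # back up the old directory
ln -s "$data/docker" "$base/docker"    # symlink onto the quota-enabled fs
readlink "$base/docker"                # prints the $data/docker path
```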
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# ps -ef | grep docker
root 3842 1 0 07:11 ? 00:00:00 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-opt overlay2.size=10G --storage-driver overlay2 -b=br0
root 3848 3842 0 07:11 ? 00:00:00 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc --runtime-args --systemd-cgroup=true
root 3929 1759 0 07:13 pts/0 00:00:00 grep --color=auto docker
# The "--storage-opt overlay2.size=10G" argument is present, so the disk limit is in effect
# Load the docker images we backed up earlier
[root@node1 ~]# docker image load -i /tmp/busybox.tar
# Start a container
[root@node1 ~]# docker run -itd --name=test --privileged --cpuset-cpus=0 -m 4M busybox /bin/sh
0c4465b350551011e1dfebd6f8fc057a336ff7980736c60e31871ab67c42ac42
Checking the container's disk size
[root@node1 ~]# docker exec test df -Th
Filesystem  Type     Size    Used   Available Use% Mounted on
overlay     overlay  10.0G   8.0K   10.0G      0%  /
tmpfs       tmpfs    242.9M  0      242.9M     0%  /dev
tmpfs       tmpfs    242.9M  0      242.9M     0%  /sys/fs/cgroup
/dev/sdb    xfs      20.0G   33.6M  20.0G      0%  /etc/resolv.conf
/dev/sdb    xfs      20.0G   33.6M  20.0G      0%  /etc/hostname
/dev/sdb    xfs      20.0G   33.6M  20.0G      0%  /etc/hosts
shm         tmpfs    64.0M   0      64.0M      0%  /dev/shm
/dev/sdb    xfs      20.0G   33.6M  20.0G      0%  /run/secrets
# The container's root overlay is now capped at 10G
[root@node1 ~]# docker stats --no-stream test
CONTAINER  CPU %  MEM USAGE / LIMIT  MEM %  NET I/O        BLOCK I/O  PIDS
test       0.00%  56 KiB / 4 MiB     1.37%  780 B / 734 B  0 B / 0 B  1
1. Disk, CPU, and memory limits can never exceed what the host actually has. For example, this VMware guest has 4G of memory, two CPU cores, and a 20G disk (the quota-capable /data/ directory is only 20G). Because the CentOS system itself needs memory to run, a container should be given at most 3G of memory, no more than two CPUs (0-0, 1-1, or 0-1 are all fine), and less than 20G of disk (ideally under 15G).
2. For disk limits, the partition must support directory-level disk quotas (XFS project quota).
3. After reconfiguring a disk for directory-level quotas and restarting the docker service, docker is re-initialized, so remember to back up your docker images first.