
K8s Study Notes: Isolation and Restriction

Notes on the Geek Time (极客时间) course 《深入剖析Kubernetes》 (Deep Dive into Kubernetes).

Working on the principle that reading something a thousand times is no substitute for doing it once by hand.

Corresponding chapter: 06 | 白话容器基础(二):隔离与限制 (Container Basics in Plain Language, Part 2: Isolation and Restriction)

Isolation and Restriction

cgroup mount points

$ mount -t cgroup
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
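
The output above is the cgroup v1 layout, with one mount point per subsystem (controller). On newer distributions you may instead find a single unified cgroup v2 hierarchy; a quick way to tell them apart (a minimal check, not part of the original walkthrough) is the filesystem type of /sys/fs/cgroup:

$ stat -fc %T /sys/fs/cgroup
tmpfs
# "tmpfs" matches the per-controller v1 mounts listed above; "cgroup2fs" would indicate cgroup v2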

Inspect the cpu subsystem's control files

$ cd /sys/fs/cgroup
$ ls cpu
cgroup.clone_children cpuacct.stat cpuacct.usage_percpu cpuacct.usage_sys cpu.cfs_quota_us docker system.slice
cgroup.procs cpuacct.usage cpuacct.usage_percpu_sys cpuacct.usage_user cpu.shares notify_on_release tasks
cgroup.sane_behavior cpuacct.usage_all cpuacct.usage_percpu_user cpu.cfs_period_us cpu.stat release_agent user.slice
$ cat cpu/cpu.cfs_period_us
100000
$ cat cpu/cpu.cfs_quota_us
-1
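
cfs_period_us and cfs_quota_us together define a hard ceiling: within each scheduling period (100000 us = 100 ms here) the tasks in the cgroup may run for at most quota microseconds, and -1 means unlimited. A quick back-of-the-envelope check of the 20% cap used later in these notes (illustrative arithmetic only):

$ # quota / period = maximum CPU share
$ echo "scale=2; 20000 / 100000" | bc
.20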

Create a container directory under the cpu directory

$ cd cpu
$ mkdir container
$ ls container/
cgroup.clone_children cpuacct.usage_all cpuacct.usage_sys cpu.shares
cgroup.procs cpuacct.usage_percpu cpuacct.usage_user cpu.stat
cpuacct.stat cpuacct.usage_percpu_sys cpu.cfs_period_us notify_on_release
cpuacct.usage cpuacct.usage_percpu_user cpu.cfs_quota_us tasks

Notice that as soon as the directory is created, the kernel automatically populates it with a full set of control files.

Start an endless while loop to burn CPU, then check CPU usage with top

$ while : ; do : ; done &
[1] 16508
$ top
top - 09:03:06 up 3:32, 4 users, load average: 0.28, 0.07, 0.02
Tasks: 105 total, 2 running, 61 sleeping, 0 stopped, 0 zombie
%Cpu(s): 99.7 us, 0.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1009256 total, 208232 free, 220168 used, 580856 buff/cache
KiB Swap: 2018300 total, 2008256 free, 10044 used. 647692 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16508 root 20 0 21640 3180 1216 R 99.7 0.3 0:17.88 bash

On some virtual machines you will see CPU usage of 50%, 25%, and so on instead of 99%; think about why that is (hint: how many CPU cores does the VM have?).

Use the container cgroup to limit the while loop's CPU usage

$ echo 20000 > container/cpu.cfs_quota_us
$ echo 16508 > container/tasks
$ top
top - 09:06:24 up 3:35, 4 users, load average: 0.97, 0.52, 0.21
Tasks: 105 total, 2 running, 61 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.0 us, 0.0 sy, 0.0 ni, 79.7 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 1009256 total, 208636 free, 219736 used, 580884 buff/cache
KiB Swap: 2018300 total, 2008256 free, 10044 used. 648120 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16508 root 20 0 21640 3180 1216 R 20.6 0.3 3:31.08 bash
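
To confirm that the shell really landed in the new cgroup, you can check /proc/<pid>/cgroup and the cgroup's own accounting files. A minimal sketch (PID 16508 is from this run; yours will differ):

$ grep cpuacct /proc/16508/cgroup
# expect a line ending in ':cpu,cpuacct:/container'
$ cat container/cpuacct.usage
# cumulative CPU time consumed by tasks in this cgroup, in nanoseconds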

Now try the same thing with a container

$ docker run -it --cpu-period=100000 --cpu-quota=20000 ubuntu /bin/bash
root@26cbffdcb5bf:/# cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
root@26cbffdcb5bf:/# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
20000
# On the host
$ cat /sys/fs/cgroup/cpu/docker/26cbffdcb5bf2f9d9ecdc9207f4211c8f5b3cfbc39d83c77ed4666db3ca0bac3/cpu.cfs_period_us
100000
$ cat /sys/fs/cgroup/cpu/docker/26cbffdcb5bf2f9d9ecdc9207f4211c8f5b3cfbc39d83c77ed4666db3ca0bac3/cpu.cfs_quota_us
20000
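
Docker also provides the --cpus shorthand, which fills in the same pair of files; on a cgroup v1 host, --cpus=0.2 is equivalent to --cpu-period=100000 --cpu-quota=20000 (a sketch, not taken from the original notes; <container_id> stands in for the actual container ID):

$ docker run -it --cpus=0.2 ubuntu /bin/bash
root@<container_id>:/# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
20000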

Comparing top inside and outside the container

# Inside the container
root@26cbffdcb5bf:/# while : ; do : ; done &
[1] 11
root@26cbffdcb5bf:/# top
top - 09:10:34 up 3:39, 0 users, load average: 0.01, 0.21, 0.15
Tasks: 3 total, 2 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.3 us, 0.0 sy, 0.0 ni, 79.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 985.6 total, 163.9 free, 251.5 used, 570.2 buff/cache
MiB Swap: 1971.0 total, 1961.2 free, 9.8 used. 595.9 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11 root 20 0 4224 560 0 R 20.3 0.1 0:01.21 bash
1 root 20 0 4224 3504 2944 S 0.0 0.3 0:00.19 bash
12 root 20 0 6080 3244 2740 R 0.0 0.3 0:00.00 top

# On the host
$ top
top - 09:11:36 up 3:40, 5 users, load average: 0.10, 0.20, 0.15
Tasks: 111 total, 2 running, 66 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.0 us, 0.3 sy, 0.0 ni, 79.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1009256 total, 167472 free, 257600 used, 584184 buff/cache
KiB Swap: 2018300 total, 2008256 free, 10044 used. 610176 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16755 root 20 0 4224 560 0 R 19.9 0.1 0:13.62 bash
$ pstree -g
systemd(1)─┬─VGAuthService(562)
├─accounts-daemon(866)─┬─{accounts-daemon}(866)
│ └─{accounts-daemon}(866)
├─atd(928)
├─containerd(1055)─┬─containerd-shim(16557)─┬─bash(16585)───bash(16755)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ ├─{containerd-shim}(16557)
│ │ └─{containerd-shim}(16557)

As you can see, the view inside the container is consistent with what the host sees: in both, the busy loop is capped at about 20% of one CPU.
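
When you are done experimenting, remember to clean up. A cgroup directory can only be removed with rmdir, and only after it no longer contains any tasks; a minimal sketch using the PIDs and container from this run:

$ kill 16508                              # stop the busy loop on the host
$ rmdir /sys/fs/cgroup/cpu/container      # remove the now-empty cgroup
$ docker rm -f 26cbffdcb5bf               # stop and remove the test container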

Summary

These notes used a cgroup to limit a single process's CPU usage. Understanding how that CPU-limiting mechanism works for an ordinary process makes it clear how Docker uses cgroups to restrict the resources a container can use.
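
The same pattern applies to the other subsystems seen in the mount table, such as memory. A hedged sketch (standard Docker flag and cgroup v1 path, but not part of the original notes; <container_id> is whatever ID the run returns):

$ docker run -itd -m 100m ubuntu /bin/bash
<container_id>
$ cat /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes
104857600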