
K8s Study Notes: Workload Replicas and Horizontal Scaling

Notes from the Geek Time course 《深入剖析Kubernetes》 (Deep Dive into Kubernetes).

On the principle that reading something a thousand times is no substitute for doing it once, I ran every step myself and recorded the results.

Corresponding chapter: 17 | 经典PaaS的记忆:作业副本与水平扩展 (Memories of classic PaaS: workload replicas and horizontal scaling)

ReplicaSet

Continuing from the previous post, where we created a Deployment named nginx-dp:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dp-67f857c57f-4wptz 1/1 Running 0 7m34s
nginx-dp-67f857c57f-qx7l7 1/1 Running 0 25m

First, let's look at the ReplicaSet:

$ kubectl get replicasets
NAME DESIRED CURRENT READY AGE
nginx-dp-67f857c57f 2 2 2 39m

As you can see, when we created the Deployment, k8s created a ReplicaSet named nginx-dp-67f857c57f for us. Doesn't the ID in that name look familiar? It is exactly the ID that appears in the Pods' labels field.

Creating a ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-test
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Never
$ kubectl apply -f rs-test.yaml
replicaset.apps/rs-test created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dp-67f857c57f-4wptz 1/1 Running 0 33m
nginx-dp-67f857c57f-qx7l7 1/1 Running 0 51m
rs-test-8zd6q 0/1 Terminating 0 4s
rs-test-f5vvr 0/1 Terminating 0 4s
rs-test-rkh7r 0/1 Terminating 0 4s
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-67f857c57f 2 2 2 51m
rs-test 0 0 0 14s

WHAT?!

The creation actually failed. That's what I get for constantly flirting with disaster: who told me to reuse the same labels as the Deployment from the previous lesson?
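
The root cause can be illustrated without a cluster: a ReplicaSet counts every running pod whose labels match its selector, no matter which object created them, so reusing app: nginx puts the new ReplicaSet in contention with the existing Deployment's pods. A minimal sketch of selector matching (plain Python, not the real controller; the exact deletion sequence depends on controller internals):

```python
def matches(selector: dict, labels: dict) -> bool:
    """True if every key/value in the selector appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Pods already owned by the nginx-dp Deployment...
existing = [{"app": "nginx", "pod-template-hash": "67f857c57f"}] * 2
# ...plus the three pods the new ReplicaSet just created.
new = [{"app": "nginx"}] * 3

selector = {"app": "nginx"}  # same labels as the Deployment: bad idea
seen = sum(matches(selector, p) for p in existing + new)
print(seen)  # 5 pods match, but the ReplicaSet only wants 3
```

With five matching pods against a desired count of three, something has to be deleted, which is why each workload needs its own unique labels.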

Let's try again:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-rs
  labels:
    app: nginx-new
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-new
  template:
    metadata:
      labels:
        app: nginx-new
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Never
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-67f857c57f 2 2 2 54m
test-rs 3 3 3 7m30s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dp-67f857c57f-4wptz 1/1 Running 0 37m
nginx-dp-67f857c57f-qx7l7 1/1 Running 0 54m
test-rs-9wfx8 1/1 Running 0 7m33s
test-rs-qbckm 1/1 Running 0 7m33s
test-rs-rxjbv 1/1 Running 0 7m33s

This time, a ReplicaSet named test-rs was created, and that ReplicaSet in turn created three pods.

Horizontal Scaling

Edit the test-rs ReplicaSet, changing replicas: 3 to replicas: 4:

$ kubectl apply -f rs-test.yaml
replicaset.apps/test-rs configured
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-rs-9wfx8 1/1 Running 0 37m
test-rs-dwm9j 1/1 Running 0 4s
test-rs-qbckm 1/1 Running 0 37m
test-rs-rxjbv 1/1 Running 0 37m

After changing the count to 4, a new pod test-rs-dwm9j was created automatically.

Now change it again, to replicas: 2:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-rs-9wfx8 1/1 Running 0 41m
test-rs-rxjbv 1/1 Running 0 41m
$ kubectl describe rs test-rs
Name: test-rs
Namespace: default
Selector: app=nginx-new
Labels: app=nginx-new
Annotations:
Replicas: 2 current / 2 desired
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 41m replicaset-controller Created pod: test-rs-qbckm
Normal SuccessfulCreate 41m replicaset-controller Created pod: test-rs-9wfx8
Normal SuccessfulCreate 41m replicaset-controller Created pod: test-rs-rxjbv
Normal SuccessfulCreate 4m41s replicaset-controller Created pod: test-rs-dwm9j
Normal SuccessfulDelete 16s replicaset-controller Deleted pod: test-rs-dwm9j
Normal SuccessfulDelete 16s replicaset-controller Deleted pod: test-rs-qbckm

From the ReplicaSet's Events we can see:

  1. When replicas was changed to 4, one new pod was created
  2. When it was changed to 2, two pods were deleted
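
Both events follow from the same reconciliation rule: compare the desired replica count with the number of pods currently matching the selector, then create or delete the difference. A toy version of that rule (an illustration, not the controller's actual code):

```python
def reconcile(desired: int, current: int) -> tuple[str, int]:
    """Return the action a ReplicaSet controller would take, as (verb, count)."""
    if current < desired:
        return ("create", desired - current)
    if current > desired:
        return ("delete", current - desired)
    return ("noop", 0)

print(reconcile(4, 3))  # ('create', 1) -- replicas bumped from 3 to 4
print(reconcile(2, 4))  # ('delete', 2) -- replicas cut from 4 to 2
```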

Rolling Update

Create a Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
$ kubectl apply -f nginx-dp.yaml --record
deployment.apps/nginx-dp created
# Note: run rollout status immediately after apply, or the rollout will already be finished
$ kubectl rollout status deployment/nginx-dp
Waiting for deployment "nginx-dp" rollout to finish: 0 of 3 updated replicas are available...
Waiting for deployment "nginx-dp" rollout to finish: 1 of 3 updated replicas are available...
Waiting for deployment "nginx-dp" rollout to finish: 2 of 3 updated replicas are available...
deployment "nginx-dp" successfully rolled out
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-d46f5678b 3 3 3 3m41s
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-dp 3/3 3 3 3m45s

We can see:

  1. The ReplicaSet columns are: Desired, Current, Ready
  2. The Deployment columns are: Ready, Up-to-date, Available

Note: this differs from the output shown in the course

Modify the image version

$ kubectl edit deployment/nginx-dp
....
    spec:
      containers:
      - image: nginx:stable
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-7fb9ff5685 1 1 0 41s
nginx-dp-d46f5678b 3 3 3 8m2s
$ kubectl rollout status deployment/nginx-dp
Waiting for deployment "nginx-dp" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-dp" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-dp" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-dp" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-dp" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-dp" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-dp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-dp" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-dp" successfully rolled out
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-7fb9ff5685 3 3 3 10m
nginx-dp-d46f5678b 0 0 0 17m
$ kubectl describe deploy nginx-dp
...
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
...
OldReplicaSets: <none>
NewReplicaSet: nginx-dp-7fb9ff5685 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 18m deployment-controller Scaled up replica set nginx-dp-d46f5678b to 3
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 1
Normal ScalingReplicaSet 7m22s deployment-controller Scaled down replica set nginx-dp-d46f5678b to 2
Normal ScalingReplicaSet 7m22s deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 2
Normal ScalingReplicaSet 3m34s deployment-controller Scaled down replica set nginx-dp-d46f5678b to 1
Normal ScalingReplicaSet 3m34s deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 3
Normal ScalingReplicaSet 3m17s deployment-controller Scaled down replica set nginx-dp-d46f5678b to 0

From all of the output above, we can see what happens after changing the Deployment's image:

  1. k8s created a new ReplicaSet: nginx-dp-7fb9ff5685
  2. It scaled the new ReplicaSet up one pod at a time while scaling the old one down one pod at a time
  3. When the rolling update finished, the old RS nginx-dp-d46f5678b was scaled to 0 but kept around rather than deleted automatically
  4. The Deployment's NewReplicaSet field changed to nginx-dp-7fb9ff5685
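
The RollingUpdateStrategy: 25% max unavailable, 25% max surge line from kubectl describe explains this exact sequence: with 3 replicas, maxSurge rounds 0.75 up to 1 extra pod, while maxUnavailable rounds 0.75 down to 0, so the controller always has to bring one new pod up before it may take one old pod down. The rounding convention (surge rounds up, unavailable rounds down) can be checked directly:

```python
import math

def rolling_update_bounds(replicas: int, surge_pct: int = 25, unavail_pct: int = 25):
    """maxSurge rounds up, maxUnavailable rounds down (the Kubernetes convention)."""
    max_surge = math.ceil(replicas * surge_pct / 100)
    max_unavailable = math.floor(replicas * unavail_pct / 100)
    return max_surge, max_unavailable

print(rolling_update_bounds(3))   # (1, 0): one extra pod, never fewer than 3 ready
print(rolling_update_bounds(10))  # (3, 2)
```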

Set a broken image

$ kubectl set image deployment/nginx-dp nginx=nginx:1.91
deployment.apps/nginx-dp image updated
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 1 1 0 6s
nginx-dp-7fb9ff5685 3 3 3 24m
nginx-dp-d46f5678b 0 0 0 31m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dp-6d6678fb55-4xxpv 0/1 ErrImagePull 0 72s
nginx-dp-7fb9ff5685-25xlv 1/1 Running 0 25m
nginx-dp-7fb9ff5685-jxvt8 1/1 Running 0 17m
nginx-dp-7fb9ff5685-ld64t 1/1 Running 0 21m
$ kubectl describe deployment nginx-dp
...
OldReplicaSets: nginx-dp-7fb9ff5685 (3/3 replicas created)
NewReplicaSet: nginx-dp-6d6678fb55 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 33m deployment-controller Scaled up replica set nginx-dp-d46f5678b to 3
Normal ScalingReplicaSet 25m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 1
Normal ScalingReplicaSet 21m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 2
Normal ScalingReplicaSet 21m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 2
Normal ScalingReplicaSet 18m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 1
Normal ScalingReplicaSet 18m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 3
Normal ScalingReplicaSet 17m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 0
Normal ScalingReplicaSet 98s deployment-controller Scaled up replica set nginx-dp-6d6678fb55 to 1

We can see:

  1. Yet another new RS was created: nginx-dp-6d6678fb55
  2. A pod named nginx-dp-6d6678fb55-4xxpv was created, but its status is ErrImagePull
  3. The Deployment's OldReplicaSets is the previous nginx-dp-7fb9ff5685, and NewReplicaSet is the new nginx-dp-6d6678fb55
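
This also shows why a bad image stalls the rollout rather than breaking the service: the new ReplicaSet is allowed at most maxSurge = 1 pod beyond the desired count, and no old pod is removed until enough new pods report Ready, which an ErrImagePull pod never does. A rough, simplified model of that decision (a hypothetical helper, not the real controller logic):

```python
def next_action(old_ready: int, new_total: int, new_ready: int,
                desired: int = 3, max_surge: int = 1, max_unavailable: int = 0) -> str:
    """One decision step of a surge-first rolling update (simplified)."""
    total = old_ready + new_total
    ready = old_ready + new_ready
    if total < desired + max_surge:
        return "scale up new RS"    # surge budget still available
    if ready > desired - max_unavailable:
        return "scale down old RS"  # enough ready pods to drop an old one
    return "wait"                   # stalled until a new pod turns Ready

# nginx:1.91 cannot be pulled: the one surged pod never becomes Ready.
print(next_action(old_ready=3, new_total=1, new_ready=0))  # wait
# With a pullable image the same state would make progress:
print(next_action(old_ready=3, new_total=1, new_ready=1))  # scale down old RS
```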

Rollout

$ kubectl rollout undo deployment/nginx-dp
deployment.apps/nginx-dp rolled back
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 6m49s
nginx-dp-7fb9ff5685 3 3 3 30m
nginx-dp-d46f5678b 0 0 0 38m
$ kubectl describe deployment/nginx-dp
...
OldReplicaSets: <none>
NewReplicaSet: nginx-dp-7fb9ff5685 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 38m deployment-controller Scaled up replica set nginx-dp-d46f5678b to 3
Normal ScalingReplicaSet 31m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 1
Normal ScalingReplicaSet 27m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 2
Normal ScalingReplicaSet 27m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 2
Normal ScalingReplicaSet 23m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 1
Normal ScalingReplicaSet 23m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 3
Normal ScalingReplicaSet 23m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 0
Normal ScalingReplicaSet 7m2s deployment-controller Scaled up replica set nginx-dp-6d6678fb55 to 1
Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-dp-6d6678fb55 to 0
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dp-7fb9ff5685-25xlv 1/1 Running 0 40m
nginx-dp-7fb9ff5685-jxvt8 1/1 Running 0 33m
nginx-dp-7fb9ff5685-ld64t 1/1 Running 0 36m

We can see:

  1. The desired count of the nginx-dp-6d6678fb55 RS dropped to 0
  2. The corresponding pod was deleted automatically

history

$ kubectl rollout history deployment/nginx-dp
deployment.apps/nginx-dp
REVISION CHANGE-CAUSE
1 kubectl apply --filename=nginx-dp.yaml --record=true
3 kubectl apply --filename=nginx-dp.yaml --record=true
4 kubectl apply --filename=nginx-dp.yaml --record=true

$ kubectl rollout history deployment/nginx-dp --revision=4
deployment.apps/nginx-dp with revision #4
Pod Template:
Labels: app=nginx
pod-template-hash=7fb9ff5685
Annotations: kubernetes.io/change-cause: kubectl apply --filename=nginx-dp.yaml --record=true
Containers:
nginx:
Image: nginx:stable
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

Note: newer versions of kubectl format this output differently from the course

Now we can use the revision number to roll back to the nginx:latest image:

$ kubectl rollout history deployment/nginx-dp --revision=1
deployment.apps/nginx-dp with revision #1
Pod Template:
Labels: app=nginx
pod-template-hash=d46f5678b
Annotations: kubernetes.io/change-cause: kubectl apply --filename=nginx-dp.yaml --record=true
Containers:
nginx:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
$ kubectl rollout undo deployment/nginx-dp --to-revision=1
deployment.apps/nginx-dp rolled back
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 29m
nginx-dp-7fb9ff5685 3 3 3 53m
nginx-dp-d46f5678b 1 1 0 60m
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 29m
nginx-dp-7fb9ff5685 1 1 1 53m
nginx-dp-d46f5678b 3 3 2 60m
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 29m
nginx-dp-7fb9ff5685 0 0 0 53m
nginx-dp-d46f5678b 3 3 3 60m
$ kubectl rollout history deployment/nginx-dp
deployment.apps/nginx-dp
REVISION CHANGE-CAUSE
3 kubectl apply --filename=nginx-dp.yaml --record=true
4 kubectl apply --filename=nginx-dp.yaml --record=true
5 kubectl apply --filename=nginx-dp.yaml --record=true

$ kubectl rollout history deployment/nginx-dp --revision=5
deployment.apps/nginx-dp with revision #5
Pod Template:
Labels: app=nginx
pod-template-hash=d46f5678b
Annotations: kubernetes.io/change-cause: kubectl apply --filename=nginx-dp.yaml --record=true
Containers:
nginx:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

After the rollback, we can see:

  1. The original revision 1 no longer exists
  2. A new revision 5 has appeared
  3. Revision 5 has exactly the same content as the old revision 1

So even though this is a rollback to an earlier version, the revision number still increases: the rollback is implemented as just another rolling update.
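
This revision bookkeeping can be modeled as a simple map from revision number to pod template: rolling back does not restore the old number, it re-publishes the old template under max + 1 and retires the old entry. A sketch under that assumption (not the controller's actual data structures):

```python
def rollback(history: dict[int, str], to_revision: int) -> dict[int, str]:
    """Rolling back re-publishes an old template under a new revision number."""
    template = history[to_revision]
    new_rev = max(history) + 1
    history = dict(history)          # work on a copy
    del history[to_revision]         # the old number disappears from history...
    history[new_rev] = template      # ...and its content reappears at the top
    return history

h = {1: "nginx", 3: "nginx:1.91", 4: "nginx:stable"}
print(rollback(h, 1))  # {3: 'nginx:1.91', 4: 'nginx:stable', 5: 'nginx'}
```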

Pause and Resume

  • Now, pause the rollout first
$ kubectl rollout pause deployment/nginx-dp
deployment.apps/nginx-dp paused
  • Modify the image version
$ kubectl set image deployment/nginx-dp nginx=nginx:1.18
deployment.apps/nginx-dp image updated
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 37m
nginx-dp-7fb9ff5685 0 0 0 61m
nginx-dp-d46f5678b 3 3 3 68m
$ kubectl describe deployment/nginx-dp
...
OldReplicaSets: nginx-dp-d46f5678b (3/3 replicas created)
NewReplicaSet: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 57m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 2
Normal ScalingReplicaSet 57m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 2
Normal ScalingReplicaSet 54m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 1
Normal ScalingReplicaSet 54m deployment-controller Scaled up replica set nginx-dp-7fb9ff5685 to 3
Normal ScalingReplicaSet 53m deployment-controller Scaled down replica set nginx-dp-d46f5678b to 0
Normal ScalingReplicaSet 37m deployment-controller Scaled up replica set nginx-dp-6d6678fb55 to 1
Normal ScalingReplicaSet 30m deployment-controller Scaled down replica set nginx-dp-6d6678fb55 to 0
Normal ScalingReplicaSet 8m32s deployment-controller Scaled up replica set nginx-dp-d46f5678b to 1
Normal ScalingReplicaSet 8m26s deployment-controller Scaled up replica set nginx-dp-d46f5678b to 2
Normal ScalingReplicaSet 8m26s deployment-controller Scaled down replica set nginx-dp-7fb9ff5685 to 2
Normal ScalingReplicaSet 8m22s (x2 over 69m) deployment-controller Scaled up replica set nginx-dp-d46f5678b to 3
Normal ScalingReplicaSet 8m22s deployment-controller Scaled down replica set nginx-dp-7fb9ff5685 to 1
Normal ScalingReplicaSet 8m17s deployment-controller Scaled down replica set nginx-dp-7fb9ff5685 to 0

We can see:

  1. After changing the image to nginx:1.18, no new RS was created
  2. But the Deployment's OldReplicaSets became nginx-dp-d46f5678b while NewReplicaSet became <none>, the opposite of what we saw earlier
  • What happens if, while still paused, we set the image back to nginx:latest?
$ kubectl set image deployment/nginx-dp nginx=nginx
deployment.apps/nginx-dp image updated
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 44m
nginx-dp-7fb9ff5685 0 0 0 68m
nginx-dp-d46f5678b 3 3 3 76m
# resume the rollout
$ kubectl rollout resume deployment/nginx-dp
deployment.apps/nginx-dp resumed
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 46m
nginx-dp-7fb9ff5685 0 0 0 70m
nginx-dp-d46f5678b 3 3 3 77m
$ kubectl rollout history deployment/nginx-dp
deployment.apps/nginx-dp
REVISION CHANGE-CAUSE
3 kubectl apply --filename=nginx-dp.yaml --record=true
4 kubectl apply --filename=nginx-dp.yaml --record=true
5 kubectl apply --filename=nginx-dp.yaml --record=true
$ kubectl describe deployment/nginx-dp
...
OldReplicaSets: <none>
NewReplicaSet: nginx-dp-d46f5678b (3/3 replicas created)
...

We can see that after setting the image back to what it was before the pause, no new revision was generated.
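
This matches how resume appears to decide whether anything changed: what counts is not how many edits happened while paused, but whether the pod template at resume time differs from the one the current ReplicaSet was built from. Roughly (a simplification; the real comparison covers the whole pod template, not just the image):

```python
def needs_rollout(current_template: dict, spec_template: dict) -> bool:
    """A new ReplicaSet (and revision) is created only if the templates differ."""
    return current_template != spec_template

running = {"image": "nginx"}  # template of nginx-dp-d46f5678b

# paused: set image to nginx:1.18, then back to nginx, then resume
print(needs_rollout(running, {"image": "nginx"}))       # False: no new revision
# paused: set image to nginx:1.18, then resume
print(needs_rollout(running, {"image": "nginx:1.18"}))  # True: a new revision appears
```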

  • Continuing the pause experiment
# pause
$ kubectl rollout pause deployment/nginx-dp
deployment.apps/nginx-dp paused
# modify the image
$ kubectl set image deployment/nginx-dp nginx=nginx:1.18
deployment.apps/nginx-dp image updated
# resume
$ kubectl rollout resume deployment nginx-dp
deployment.apps/nginx-dp resumed
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 53m
nginx-dp-7f6cd547bd 1 1 0 3s
nginx-dp-7fb9ff5685 0 0 0 77m
nginx-dp-d46f5678b 3 3 3 85m
$ kubectl rollout history deployment/nginx-dp
deployment.apps/nginx-dp
REVISION CHANGE-CAUSE
3 kubectl apply --filename=nginx-dp.yaml --record=true
4 kubectl apply --filename=nginx-dp.yaml --record=true
5 kubectl apply --filename=nginx-dp.yaml --record=true
6 kubectl apply --filename=nginx-dp.yaml --record=true
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-dp-6d6678fb55 0 0 0 55m
nginx-dp-7f6cd547bd 3 3 3 90s
nginx-dp-7fb9ff5685 0 0 0 79m
nginx-dp-d46f5678b 0 0 0 86m
$ kubectl rollout history deployment/nginx-dp --revision=6
deployment.apps/nginx-dp with revision #6
Pod Template:
Labels: app=nginx
pod-template-hash=7f6cd547bd
Annotations: kubernetes.io/change-cause: kubectl apply --filename=nginx-dp.yaml --record=true
Containers:
nginx:
Image: nginx:1.18
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

We can see that once the pause is lifted, the pending change is applied as a normal rolling update.

Summary

What this lesson covered:

  1. Horizontal scaling simply changes the ReplicaSet's desired replica count; pods are then created or deleted step by step until they match it
  2. A rolling update creates a new ReplicaSet and replaces pods gradually, scaling the new one up while scaling the old one down

Finally, a quote from the original text:

A Deployment is actually a two-layer controller. First, it describes the application's versions through its ReplicaSets; then, through each ReplicaSet's attributes (such as the replicas value), it guarantees the number of Pod replicas.
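
The two layers in that quote can be sketched end to end: the Deployment only edits ReplicaSet replica counts (one ReplicaSet per application version), and each ReplicaSet creates or deletes pods to hit its own count. A compressed illustration that skips the gradual surge steps (the pod-naming scheme here is made up):

```python
def deployment_reconcile(new_rs: dict, old_rs: dict, desired: int) -> None:
    """Layer 1: a Deployment only edits ReplicaSet replica counts."""
    new_rs["replicas"] = desired  # steer toward the new version...
    old_rs["replicas"] = 0        # ...and drain the old one (kept around for rollback)

def replicaset_reconcile(rs: dict, pods: list[str]) -> list[str]:
    """Layer 2: a ReplicaSet creates/deletes pods to hit its own count."""
    while len(pods) < rs["replicas"]:
        pods.append(f"{rs['name']}-pod{len(pods)}")  # hypothetical suffix scheme
    return pods[: rs["replicas"]]

new_rs = {"name": "nginx-dp-7fb9ff5685", "replicas": 0}
old_rs = {"name": "nginx-dp-d46f5678b", "replicas": 3}
deployment_reconcile(new_rs, old_rs, desired=3)
pods = replicaset_reconcile(new_rs, [])
old_pods = replicaset_reconcile(old_rs, ["a", "b", "c"])
print(len(pods), len(old_pods))  # 3 0
```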