Pod affinity/anti-affinity scheduling: runs a group of Pods together (or keeps them apart). The scheduler may place the first Pod on any suitable node; subsequent Pods are then scheduled relative to where that first Pod landed.
What counts as "the same location" is decided by a node label (the topologyKey), e.g. rack, row, or zone.
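For example, an affinity term with `topologyKey: rack` treats all nodes carrying the same value of the `rack` label as one location. A sketch (the `rack` label and its values are hypothetical and must already be set on the nodes, e.g. with `kubectl label node worker1 rack=rack1`):

```yaml
# Hypothetical: nodes are assumed to be pre-labelled rack=rack1 / rack=rack2 / ...
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: myapp     # co-locate with Pods labelled app=myapp
      topologyKey: rack  # "same location" = same value of the node label "rack"
```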
"podAffinity"
$ kubectl explain pods.spec.affinity
nodeAffinity    <Object>
    preferredDuringSchedulingIgnoredDuringExecution <[]Object>  # soft affinity: satisfied on a best-effort basis
        preference  <Object> -required-
            matchExpressions    <[]Object>  # match expressions against node labels
            matchFields         <[]Object>  # match expressions against node fields
                key         <string> -required-
                operator    <string> -required-  # In, NotIn, Exists, DoesNotExist, Gt, Lt
                values      <[]string>
        weight  <integer> -required-  # 1-100
podAffinity     <Object>
    requiredDuringSchedulingIgnoredDuringExecution  <[]Object>  # hard affinity: must be satisfied
        labelSelector       <Object>  # selects the Pods to co-locate with
            matchExpressions    <[]Object>
            matchLabels         <map[string]string>
        namespaceSelector   <Object>  # selects namespaces by label
        namespaces          <[]string>  # explicit list of namespaces
        topologyKey         <string> -required-  # node label that defines "location"
podAntiAffinity <Object>
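The examples below only use the hard (required) form. For reference, a soft (preferred) pod-affinity term looks like this sketch, where the weight (1-100) biases the scheduler instead of constraining it:

```yaml
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:  # soft: best effort
    - weight: 80                # 1-100; higher = stronger preference
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: myapp
        topologyKey: kubernetes.io/hostname
```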
Example: podAffinity
"Create the test Pods"
$ cat demo-pod-podAffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-frontend
  namespace: qingyun
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-afterend
  namespace: qingyun
  labels:
    app: db
    tier: afterend
spec:
  containers:
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard affinity: must be satisfied
      - labelSelector:
          matchExpressions:                            # match expressions
          - key: app                                   # key
            operator: In                               # operator
            values:                                    # values
            - myapp
        topologyKey: kubernetes.io/hostname            # node label that defines "location"
$ kubectl apply -f demo-pod-podAffinity.yaml
pod/pod-frontend unchanged
pod/pod-afterend created
"Verify the test Pods"
$ kubectl get pods -n qingyun -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-afterend 1/1 Running 0 14s 10.244.235.141 worker1 <none> <none>
pod-frontend 1/1 Running 0 3m26s 10.244.235.171 worker1 <none> <none>
Example: podAntiAffinity
"Create the test Pods"
$ cat demo-pod-podAffinity2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-frontend
  namespace: qingyun
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-afterend
  namespace: qingyun
  labels:
    app: db
    tier: afterend
spec:
  containers:
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
  affinity:
    podAntiAffinity:                                   # Pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution:  # hard: must be satisfied
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname
$ kubectl apply -f demo-pod-podAffinity2.yaml
pod/pod-frontend created
pod/pod-afterend created
"Verify the test Pods"
$ kubectl get pods -n qingyun -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-afterend 1/1 Running 0 14s 10.244.235.143 worker1 <none> <none>
pod-frontend 1/1 Running 0 14s 10.244.235.210 k8s-master <none> <none>
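A common production use of podAntiAffinity is spreading replicas across failure domains rather than hosts. A sketch, assuming the nodes carry the standard `topology.kubernetes.io/zone` label (set automatically by most cloud providers):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: myapp
      topologyKey: topology.kubernetes.io/zone  # at most one matching Pod per zone
```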
Taint-based scheduling (taints and tolerations):
$ kubectl explain nodes.spec.taints
effect      <string> -required-  # how the taint repels Pods
    NoSchedule        # affects scheduling only; existing Pods are not touched
    PreferNoSchedule  # avoid scheduling here if possible, but allow it when necessary
    NoExecute         # affects scheduling and also evicts existing Pods that do not tolerate the taint
key         <string> -required-
timeAdded   <string>
value       <string>
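A taint can also be expressed declaratively in the Node object. A sketch of what the taint applied below looks like under `nodes.spec`:

```yaml
# Fragment of a Node manifest (nodes.spec); equivalent to the kubectl taint command below.
spec:
  taints:
  - key: node-type
    value: production
    effect: NoSchedule  # repel Pods that do not tolerate node-type=production
```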
Example:
"Taint a node: dedicated to production"
$ kubectl taint node worker1 node-type=production:NoSchedule
apiVersion: apps/v1  # Deployment is in the apps/v1 API group, not v1
kind: Deployment
metadata:
  name: deploy-myapp
  namespace: qingyun
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: zhangyyhub/myapp:v1.0
        imagePullPolicy: IfNotPresent
      tolerations:
      - key: "node-type"
        operator: "Equal"    # Exists (any value of the key matches) or Equal (exact value match)
        value: "production"
        effect: "NoSchedule" # must match the taint's effect; a NoExecute toleration would NOT
                             # tolerate the NoSchedule taint above, and tolerationSeconds is
                             # only valid with effect NoExecute
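With `operator: Exists`, the value is omitted and any taint with that key (and matching effect) is tolerated; an empty `effect` tolerates every effect for the key. A sketch:

```yaml
tolerations:
- key: "node-type"
  operator: "Exists"  # tolerate node-type taints regardless of their value
  effect: ""          # empty effect = tolerate all effects (NoSchedule, PreferNoSchedule, NoExecute)
```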
Note: when adding Kubernetes scheduling policies, take care not to disrupt the existing environment.
Test node selectors, node affinity, and tolerations separately.
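For that comparison, here is a minimal nodeSelector next to the equivalent hard nodeAffinity (sketch; `disktype=ssd` is a hypothetical node label):

```yaml
# Pod spec fragment: nodeSelector is a simple equality match on node labels
spec:
  nodeSelector:
    disktype: ssd
---
# Pod spec fragment: equivalent hard nodeAffinity (supports richer operators: In, NotIn, Exists, Gt, ...)
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
```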