
Kubernetes Scheduling Policies (Part 2)

   数栈君   posted 2024-03-08 10:05

(Continued from the previous post)

3 Procedure

3.3 podAffinity

Pod affinity/anti-affinity scheduling: for a group of Pods that should run together (or apart), the scheduler places the first Pod on whichever suitable node it likes, then schedules the remaining Pods relative to where that first Pod landed.

Whether two nodes count as "the same location" is decided by a topology label on the nodes, e.g. rack, row, or zone (machine room).
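The topology label is referenced through `topologyKey`. As a sketch: with the well-known `topology.kubernetes.io/zone` node label, "the same location" means "the same zone" (the `rack`/`row` labels mentioned above would be custom labels you attach to nodes yourself):

```yaml
# Any node label can serve as the topology key; two nodes whose values for
# that label are equal are treated as "the same location".
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: myapp
      # same zone counts as "together"; swap in a custom label such as
      # "rack" or "row" for finer-grained placement
      topologyKey: topology.kubernetes.io/zone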

"podAnffinity"
$ kubectl explain pods.spec.affinity
   nodeAffinity <Object>
   podAffinity  <Object>
      preferredDuringSchedulingIgnoredDuringExecution      <[]Object>  # 软亲和性,尽量满足
         preference   <Object> -required-
            matchExpressions     <[]Object>      # 匹配表达式
            matchFields  <[]Object>              # 匹配字段
               key  <string> -required-          # key
               operator     <string> -required-  # 操作符(In, NotIn, Exists, DoesNotExist, Gt, and Lt.)
               values       <[]string>           # value
         weight       <integer> -required-
      requiredDuringSchedulingIgnoredDuringExecution       <[]Object>  # 硬亲和性,必须满足
         labelSelector        <Object>           # 选定要亲和的Pod
            matchExpressions     <[]Object>
            matchLabels  <map[string]string>
         namespaceSelector    <Object>           # 选定要亲和的Namespace
         namespaces   <[]string>                 # 选定要亲和的Namespace
         topologyKey  <string> -required-        # 位置拓扑键
   podAntiAffinity      <Object>
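The examples below use the hard (required) form. As a sketch, the soft (preferred) form wraps the same term in `podAffinityTerm` and adds a `weight` of 1-100 that the scheduler sums per node when ranking candidates:

```yaml
affinity:
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 80                       # 1-100; higher means a stronger preference
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname
```

Unlike the required form, a Pod with only preferred terms still schedules even when no node satisfies them.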

Example: podAffinity

"创建测试Pod"
$  cat demo-pod-podAffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-frontend
  namespace: qingyun
  labels:
    app: myapp
    tier: frontend
spec:
  containers: 
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-afterend
  namespace: qingyun
  labels:
    app: db
    tier: afterend
spec:
  containers:
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard affinity: must be satisfied
      - labelSelector:
          matchExpressions:                  # match expressions
          - key: app                         # key
            operator: In                     # operator
            values:                          # values
            - myapp
        topologyKey: kubernetes.io/hostname  # topology key: co-locate on the same node

$ kubectl apply -f demo-pod-podAffinity.yaml
pod/pod-frontend unchanged
pod/pod-afterend created

"验证测试Pod"
$ kubectl get pods -n qingyun -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
pod-afterend   1/1     Running   0          14s     10.244.235.141   worker1   <none>           <none>
pod-frontend   1/1     Running   0          3m26s   10.244.235.171   worker1   <none>           <none>

Example: podAntiAffinity

"创建测试Pod"
$ cat demo-pod-podAffinity2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-frontend
  namespace: qingyun
  labels:
    app: myapp
    tier: frontend
spec:
  containers: 
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-afterend
  namespace: qingyun
  labels:
    app: db
    tier: afterend
spec:
  containers:
  - name: myapp
    image: zhangyyhub/myapp:v1.0
    imagePullPolicy: IfNotPresent
  affinity:
    podAntiAffinity:   # Pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution:  # hard anti-affinity: must be satisfied
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - myapp
        topologyKey: kubernetes.io/hostname

$ kubectl apply -f demo-pod-podAffinity2.yaml
pod/pod-frontend created
pod/pod-afterend created

"验证测试Pod"
$ kubectl get pods -n qingyun -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod-afterend   1/1     Running   0          14s   10.244.235.143   worker1      <none>           <none>
pod-frontend   1/1     Running   0          14s   10.244.235.210   k8s-master   <none>           <none>
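Note that with the required form, a Deployment with more replicas than nodes would leave the surplus Pods Pending. A common softer pattern (a sketch, not from the original article) spreads a Deployment's replicas with preferred anti-affinity, so extra replicas still schedule once every node already hosts one:

```yaml
# Fragment of a Deployment's Pod template
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: myapp               # spread Pods carrying this label
              topologyKey: kubernetes.io/hostname
```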

3.4 Tolerations

Taint and toleration scheduling:

kubectl explain nodes.spec.taints
   effect        <string> -required-  # how the taint repels Pods that do not tolerate it
      NoSchedule         # affects scheduling only; Pods already on the node are untouched
      PreferNoSchedule   # the scheduler tries to avoid the node, but may still use it
      NoExecute          # affects scheduling and also evicts Pods already on the node
   key  <string> -required-
   timeAdded     <string>
   value         <string>
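Since taints are ordinary fields of the Node object, the production taint applied below with `kubectl taint` could equally be declared in a node manifest (a sketch):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker1
spec:
  taints:
  - key: node-type
    value: production
    effect: NoSchedule   # repels Pods that lack a matching toleration
```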

Example:

"Taint a node: reserve it for production"
$ kubectl taint node worker1 node-type=production:NoSchedule

"Create a Deployment that tolerates the taint"
apiVersion: apps/v1    # Deployment belongs to the apps/v1 API group, not v1
kind: Deployment
metadata:
  name: deploy-myapp
  namespace: qingyun
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: zhangyyhub/myapp:v1.0
        imagePullPolicy: IfNotPresent
      tolerations:
      - key: "node-type"
        operator: "Equal"        # Exists (any value for the key matches) or Equal (exact value match)
        value: "production"
        effect: "NoSchedule"     # must match the taint's effect; tolerationSeconds is only valid with NoExecute
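To undo the experiment, remove the taint with a trailing minus: `kubectl taint node worker1 node-type:NoSchedule-`. A broader toleration (sketch) uses `operator: Exists`, which matches any value of the key, or even any taint at all when the key is omitted:

```yaml
tolerations:
- key: "node-type"
  operator: "Exists"     # tolerates node-type=<any value>:NoSchedule
  effect: "NoSchedule"
- operator: "Exists"     # no key, no effect: tolerates every taint (use with care)
```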

4. Caveats

When adding kubernetes scheduling policies, take care not to disrupt the existing environment.

5. Checking the results

Test with the node selector, node affinity, and node tolerations in turn.


