= Default scheduler rules =
# Check whether the node has adequate hardware resources.
# Check whether the node is running out of resources (memory or disk pressure conditions).
# Check whether the pod is scheduled to a specific node by name.
# Check whether the node has a label matching the node selector in the pod spec.
# Check whether the pod requests to be bound to a specific host port and, if so, whether that port is available on the node.
# Check whether the pod requests a certain type of volume to be mounted and whether other pods are already using the same volume.
# Check whether the pod tolerates the node's taints, e.g. master nodes are tainted with "NoSchedule".
# Check the pod's node and pod affinity rules and whether scheduling the pod would break them.
# If more than one node could schedule the pod, the scheduler prioritises the nodes and chooses the best one. If several nodes have the same priority, it chooses among them in round-robin fashion.
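Rule 7 is why pods never land on control-plane nodes by default. You can inspect a node's taints directly (the node name below is an assumption for illustration):

<source lang=bash>
# List any taints set on the node; control-plane/master nodes
# typically carry node-role.kubernetes.io/master:NoSchedule
kubectl describe node master1.acme.com | grep -i taints
</source>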
= Label nodes =
<source lang=bash>
kubectl label node worker1.acme.com share-type=dedicated
</source>
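To confirm the label was applied, you can show it as an extra column or filter nodes by it:

<source lang=bash>
# Show the share-type label as a column for all nodes
kubectl get nodes -L share-type
# List only the nodes that carry the label
kubectl get nodes -l share-type=dedicated
</source>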
Deployment YAML that includes the node affinity rules:
<source lang=yaml>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pref
spec:
  replicas: 5
  selector:
    matchLabels:
      app: pref
  template:
    metadata:
      labels:
        app: pref
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: # soft preference; evaluated only at scheduling time, ignored for pods already running on the node
          - weight: 80
            preference:
              matchExpressions:
              - key: availability-zone
                operator: In
                values:
                - zone1
          - weight: 20             # 4 times lower priority than the AZ preference
            preference:
              matchExpressions:
              - key: share-type    # label key
                operator: In
                values:
                - dedicated        # label value
      containers:
      - args:
        - sleep
        - "999"
        image: busybox:1.28.4
        name: main
</source>
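A quick way to try the manifest (the file name is an assumption) and see which nodes the replicas were placed on:

<source lang=bash>
kubectl apply -f pref-deployment.yaml
# The NODE column shows where each replica landed; with the weights above,
# nodes in zone1 should receive most of the pods
kubectl get pods -l app=pref -o wide
</source>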


= Resources =
*[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ Assigning a Pod to a Node]
*[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity Pod and Node Affinity Rules]

Revision as of 09:03, 19 July 2019
