// Module included in the following assemblies:
//
// * nodes/nodes-scheduler-default.adoc

[id="nodes-scheduler-default-predicates_{context}"]
= Understanding the scheduler predicates

Predicates are rules that filter out unqualified nodes.

There are several predicates provided by default in {product-title}. Some of
these predicates can be customized by providing certain parameters. Multiple
predicates can be combined to provide additional filtering of nodes.

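Multiple static predicates can simply be listed together in the policy's `predicates` array. As a minimal illustrative sketch (the particular combination shown is arbitrary, not a recommendation):

[source,json]
----
"predicates":[
   {"name" : "PodFitsHostPorts"},
   {"name" : "PodFitsResources"},
   {"name" : "NoDiskConflict"},
   {"name" : "MatchNodeSelector"}
],
----
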
[id="static-predicates_{context}"]
== Static Predicates

These predicates do not take any configuration parameters or inputs from the
user. These are specified in the scheduler configuration using their exact
name.

[id="default-predicates_{context}"]
=== Default Predicates

The default scheduler policy includes the following predicates:

*NoVolumeZoneConflict* checks that the volumes a pod requests
are available in the zone.

[source,yaml]
----
{"name" : "NoVolumeZoneConflict"}
----

*MaxEBSVolumeCount* checks the maximum number of volumes that can be attached to an AWS instance.

[source,yaml]
----
{"name" : "MaxEBSVolumeCount"}
----

*MaxAzureDiskVolumeCount* checks the maximum number of Azure Disk Volumes.

[source,yaml]
----
{"name" : "MaxAzureDiskVolumeCount"}
----

*PodToleratesNodeTaints* checks if a pod can tolerate the node taints.

[source,yaml]
----
{"name" : "PodToleratesNodeTaints"}
----

*CheckNodeUnschedulable* checks if a pod can be scheduled on a node with `Unschedulable` spec.

[source,yaml]
----
{"name" : "CheckNodeUnschedulable"}
----

*CheckVolumeBinding* evaluates if a pod can fit based on the volumes it requests, for both bound and unbound PVCs.
* For PVCs that are bound, the predicate checks that the corresponding PV's node affinity is satisfied by the given node.
* For PVCs that are unbound, the predicate searches for available PVs that can satisfy the PVC requirements and whose node affinity is satisfied by the given node.

The predicate returns true if all bound PVCs have compatible PVs with the node, and if all unbound PVCs can be matched with an available and node-compatible PV.

[source,yaml]
----
{"name" : "CheckVolumeBinding"}
----

// The `CheckVolumeBinding` predicate must be enabled in non-default schedulers.

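The node affinity that this predicate evaluates is the standard `nodeAffinity` stanza on a PersistentVolume. An illustrative fragment (object names and values are hypothetical):

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
----
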
*NoDiskConflict* checks if the volume requested by a pod is available.

[source,yaml]
----
{"name" : "NoDiskConflict"}
----

*MaxGCEPDVolumeCount* checks the maximum number of Google Compute Engine (GCE) Persistent Disks (PD).

[source,yaml]
----
{"name" : "MaxGCEPDVolumeCount"}
----

*MaxCSIVolumeCountPred* checks the maximum number of Container Storage Interface (CSI) volumes that can be attached to a node.

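Like the other static predicates, it is specified in the policy by its exact name:

[source,yaml]
----
{"name" : "MaxCSIVolumeCountPred"}
----
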
*MatchInterPodAffinity* checks if the pod affinity/anti-affinity rules permit the pod.

[source,yaml]
----
{"name" : "MatchInterPodAffinity"}
----

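The rules this predicate evaluates are the standard pod `affinity` and `anti-affinity` stanzas. As a sketch, a pod that requires co-location (by hostname) with pods labeled `app=web` (the label key and value here are hypothetical) might declare:

[source,yaml]
----
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: kubernetes.io/hostname
----
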
[id="other-predicates_{context}"]
=== Other Static Predicates

{product-title} also supports the following predicates:

[NOTE]
====
The *CheckNode-** predicates cannot be used if the *Taint Nodes By Condition* feature is enabled.
The *Taint Nodes By Condition* feature is enabled by default.
====

*CheckNodeCondition* checks if a pod can be scheduled on a node reporting *out of disk*, *network unavailable*, or *not ready* conditions.

[source,yaml]
----
{"name" : "CheckNodeCondition"}
----

*CheckNodeLabelPresence* checks if all of the specified labels exist on a node, regardless of their value.

[source,yaml]
----
{"name" : "CheckNodeLabelPresence"}
----

*CheckServiceAffinity* checks that ServiceAffinity labels are homogeneous for pods that are scheduled on a node.

[source,yaml]
----
{"name" : "CheckServiceAffinity"}
----

*PodToleratesNodeNoExecuteTaints* checks if a pod's tolerations can tolerate a node's *NoExecute* taints.

[source,yaml]
----
{"name" : "PodToleratesNodeNoExecuteTaints"}
----

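For reference, the kind of toleration this predicate checks is the standard pod `tolerations` stanza; a hypothetical pod that tolerates a *NoExecute* taint might declare:

[source,yaml]
----
tolerations:
- key: "example-key"
  operator: "Equal"
  value: "example-value"
  effect: "NoExecute"
  tolerationSeconds: 3600
----
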
[id="admin-guide-scheduler-general-predicates_{context}"]
== General Predicates

The following general predicates check whether non-critical predicates and essential predicates pass. Non-critical predicates are those
that only non-critical pods must pass; essential predicates are those that all pods must pass.

_The default scheduler policy includes the general predicates._

[discrete]
=== Non-critical general predicates

*PodFitsResources* determines a fit based on resource availability
(CPU, memory, GPU, and so forth). The
nodes can declare their resource capacities and then pods can specify what
resources they require. Fit is based on requested, rather than used
resources.

[source,yaml]
----
{"name" : "PodFitsResources"}
----

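Because fit is based on requests, the values the predicate compares against node capacity come from each container's `resources.requests` stanza. A minimal hypothetical pod fragment:

[source,yaml]
----
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
----
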
[discrete]
=== Essential general predicates

*PodFitsHostPorts* determines if a node has free ports for the requested pod ports (absence
of port conflicts).

[source,yaml]
----
{"name" : "PodFitsHostPorts"}
----

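The ports in question are the `hostPort` values declared by the pod's containers; two pods that request the same `hostPort` cannot be placed on the same node. A hypothetical fragment:

[source,yaml]
----
spec:
  containers:
  - name: web
    image: quay.io/example/web:latest
    ports:
    - containerPort: 8080
      hostPort: 8080
----
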
*HostName* determines fit based on the presence of the Host parameter
and a string match with the name of the host.

[source,yaml]
----
{"name" : "HostName"}
----

*MatchNodeSelector* determines fit based on node selector (nodeSelector) queries
defined in the pod.

[source,yaml]
----
{"name" : "MatchNodeSelector"}
----

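The query being matched is the pod's ordinary `nodeSelector` stanza; a hypothetical example that restricts a pod to nodes labeled `disktype=ssd`:

[source,yaml]
----
spec:
  nodeSelector:
    disktype: ssd
----
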
////
[id="configurable-predicates_{context}"]
== Configurable Predicates

// per sjenning Nope

You can configure these predicates in the scheduler policy ConfigMap
in the `openshift-config` project, to add labels to affect
how the predicate functions.

Since these are configurable, multiple predicates
of the same type (but with different configuration parameters) can be combined as
long as their user-defined names are different.

For information on using these predicates, see Modifying Scheduler Policy.

*ServiceAffinity* places pods on nodes based on the service the pods belong to.
Placing pods of the same service on the same or co-located nodes can lead to higher efficiency.

This predicate attempts to place pods with specific labels
in its node selector on nodes that have the same label.

If the pod does not specify the labels in its
node selector, then the first pod is placed on any node based on availability
and all subsequent pods of the service are scheduled on nodes that have the
same label values as that node.

[source,json]
----
"predicates":[
   {
      "name":"<name>", <1>
      "argument":{
         "serviceAffinity":{
            "labels":[
               "<label>" <2>
            ]
         }
      }
   }
],
----
<1> Specify a name for the predicate.
<2> Specify a label to match.

For example:

[source,json]
----
"name":"ZoneAffinity",
"argument":{
   "serviceAffinity":{
      "labels":[
         "rack"
      ]
   }
}
----

For example, if the first pod of a service with the node selector `rack` is scheduled to a node with the label `region=rack`,
all subsequent pods belonging to the same service are scheduled on nodes
with the same `region=rack` label.

Multiple-level labels are also supported. Users can also specify that all pods for a service be
scheduled on nodes within the same region and within the same zone (under the region).

The `labelsPresence` parameter checks whether a particular node has a specific label. The labels create node _groups_ that the
`LabelPreference` priority uses. Matching by label can be useful, for example, where nodes have their physical location or status defined by labels.

[source,json]
----
"predicates":[
   {
      "name":"<name>", <1>
      "argument":{
         "labelsPresence":{
            "labels":[
               "<label>" <2>
            ],
            "presence": true <3>
         }
      }
   }
],
----
<1> Specify a name for the predicate.
<2> Specify a label to match.
<3> Specify whether the labels are required, either `true` or `false`.
+
* For `presence:false`, if any of the requested labels are present in the node labels,
the pod cannot be scheduled. If the labels are not present, the pod can be scheduled.
+
* For `presence:true`, if all of the requested labels are present in the node labels,
the pod can be scheduled. If all of the labels are not present, the pod is not scheduled.

For example:

[source,json]
----
"name":"RackPreferred",
"argument":{
   "labelsPresence":{
      "labels":[
         "rack",
         "region"
      ],
      "presence": true
   }
}
----
////