// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: PROCEDURE
[id="hcp-labels-taints_{context}"]
= Labeling management cluster nodes

Proper node labeling is a prerequisite to deploying {hcp}.

As a management cluster administrator, you use the following labels and taints on management cluster nodes to schedule a control plane workload:

* `hypershift.openshift.io/control-plane: true`: Use this label and taint to dedicate a node to running hosted control plane workloads. By setting a value of `true`, you avoid sharing the control plane nodes with other components, such as the infrastructure components of the management cluster or any other mistakenly deployed workload.
* `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`: Use this label and taint when you want to dedicate a node to a single hosted cluster.
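+
For example, you might dedicate a node to the control plane of a single hosted cluster by entering commands similar to the following sketch, where `worker-3` is a hypothetical node name and `clusters-example` is a hypothetical hosted control plane namespace:
+
[source,terminal]
----
$ oc label node/worker-3 hypershift.openshift.io/control-plane=true hypershift.openshift.io/cluster=clusters-example
----
+
[source,terminal]
----
$ oc adm taint nodes worker-3 hypershift.openshift.io/control-plane=true:NoSchedule hypershift.openshift.io/cluster=clusters-example:NoSchedule
----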
Apply the following labels on the nodes that host control-plane pods:

* `node-role.kubernetes.io/infra`: Use this label to avoid having the control-plane workload count toward your subscription. See the sample command after this list.
* `topology.kubernetes.io/zone`: Use this label on the management cluster nodes to deploy highly available clusters across failure domains. The zone might be a location, rack name, or the hostname of the node where the zone is set. For example, a management cluster has the following nodes: `worker-1a`, `worker-1b`, `worker-2a`, and `worker-2b`. The `worker-1a` and `worker-1b` nodes are in `rack1`, and the `worker-2a` and `worker-2b` nodes are in `rack2`. To use each rack as an availability zone, enter the following commands:
+
[source,terminal]
----
$ oc label node/worker-1a node/worker-1b topology.kubernetes.io/zone=rack1
----
+
[source,terminal]
----
$ oc label node/worker-2a node/worker-2b topology.kubernetes.io/zone=rack2
----
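
For example, you might apply the `node-role.kubernetes.io/infra` label to the same example nodes by entering a command similar to the following sketch:

[source,terminal]
----
$ oc label node/worker-1a node/worker-1b node/worker-2a node/worker-2b node-role.kubernetes.io/infra=""
----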
Pods for a hosted cluster have tolerations, and the scheduler uses affinity rules to schedule them. Pods tolerate the `control-plane` and `cluster` taints. The scheduler prioritizes scheduling pods on nodes that are labeled with `hypershift.openshift.io/control-plane` and `hypershift.openshift.io/cluster: ${HostedControlPlane Namespace}`.
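
For illustration only, the tolerations and node affinity on a hosted control plane pod might conceptually look like the following sketch; the actual manifests are generated for you, and `clusters-example` is a hypothetical hosted control plane namespace:

[source,yaml]
----
spec:
  tolerations:
  - key: hypershift.openshift.io/control-plane
    operator: Equal
    value: "true"
    effect: NoSchedule
  - key: hypershift.openshift.io/cluster
    operator: Equal
    value: clusters-example
    effect: NoSchedule
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: hypershift.openshift.io/control-plane
            operator: In
            values:
            - "true"
----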
For the `ControllerAvailabilityPolicy` option, use `HighlyAvailable`, which is the default value that the {hcp} command-line interface, `hcp`, deploys. When you use that option, you can schedule pods for each deployment within a hosted cluster across different failure domains by setting `topology.kubernetes.io/zone` as the topology key. Scheduling pods for a deployment within a hosted cluster across different failure domains is available only for highly available control planes.
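
A minimal sketch of setting this policy on a `HostedCluster` resource follows; this assumes the `controllerAvailabilityPolicy` field of the HyperShift API:

[source,yaml]
----
spec:
  controllerAvailabilityPolicy: HighlyAvailable
----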
.Procedure
To enable a hosted cluster to require its pods to be scheduled onto infrastructure nodes, set `HostedCluster.spec.nodeSelector`, as shown in the following example:

[source,yaml]
----
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""
----
This way, the {hcp} pods for each hosted cluster run as eligible infrastructure node workloads, and you do not need to entitle the underlying {product-title} nodes.
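
For example, you might set the field on an existing hosted cluster by entering a command similar to the following sketch, where `example` is a hypothetical hosted cluster name in a `clusters` namespace:

[source,terminal]
----
$ oc patch hostedcluster example -n clusters --type merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}'
----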