
TELCODOCS#2433: Updated SNO+1 documentation with limit of 1 worker node

Author: srir
Date: 2025-12-01 18:59:11 +05:30
Committed by: openshift-cherrypick-robot
Parent: 03cad43184
Commit: 756be4ef52

4 changed files with 9 additions and 9 deletions


@@ -5,11 +5,11 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-You can expand {sno} clusters with {ztp-first}. When you add worker nodes to {sno} clusters, the original {sno} cluster retains the control plane node role. Adding worker nodes does not require any downtime for the existing {sno} cluster.
+You can expand {sno} clusters with {ztp-first}. When you add a worker node to {sno} clusters, the original {sno} cluster retains the control plane node role. Adding a worker node does not require any downtime for the existing {sno} cluster.
[NOTE]
====
-Although there is no specified limit on the number of worker nodes that you can add to a {sno} cluster, you must revaluate the reserved CPU allocation on the control plane node for the additional worker nodes.
+You can only expand a {sno} cluster with one additional worker node. It is not recommended to expand a {sno} cluster with more than one worker node.
====
If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning `MachineConfig` objects are rendered and associated with the `worker` machine config pool before the {ztp} workflow applies the `MachineConfig` ignition file to the worker node.
@@ -17,7 +17,7 @@ If you require workload partitioning on the worker node, you must deploy and rem
It is recommended that you first remediate the policies, and then install the worker node.
If you create the workload partitioning manifests after installing the worker node, you must drain the node manually and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process.
-:FeatureName: Adding worker nodes to {sno} clusters with {ztp}
+:FeatureName: Adding a worker node to {sno} clusters with {ztp}
include::snippets/technology-preview.adoc[]
[role="_additional-resources"]


@@ -4,9 +4,9 @@
:_mod-docs-content-type: PROCEDURE
[id="ztp-additional-worker-sno-proc_{context}"]
-= Adding worker nodes to {sno} clusters with {ztp}
+= Adding an additional worker node to {sno} clusters with {ztp}
-You can add one or more worker nodes to existing {sno} clusters to increase available CPU resources in the cluster.
+You can add an additional worker node to existing {sno} clusters to increase available CPU resources in the cluster.
.Prerequisites
@@ -87,7 +87,7 @@ When the ArgoCD `cluster` application synchronizes, two new manifests appear on
+
[IMPORTANT]
====
-The `cpuset` field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete.
+The `cpuset` field should not be configured for the worker node. Workload partitioning for the worker node is added through management policies after the node installation is complete.
====
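
As an illustration of this guidance, and assuming the worker node is defined through a {ztp} `SiteConfig` CR (which this truncated view does not show), its node entry might look like the following sketch. The host, BMC, and network values are hypothetical; the relevant points are `role: worker` and the omitted `cpuset` field.

[source,yaml]
----
nodes:
- hostName: "example-node2.example.com" # hypothetical host
  role: "worker"
  # No cpuset field here: workload partitioning is applied later through policies
  bmcAddress: "idrac-virtualmedia+https://203.0.113.5/redfish/v1/Systems/System.Embedded.1" # example address
  bmcCredentialsName:
    name: "example-node2-bmh-secret"
  bootMACAddress: "AA:BB:CC:DD:EE:11"
  bootMode: "UEFI"
  nodeNetwork:
    interfaces:
    - name: eno1
      macAddress: "AA:BB:CC:DD:EE:11"
----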
.Verification


@@ -6,4 +6,4 @@
[id="ztp-additional-worker-node-selector-comp_{context}"]
= PTP and SR-IOV node selector compatibility
-The PTP configuration resources and SR-IOV network node policies use `node-role.kubernetes.io/master: ""` as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the `"node-role.kubernetes.io/worker"` label.
+The PTP configuration resources and SR-IOV network node policies use `node-role.kubernetes.io/master: ""` as the node selector. If the additional worker node has the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker node. However, the node selector must be changed to select both node types, for example with the `"node-role.kubernetes.io/worker"` label.
+
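For example, the reworked node selectors might look like the following sketch. The resource names, interfaces, and device values are hypothetical.

[source,yaml]
----
# PtpConfig: the recommend match list can select both node roles
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: example-ptpconfig # hypothetical name
  namespace: openshift-ptp
spec:
  profile:
  - name: "ordinary-clock"
    interface: "ens5f0" # hypothetical PTP interface
    ptp4lOpts: "-2 -s"
    phc2sysOpts: "-a -r"
  recommend:
  - profile: "ordinary-clock"
    priority: 4
    match:
    - nodeLabel: "node-role.kubernetes.io/master"
    - nodeLabel: "node-role.kubernetes.io/worker"
---
# SriovNetworkNodePolicy: nodeSelector entries are ANDed, so selecting both
# roles in one policy would need a shared custom label; this sketch targets
# the worker role only
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: example-sriov-policy # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource # hypothetical
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  numVfs: 8
  nicSelector:
    pfNames: ["ens1f0"] # hypothetical NIC
  deviceType: netdevice
----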


@@ -4,9 +4,9 @@
:_mod-docs-content-type: PROCEDURE
[id="ztp-additional-worker-policies-{policy-gen-cr}_{context}"]
-= Using {policy-gen-cr} CRs to apply worker node policies to worker nodes
+= Using {policy-gen-cr} CRs to apply worker node policies to the worker node
-You can create policies for worker nodes using `{policy-gen-cr}` CRs.
+You can create policies for the additional worker node by using `{policy-gen-cr}` CRs.
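
A minimal sketch of such a CR, assuming the `PolicyGenTemplate` form of `{policy-gen-cr}` and hypothetical site and CPU values; note `mcp: "worker"`, which targets the generated configuration at the worker machine config pool:

[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "example-sno-workers" # hypothetical name
  namespace: "example-sno" # hypothetical namespace
spec:
  bindingRules:
    sites: "example-sno" # bind the generated policies to the example site
  mcp: "worker" # target the worker machine config pool
  sourceFiles:
  - fileName: PerformanceProfile.yaml
    policyName: "config-policy"
    metadata:
      name: openshift-worker-node-performance-profile
    spec:
      cpu:
        # Illustrative CPU layout for the worker node
        isolated: "4-47"
        reserved: "0-3"
----

In the {ztp} workflow, a CR like this would typically live in the Git repository that ArgoCD synchronizes, alongside the existing site policies.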
.Procedure