// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-troubleshooting.adoc
:_mod-docs-content-type: PROCEDURE
[id="scale-down-data-plane_{context}"]
= Scaling down the data plane to zero
If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.

[NOTE]
====
Ensure that you are prepared to scale down the data plane to zero, because the workloads that run on the worker nodes are removed after the scale down.
====

.Procedure
. Set the `kubeconfig` file to access the hosted cluster by running the following command:
+
[source,terminal]
----
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
----
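+
For example, with a hypothetical installation directory of `/home/user/hostedcluster`, the command might look like the following:
+
[source,terminal]
----
$ export KUBECONFIG=/home/user/hostedcluster/auth/kubeconfig
----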
. Get the name of the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get nodepool --namespace <hosted_cluster_namespace>
----
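+
The exact columns depend on your version, but the output typically lists each node pool with its cluster name and node counts. The names and values in the following output are purely illustrative and reuse the hypothetical `nodepool-1` node pool that is shown later in this procedure:
+
.Example output
[source,terminal]
----
NAME         CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR
nodepool-1   clustername   2               2               False         False
----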
. Optional: To prevent the pods from draining, add the `nodeDrainTimeout` field to the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
----
+
.Example output
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  arch: amd64
  clusterName: clustername <1>
  management:
    autoRepair: false
    replace:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      strategy: RollingUpdate
    upgradeType: Replace
  nodeDrainTimeout: 0s <2>
# ...
----
<1> Defines the name of your hosted cluster.
<2> Specifies the total amount of time that the controller takes to drain a node. By default, the `nodeDrainTimeout: 0s` setting blocks the node draining process.
+
[NOTE]
====
To allow the node draining process to continue for a certain period of time, you can set the value of the `nodeDrainTimeout` field accordingly, for example, `nodeDrainTimeout: 1m`.
====
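+
If you prefer not to open an editor, you can instead set the field non-interactively with `oc patch`. The following command is a sketch rather than part of the documented procedure, and it assumes that you want a one-minute drain timeout:
+
[source,terminal]
----
$ oc patch nodepool <nodepool_name> --namespace <hosted_cluster_namespace> \
  --type=merge --patch '{"spec":{"nodeDrainTimeout":"1m"}}'
----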
. Scale down the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> \
--replicas=0
----
+
[NOTE]
====
After scaling down the data plane to zero, some pods in the control plane remain in the `Pending` status, and the hosted control plane stays up and running. If necessary, you can scale up the `NodePool` resource.
====
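+
To confirm the effect on the data plane, you can list the nodes of the hosted cluster. This check is not part of the documented procedure; it assumes that `KUBECONFIG` still points to the hosted cluster `kubeconfig` file that you exported earlier. After the scale down completes, no worker nodes are expected in the output:
+
[source,terminal]
----
$ oc get nodes
----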
. Optional: Scale up the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> \
  --replicas=1
----
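+
Optionally, you can watch the node pool while it scales back up by adding the standard `-w` flag to the `oc get` command; the placeholders are the same ones that are used throughout this procedure:
+
[source,terminal]
----
$ oc get nodepool <nodepool_name> --namespace <hosted_cluster_namespace> -w
----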
+
After rescaling the `NodePool` resource, wait a couple of minutes for the `NodePool` resource to become available in a `Ready` state.

.Verification
* Verify that the value for the `nodeDrainTimeout` field is greater than `0s` by running the following command:
+
[source,terminal]
----
$ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> \
  -o jsonpath='{.spec.nodeDrainTimeout}'
----
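+
If you set the field to `1m` as in the earlier note, the output is expected to echo that duration. The following value is only illustrative of a non-zero setting:
+
.Example output
[source,terminal]
----
1m
----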