// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-troubleshooting.adoc
:_mod-docs-content-type: PROCEDURE
[id="scale-down-data-plane_{context}"]
= Scaling down the data plane to zero
If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.
[NOTE]
====
Ensure that you are prepared to scale down the data plane to zero, because the workload from the worker nodes disappears after you scale down.
====
.Procedure
. Set the `kubeconfig` file to access the hosted cluster by running the following command:
+
[source,terminal]
----
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
----
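+
Optionally, before you continue, you can confirm that the `kubeconfig` file points to the expected cluster. For example, the following command prints the API server URL for the current context:
+
[source,terminal]
----
$ oc whoami --show-server
----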
. Get the name of the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get nodepool --namespace <hosted_cluster_namespace>
----
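+
The output lists the node pools in the namespace. The following example output is illustrative only; the node pool name, cluster name, version, and exact columns can differ in your environment:
+
.Example output
[source,terminal]
----
NAME         CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
nodepool-1   clustername   2               2               False         False        4.17.0
----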
. Optional: To prevent the pods from draining, add the `nodeDrainTimeout` field to the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
----
+
.Example output
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  arch: amd64
  clusterName: clustername <1>
  management:
    autoRepair: false
    replace:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      strategy: RollingUpdate
    upgradeType: Replace
  nodeDrainTimeout: 0s <2>
# ...
----
<1> Defines the name of your hosted cluster.
<2> Specifies the total amount of time that the controller takes to drain a node. By default, the `nodeDrainTimeout: 0s` setting blocks the node draining process.
+
[NOTE]
====
To allow the node draining process to continue for a certain period of time, you can set the value of the `nodeDrainTimeout` field accordingly, for example, `nodeDrainTimeout: 1m`.
====
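+
If you prefer a non-interactive alternative to `oc edit`, you can apply the same change with an `oc patch` command. The following sketch assumes that you want to allow draining for one minute:
+
[source,terminal]
----
$ oc patch nodepool <nodepool_name> --namespace <hosted_cluster_namespace> --type=merge --patch '{"spec":{"nodeDrainTimeout":"1m"}}'
----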
. Scale down the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=0
----
+
[NOTE]
====
After you scale down the data plane to zero, some pods in the control plane remain in the `Pending` status, and the hosted control plane stays up and running. If necessary, you can scale up the `NodePool` resource.
====
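+
To confirm the effect on the control plane, you can list the pods in the hosted control plane namespace. In this example, the namespace is assumed to follow the default `<hosted_cluster_namespace>-<hosted_cluster_name>` pattern, which might differ in your environment:
+
[source,terminal]
----
$ oc get pods --namespace <hosted_cluster_namespace>-<hosted_cluster_name>
----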
. Optional: Scale up the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=1
----
+
After you scale up the `NodePool` resource, wait a couple of minutes for the `NodePool` resource to become available in the `Ready` state.
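+
For example, you can watch the node pool status until the nodes report as ready by running a command similar to the following:
+
[source,terminal]
----
$ oc get nodepool <nodepool_name> --namespace <hosted_cluster_namespace> --watch
----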
.Verification
* Verify that the value for the `nodeDrainTimeout` field is greater than `0s` by running the following command:
+
[source,terminal]
----
$ oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o jsonpath='{.spec.nodeDrainTimeout}'
----
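+
The command prints the configured value. For example, if you set `nodeDrainTimeout: 1m` earlier in this procedure, the output is similar to the following:
+
.Example output
[source,terminal]
----
1m
----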