diff --git a/hosted_control_planes/hcp-managing.adoc b/hosted_control_planes/hcp-managing.adoc
index 22031369b6..a135e5a699 100644
--- a/hosted_control_planes/hcp-managing.adoc
+++ b/hosted_control_planes/hcp-managing.adoc
@@ -18,5 +18,5 @@ include::modules/hosted-control-planes-pause-reconciliation.adoc[leveloffset=+1]
 //using service-level DNS for control plane services
 include::modules/hosted-control-planes-metrics-sets.adoc[leveloffset=+1]
 //automated machine management
+include::modules/scale-down-data-plane.adoc[leveloffset=+1]
 include::modules/delete-hosted-cluster.adoc[leveloffset=+1]
-
diff --git a/modules/scale-down-data-plane.adoc b/modules/scale-down-data-plane.adoc
new file mode 100644
index 0000000000..8fb87224c4
--- /dev/null
+++ b/modules/scale-down-data-plane.adoc
@@ -0,0 +1,90 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-managing.adoc
+
+:_content-type: PROCEDURE
+[id="scale-down-data-plane_{context}"]
+= Scaling down the data plane to zero
+
+If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.
+
+[NOTE]
+====
+Ensure that you are prepared to scale down the data plane to zero, because the workloads on the worker nodes disappear after you scale down.
+====
+
+.Procedure
+
+. Set the `kubeconfig` file to access the hosted cluster by running the following command:
++
+[source,terminal]
+----
+$ export KUBECONFIG=<install_path>/auth/kubeconfig
+----
+
+. Get the name of the `NodePool` resource associated with your hosted cluster by running the following command:
++
+[source,terminal]
+----
+$ oc get nodepool --namespace <hosted_cluster_namespace>
+----
+
+. Optional: To prevent the pods from draining, add the `nodeDrainTimeout` field in the `NodePool` resource by running the following command:
++
+[source,terminal]
+----
+$ oc edit NodePool <nodepool_name> -o yaml --namespace <hosted_cluster_namespace>
+----
++
+.Example output
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1alpha1
+kind: NodePool
+metadata:
+# ...
+  name: nodepool-1
+  namespace: clusters
+# ...
+spec:
+  arch: amd64
+  clusterName: clustername <1>
+  management:
+    autoRepair: false
+    replace:
+      rollingUpdate:
+        maxSurge: 1
+        maxUnavailable: 0
+      strategy: RollingUpdate
+    upgradeType: Replace
+  nodeDrainTimeout: 0s <2>
+# ...
+----
+<1> Defines the name of your hosted cluster.
+<2> Specifies the total amount of time that the controller spends draining a node. By default, the `nodeDrainTimeout: 0s` setting blocks the node draining process.
++
+[NOTE]
+====
+To allow the node draining process to continue for a certain period of time, you can set the value of the `nodeDrainTimeout` field accordingly, for example, `nodeDrainTimeout: 1m`.
+====
+
+. Scale down the `NodePool` resource associated with your hosted cluster by running the following command:
++
+[source,terminal]
+----
+$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=0
+----
++
+[NOTE]
+====
+After scaling down the data plane to zero, some pods in the control plane stay in the `Pending` status and the hosted control plane stays up and running. If necessary, you can scale up the `NodePool` resource.
+====
+
+. Optional: Scale up the `NodePool` resource associated with your hosted cluster by running the following command:
++
+[source,terminal]
+----
+$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=1
+----
++
+After rescaling the `NodePool` resource, wait for a couple of minutes for the `NodePool` resource to become available in a `Ready` state.
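
A possible non-interactive alternative to the `oc edit` step in the module above is a merge patch. This is only a sketch, not part of the patch itself; the `<nodepool_name>` and `<hosted_cluster_namespace>` placeholders are illustrative and match the placeholders used in the procedure:

[source,terminal]
----
# Sketch: set spec.nodeDrainTimeout on the NodePool without opening an editor
$ oc patch nodepool <nodepool_name> \
    --namespace <hosted_cluster_namespace> \
    --type=merge \
    --patch '{"spec": {"nodeDrainTimeout": "0s"}}'
----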
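
As a rough verification sketch after the scale-down step (again, not part of the patch and using the same assumed placeholders), you could check that the `NodePool` resource reports zero nodes before relying on the freed capacity:

[source,terminal]
----
# Sketch: confirm the NodePool node count has dropped after --replicas=0
$ oc get nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
----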