Merge pull request #65990 from openshift-cherrypick-robot/cherry-pick-63784-to-enterprise-4.14
@@ -18,5 +18,5 @@ include::modules/hosted-control-planes-pause-reconciliation.adoc[leveloffset=+1]
//using service-level DNS for control plane services
include::modules/hosted-control-planes-metrics-sets.adoc[leveloffset=+1]
//automated machine management
include::modules/scale-down-data-plane.adoc[leveloffset=+1]
include::modules/delete-hosted-cluster.adoc[leveloffset=+1]
90
modules/scale-down-data-plane.adoc
Normal file
@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-managing.adoc

:_content-type: PROCEDURE
[id="scale-down-data-plane_{context}"]
= Scaling down the data plane to zero

If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost.

[NOTE]
====
Ensure that you are prepared to scale down the data plane to zero, because the workloads on the worker nodes are removed when you scale down.
====

.Procedure

. Set the `kubeconfig` file to access the hosted cluster by running the following command:
+
[source,terminal]
----
$ export KUBECONFIG=<install_directory>/auth/kubeconfig
----
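+
Optionally, you can confirm that the `kubeconfig` file points to the intended hosted cluster by printing the API server address:
+
[source,terminal]
----
$ oc cluster-info
----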

. Get the name of the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get nodepool --namespace <hosted_cluster_namespace>
----
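+
The output resembles the following example, which reuses the `nodepool-1` node pool and `clustername` cluster from the edit example later in this procedure; the exact columns and values depend on your cluster and version:
+
.Example output
[source,terminal]
----
NAME         CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION
nodepool-1   clustername   2               2               False         False        4.14.0
----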

. Optional: To prevent the pods from draining, add the `nodeDrainTimeout` field to the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc edit NodePool <nodepool_name> -o yaml --namespace <hosted_cluster_namespace>
----
+
.Example output
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
# ...
  name: nodepool-1
  namespace: clusters
# ...
spec:
  arch: amd64
  clusterName: clustername <1>
  management:
    autoRepair: false
    replace:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
      strategy: RollingUpdate
    upgradeType: Replace
  nodeDrainTimeout: 0s <2>
# ...
----
<1> Defines the name of your hosted cluster.
<2> Specifies the total amount of time that the controller spends draining a node. By default, the `nodeDrainTimeout: 0s` setting blocks the node draining process.
+
[NOTE]
====
To allow the node draining process to continue for a certain period of time, you can set the value of the `nodeDrainTimeout` field accordingly, for example, `nodeDrainTimeout: 1m`.
====
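+
As a non-interactive alternative to `oc edit`, you can apply the same change with a merge patch; the timeout value `1m` here is only an example:
+
[source,terminal]
----
$ oc patch nodepool <nodepool_name> --namespace <hosted_cluster_namespace> --type=merge --patch '{"spec":{"nodeDrainTimeout":"1m"}}'
----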

. Scale down the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=0
----
+
[NOTE]
====
After scaling down the data plane to zero, some pods in the control plane stay in the `Pending` status and the hosted control plane stays up and running. If necessary, you can scale up the `NodePool` resource.
====
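+
To confirm that the scale-down is complete, you can list the node pool again and verify that the desired and current node counts report `0`:
+
[source,terminal]
----
$ oc get nodepool --namespace <hosted_cluster_namespace>
----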

. Optional: Scale up the `NodePool` resource associated with your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=1
----
+
After rescaling the `NodePool` resource, wait a couple of minutes for the `NodePool` resource to become available in a `Ready` state.
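+
For example, you can watch the node pool status until the nodes report as ready by running the following command, which streams updates until you interrupt it:
+
[source,terminal]
----
$ oc get nodepool --namespace <hosted_cluster_namespace> --watch
----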