openshift-docs/modules/learning-deploying-application-scaling-pod-autoscaling.adoc
// Module included in the following assemblies:
//
// * rosa_learning/deploying_application_workshop/learning-deploying-application-scaling.adoc
:_mod-docs-content-type: PROCEDURE
[id="learning-deploying-application-scaling-pod-autoscaling_{context}"]
= Pod autoscaling

[role="_abstract"]
{product-title} offers a link:https://docs.redhat.com/en/documentation/openshift_container_platform/latest/html/nodes/working-with-pods#nodes-pods-autoscaling[Horizontal Pod Autoscaler] (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.

.Prerequisites
* An active {product-title} cluster
* A deployed OSToy application

.Procedure
. From the navigational menu of the web UI, select *Pod Auto Scaling*.
+
image::deploy-scale-hpa-menu.png[HPA Menu]
. Create the HPA by running the following command:
+
[source,terminal]
----
$ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
----
+
This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the `ostoy-microservice` deployment. The HPA increases and decreases the number of replicas to keep the average CPU utilization across all pods at 80% of the requested CPU (40 millicores).
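+
If you prefer a declarative workflow, the same autoscaler can be expressed as a manifest and created with `oc apply -f`. The following is a sketch, not output captured from this cluster; it assumes the `autoscaling/v2` API version that recent `oc` releases create by default:
+
[source,yaml]
----
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ostoy-microservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ostoy-microservice
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
----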
. On the *Pod Auto Scaling > Horizontal Pod Autoscaling* page, select *Increase the load*.
+
[IMPORTANT]
====
Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is expected. Click *Increase the load* only once. For more information about the process, see the link:https://github.com/openshift-cs/ostoy/blob/master/microservice/app.js#L32[microservice's GitHub repository].
====
+
After a few minutes, the new pods appear on the page, represented by colored boxes.
+
[NOTE]
====
The page can experience lag.
====
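+
Optionally, you can watch the autoscaler react while the load test runs. The following command is a sketch; it assumes the HPA named `ostoy-microservice` that you created earlier, and `--watch` streams updates to the replica count as scaling occurs:
+
[source,terminal]
----
$ oc get hpa ostoy-microservice --watch
----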

.Verification
Verify the pod count by using one of the following methods:

* In the OSToy application's web UI, check the remote pods box:
+
image::deploy-scale-hpa-mainpage.png[HPA Main]
+
Because the deployment initially has only one pod, increasing the workload should trigger the creation of additional pods.
+
* In the CLI, run the following command:
+
[source,terminal]
----
$ oc get pods --field-selector=status.phase=Running | grep microservice
----
+
.Example output
+
[source,terminal]
----
ostoy-microservice-79894f6945-cdmbd   1/1     Running   0          3m14s
ostoy-microservice-79894f6945-mgwk7   1/1     Running   0          4h24m
ostoy-microservice-79894f6945-q925d   1/1     Running   0          3m14s
----
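* Optionally, inspect the HPA object itself. The following command is a sketch that assumes the HPA name `ostoy-microservice`; its output includes the current and target CPU utilization and a list of scaling events:
+
[source,terminal]
----
$ oc describe hpa ostoy-microservice
----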
* You can also verify autoscaling from the {cluster-manager}:
+
. In the OpenShift web console navigational menu, click *Observe > Dashboards*.
. In the dashboard, select *Kubernetes / Compute Resources / Namespace (Pods)* and your namespace *ostoy*.
+
image::deploy-scale-hpa-metrics.png[Select metrics]
+
. A graph appears that shows your CPU and memory resource usage. The top graph shows recent CPU consumption per pod, and the lower graph shows memory usage. The callouts in the graph indicate the following:
.. The load increased (A).
.. Two new pods were created (B and C).
.. The thickness of each line represents CPU consumption and indicates which pods handled more load.
.. The load decreased (D), and the pods were deleted.
+
image::deploy-scale-metrics.png[Select metrics]
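+
The scale-up (A, B, C) and scale-down (D) callouts in the graph can also be cross-checked from the CLI. The following command is a sketch; `SuccessfulRescale` is the event reason that the HPA controller typically emits when it changes the replica count:
+
[source,terminal]
----
$ oc get events | grep -i successfulrescale
----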