// Module included in the following assemblies:
//
// * hosted-control-planes/hcp-prepare/hcp-sizing-guidance.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-sizing-calculation_{context}"]
= Sizing calculation example

This example provides sizing guidance for the following scenario:

* Three bare-metal workers that are labeled as `hypershift.openshift.io/control-plane` nodes
* A `maxPods` value set to 500
* An expected API rate of medium, or about 1000 QPS, according to the load-based limits

.Limit inputs
|===
| Limit description | Server 1 | Server 2

| Number of vCPUs on worker node
| 64
| 128

| Memory on worker node (GiB)
| 128
| 256

| Maximum pods per worker
| 500
| 500

| Number of workers used to host control planes
| 3
| 3

| Maximum QPS target rate (API requests per second)
| 1000
| 1000
|===

.Sizing calculation example
|===
| Calculated values based on worker node size and API rate | Server 1 | Server 2 | Calculation notes

| Maximum {hcp} per worker based on vCPU requests
| 12.8
| 25.6
| Number of worker vCPUs ÷ 5 total vCPU requests per hosted control plane

| Maximum {hcp} per worker based on vCPU usage
| 5.4
| 10.7
| Number of worker vCPUs ÷ (2.9 measured idle vCPU usage + (QPS target rate ÷ 1000) × 9.0 measured vCPU usage per 1000 QPS increase)

| Maximum {hcp} per worker based on memory requests
| 7.1
| 14.2
| Worker memory GiB ÷ 18 GiB total memory request per hosted control plane

| Maximum {hcp} per worker based on memory usage
| 9.4
| 18.8
| Worker memory GiB ÷ (11.1 measured idle memory usage + (QPS target rate ÷ 1000) × 2.5 measured memory usage per 1000 QPS increase)

| Maximum {hcp} per worker based on the per-node pod limit
| 6.7
| 6.7
| 500 `maxPods` ÷ 75 pods per hosted control plane

| Minimum of previously mentioned maximums
| 5.4
| 6.7
|

|
| vCPU limiting factor
| `maxPods` limiting factor
|

| Maximum number of {hcp} within a management cluster
| 16
| 20
| Minimum of previously mentioned maximums × 3 control-plane workers
|===
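
For reference, the following Python sketch reproduces the calculations in the preceding table. The per-hosted-control-plane constants, 5 vCPUs and 18 GiB requested, 2.9 vCPUs and 11.1 GiB of idle usage, 9.0 vCPUs and 2.5 GiB per 1000 QPS, and 75 pods, are taken from the calculation notes; the function and names are illustrative only and are not part of any product tooling.

[source,python]
----
import math

# Measured per-hosted-control-plane constants from the preceding tables.
VCPU_REQUEST = 5.0          # total vCPU requests per hosted control plane
IDLE_VCPU_USAGE = 2.9       # measured idle vCPU usage
VCPU_PER_1000_QPS = 9.0     # measured vCPU usage per 1000 QPS increase
MEMORY_REQUEST_GIB = 18.0   # total memory request per hosted control plane (GiB)
IDLE_MEMORY_GIB = 11.1      # measured idle memory usage (GiB)
MEMORY_PER_1000_QPS = 2.5   # measured memory usage per 1000 QPS increase (GiB)
PODS_PER_HCP = 75           # pods per hosted control plane

def max_hcp(vcpus, memory_gib, max_pods, qps, control_plane_workers):
    """Return the per-worker limits and the cluster-wide maximum."""
    limits = {
        "vcpu_requests": vcpus / VCPU_REQUEST,
        "vcpu_usage": vcpus / (IDLE_VCPU_USAGE + (qps / 1000) * VCPU_PER_1000_QPS),
        "memory_requests": memory_gib / MEMORY_REQUEST_GIB,
        "memory_usage": memory_gib / (IDLE_MEMORY_GIB + (qps / 1000) * MEMORY_PER_1000_QPS),
        "pod_limit": max_pods / PODS_PER_HCP,
    }
    # The binding constraint is the smallest per-worker limit, scaled by the
    # number of workers that host control planes.
    per_worker = min(limits.values())
    return limits, math.floor(per_worker * control_plane_workers)

# Server 1 (64 vCPUs, 128 GiB): vCPU usage limits it to 5.4 per worker, 16 total.
# Server 2 (128 vCPUs, 256 GiB): maxPods limits it to 6.7 per worker, 20 total.
for vcpus, memory in ((64, 128), (128, 256)):
    limits, total = max_hcp(vcpus, memory, max_pods=500, qps=1000, control_plane_workers=3)
    print(vcpus, memory, {k: round(v, 1) for k, v in limits.items()}, total)
----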

.{hcp-capital} capacity metrics
|===
| Name | Description

| `mce_hs_addon_request_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} the cluster can host based on a highly available {hcp} resource request.

| `mce_hs_addon_low_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} the cluster can host if all {hcp} make around 50 QPS to the cluster's Kube API server.

| `mce_hs_addon_medium_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} the cluster can host if all {hcp} make around 1000 QPS to the cluster's Kube API server.

| `mce_hs_addon_high_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} the cluster can host if all {hcp} make around 2000 QPS to the cluster's Kube API server.

| `mce_hs_addon_average_qps_based_hcp_capacity_gauge`
| Estimated maximum number of {hcp} the cluster can host based on the existing average QPS of {hcp}. If you do not have any active {hcp}, you can expect low QPS.
|===
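
If your management cluster exposes these gauges through a Prometheus-compatible endpoint, you can read them with a standard instant query. The following sketch assumes a reachable Thanos Querier or Prometheus route and a bearer token with permission to read platform metrics; both values are placeholders that you must replace.

[source,python]
----
import requests

# Placeholder values: substitute your management cluster's Thanos Querier or
# Prometheus route and a bearer token that can read platform metrics.
PROMETHEUS_URL = "https://thanos-querier-openshift-monitoring.apps.example.com"
TOKEN = "REPLACE_ME"

def query_gauge(metric_name):
    """Read the current value of a capacity gauge with a Prometheus instant query."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": metric_name},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        # Each result value is a [<unix timestamp>, "<value>"] pair.
        print(metric_name, "=", result["value"][1])

for name in (
    "mce_hs_addon_request_based_hcp_capacity_gauge",
    "mce_hs_addon_medium_qps_based_hcp_capacity_gauge",
):
    query_gauge(name)
----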