// Module included in the following assemblies:
//
// * scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc

[id="cluster-maximums-environment_{context}"]
= {product-title} environment and configuration on which the cluster maximums are tested

== AWS cloud platform

[options="header",cols="8*"]
|===
| Node |Flavor |vCPU |RAM (GiB) |Disk type |Disk size (GiB)/IOPS |Count |Region

| Control plane/etcd ^[1]^
| r5.4xlarge
| 16
| 128
| gp3
| 220
| 3
| us-west-2

| Infra ^[2]^
| m5.12xlarge
| 48
| 192
| gp3
| 100
| 3
| us-west-2

| Workload ^[3]^
| m5.4xlarge
| 16
| 64
| gp3
| 500 ^[4]^
| 1
| us-west-2

| Compute
| m5.2xlarge
| 8
| 32
| gp3
| 100
| 3/25/250/500 ^[5]^
| us-west-2
|===
[.small]
--
1. gp3 disks with a baseline performance of 3000 IOPS and 125 MiB per second are used for control plane/etcd nodes because etcd is latency sensitive. gp3 volumes do not use burst performance.
2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
3. The workload node is dedicated to running performance and scalability workload generators.
4. A larger disk size is used so that there is enough space to store the large amount of data that is collected during the performance and scalability test run.
5. The cluster is scaled in iterations, and the performance and scalability tests are executed at the specified node counts.
--
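
The AWS configuration in the preceding table is set at installation time for the control plane and compute machines. The following `install-config.yaml` excerpt is a minimal sketch that approximates that configuration; the base domain, cluster name, and pull secret are placeholders, and the infra and workload machines are added after installation by using machine sets, so they are not shown here.

[source,yaml]
----
apiVersion: v1
baseDomain: example.com        # placeholder base domain
metadata:
  name: perf-scale-cluster     # placeholder cluster name
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: r5.4xlarge         # control plane/etcd flavor from the table
      rootVolume:
        type: gp3
        size: 220              # GiB
compute:
- name: worker
  replicas: 3                  # scaled through 3/25/250/500 nodes during the test iterations
  platform:
    aws:
      type: m5.2xlarge         # compute flavor from the table
      rootVolume:
        type: gp3
        size: 100              # GiB
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'   # placeholder pull secret
----
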
== {ibm-power-title} platform

[options="header",cols="6*"]
|===
| Node |vCPU |RAM (GiB) |Disk type |Disk size (GiB)/IOPS |Count

| Control plane/etcd ^[1]^
| 16
| 32
| io1
| 120 / 10 IOPS per GiB
| 3

| Infra ^[2]^
| 16
| 64
| gp2
| 120
| 2

| Workload ^[3]^
| 16
| 256
| gp2
| 120 ^[4]^
| 1

| Compute
| 16
| 64
| gp2
| 120
| 2 to 100 ^[5]^
|===
[.small]
--
1. io1 disks of 120 GiB provisioned with 10 IOPS per GiB are used for control plane/etcd nodes because etcd is I/O intensive and latency sensitive.
2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
3. The workload node is dedicated to running performance and scalability workload generators.
4. A larger disk size is used so that there is enough space to store the large amount of data that is collected during the performance and scalability test run.
5. The cluster is scaled in iterations.
--
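
In both the AWS and {ibm-power-title} environments, the infra nodes host the Monitoring, Ingress, and Registry components. As one example of how such placement is expressed, the monitoring stack can be pinned to infra nodes with a `cluster-monitoring-config` config map similar to the following sketch; this is illustrative and is not necessarily the exact configuration used for these tests.

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
----
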
== {ibm-z-title} platform

[options="header",cols="6*"]
|===
| Node |vCPU ^[4]^ |RAM (GiB) ^[5]^ |Disk type |Disk size (GiB)/LCU |Count

| Control plane/etcd ^[1,2]^
| 8
| 32
| ds8k
| 300 / LCU 1
| 3

| Compute ^[1,3]^
| 8
| 32
| ds8k
| 150 / LCU 2
| 4 nodes (scaled to 100/250/500 pods per node)
|===
[.small]
--
1. Nodes are distributed between two logical control units (LCUs) to optimize the disk I/O load of the control plane/etcd nodes, because etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads.
2. Four compute nodes are used for the tests, which run several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate whether pods can be instantiated. Next, a network- and CPU-demanding client/server workload was used to evaluate the stability of the system under stress. Client and server pods were deployed in pairs, and each pair was spread over two compute nodes.
3. No separate workload node was used. The workload simulates a microservice workload between two compute nodes.
4. The physical number of processors used is six Integrated Facilities for Linux (IFLs).
5. The total physical memory used is 512 GiB.
--
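
The pairwise client/server placement described in the notes above can be expressed with pod anti-affinity so that each client pod is scheduled on a different compute node than its paired server pod. The following deployment is an illustrative sketch only; the names, labels, and image are hypothetical and do not reflect the exact workload generator used in the tests.

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workload-client                  # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: workload-client
  template:
    metadata:
      labels:
        app: workload-client
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: workload-server     # hypothetical label on the paired server pod
            topologyKey: kubernetes.io/hostname
      containers:
      - name: client
        image: registry.example.com/perf/client:latest   # hypothetical image
----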