diff --git a/modules/openshift-cluster-maximums-environment.adoc b/modules/openshift-cluster-maximums-environment.adoc
index ac8659bf4f..31427f59dc 100644
--- a/modules/openshift-cluster-maximums-environment.adoc
+++ b/modules/openshift-cluster-maximums-environment.adoc
@@ -11,40 +11,48 @@ AWS cloud platform:
 |===
 | Node |Flavor |vCPU |RAM(GiB) |Disk type|Disk size(GiB)/IOS |Count |Region

-| Master/etcd footnoteref:[masteretcdnodeaws,io1 disks with 3000 IOPS are used for master/etcd nodes as etcd is I/O intensive and latency sensitive.]
+| Master/etcd ^[1]^
 | r5.4xlarge
 | 16
 | 128
-| io1
+| io1
 | 220 / 3000
 | 3
 | us-west-2

-| Infra footnoteref:[infranodesaws,Infra nodes are used to host Monitoring, Ingress and Registry components to make sure they have enough resources to run at large scale.]
+| Infra ^[2]^
 | m5.12xlarge
 | 48
 | 192
-| gp2
-| 100
+| gp2
+| 100
 | 3
 | us-west-2

-| Workload footnoteref:[workloadnode,Workload node is dedicated to run performance and scalability workload generators.]
+| Workload ^[3]^
 | m5.4xlarge
 | 16
 | 64
-| gp2
-| 500 footnoteref:[disksize,Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.]
+| gp2
+| 500 ^[4]^
 | 1
 | us-west-2

 | Worker
-| m5.2xlarge
+| m5.2xlarge
 | 8
 | 32
-| gp2
-| 100
-| 3/25/250/500/2000 footnoteref:[nodescaleaws,Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.]
+| gp2
+| 100
+| 3/25/250/500/2000 ^[5]^
 | us-west-2
 |===
+[.small]
+--
+1. io1 disks with 3000 IOPS are used for master/etcd nodes because etcd is I/O intensive and latency sensitive.
+2. Infra nodes are used to host the Monitoring, Ingress, and Registry components so that those components have enough resources to run at large scale.
+3. The workload node is dedicated to running performance and scalability workload generators.
+4. A larger disk size is used so that there is enough space to store the large amount of data that is collected during the performance and scalability test run.
+5. The cluster is scaled in iterations, and the performance and scalability tests are executed at the specified node counts.
+--

diff --git a/modules/openshift-cluster-maximums-major-releases.adoc b/modules/openshift-cluster-maximums-major-releases.adoc
index b8f430ece5..73a30040da 100644
--- a/modules/openshift-cluster-maximums-major-releases.adoc
+++ b/modules/openshift-cluster-maximums-major-releases.adoc
@@ -16,19 +16,19 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A
 | 2,000
 | 2,000

-| Number of Pods footnoteref:[numberofpodsmajorrelease,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
+| Number of Pods ^[1]^
 | 150,000
 | 150,000

 | Number of Pods per node
 | 250
-| 500 footnoteref:[podspernodemajorrelease,This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with a `maxPods` set to `500` using a custom KubeletConfig. If you need 500 user pods, you need a `hostPrefix` of `22` because there are 10-15 system Pods already running on the node. The maximum number of Pods with attached Persistent Volume Claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.]
+| 500 ^[2]^

 | Number of Pods per core
 | There is no default value.
 | There is no default value.

-| Number of Namespaces footnoteref:[numberofnamepacesmajorrelease,When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.]
+| Number of Namespaces ^[3]^
 | 10,000
 | 10,000

@@ -36,16 +36,11 @@ Tested Cloud Platforms for {product-title} 4.x: Amazon Web Services, Microsoft A
 | 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy
 | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy

-| Number of Pods per namespace footnoteref:[objectpernamespacemajorrelease,There are
-a number of control loops in the system that must iterate over all objects
-in a given namespace as a reaction to some changes in state. Having a large
-number of objects of a given type in a single namespace can make those loops
-expensive and slow down processing given state changes. The limit assumes that
-the system has enough CPU, memory, and disk to satisfy the application requirements.]
+| Number of Pods per namespace ^[4]^
 | 25,000
 | 25,000

-| Number of Services footnoteref:[servicesandendpointsmajorrelease,Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
+| Number of Services ^[5]^
 | 10,000
 | 10,000

@@ -57,8 +52,16 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
 | 5,000
 | 5,000

-| Number of Deployments per Namespace footnoteref:[objectpernamespacemajorrelease]
+| Number of Deployments per Namespace ^[4]^
 | 2,000
 | 2,000
 |===
+[.small]
+--
+1. The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.
+2. This was tested on a cluster with 100 worker nodes with 500 Pods per worker node. The default `maxPods` is still 250. To get to 500 `maxPods`, the cluster must be created with `maxPods` set to `500` using a custom KubeletConfig. If you need 500 user Pods, you need a `hostPrefix` of `22` because there are 10-15 system Pods already running on each node. The maximum number of Pods with attached Persistent Volume Claims (PVCs) depends on the storage backend from which the PVCs are allocated. In our tests, only OpenShift Container Storage v4 (OCS v4) was able to satisfy the number of Pods per node discussed in this document.
+3. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
+4. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down the processing of those state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
+5. Each Service port and each Service back-end has a corresponding entry in iptables. The number of back-ends of a given Service impacts the size of the Endpoints objects, which in turn impacts the size of the data that is sent across the system.
+--

diff --git a/modules/openshift-cluster-maximums.adoc b/modules/openshift-cluster-maximums.adoc
index 53fc201a07..615f7e1819 100644
--- a/modules/openshift-cluster-maximums.adoc
+++ b/modules/openshift-cluster-maximums.adoc
@@ -16,7 +16,7 @@
 | 500
 | 2,000

-| Number of Pods footnoteref:[numberofpods,The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.]
+| Number of Pods ^[1]^
 | 150,000
 | 150,000
 | 62,500
@@ -37,7 +37,7 @@
 | There is no default value.
 | There is no default value.

-| Number of Namespaces footnoteref:[numberofnamepaces, When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentaion, is highly recommended to free etcd storage.]
+| Number of Namespaces ^[2]^
 | 10,000
 | 10,000
 | 10,000
@@ -51,19 +51,14 @@
 | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy
 | 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy

-| Number of Pods per Namespace footnoteref:[objectpernamespace,There are
-a number of control loops in the system that must iterate over all objects
-in a given namespace as a reaction to some changes in state. Having a large
-number of objects of a given type in a single namespace can make those loops
-expensive and slow down processing given state changes. The limit assumes that
-the system has enough CPU, memory, and disk to satisfy the application requirements.]
+| Number of Pods per Namespace ^[3]^
 | 25,000
 | 25,000
 | 25,000
 | 25,000
 | 25,000

-| Number of Services footnoteref:[servicesandendpoints,Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system.]
+| Number of Services ^[4]^
 | 10,000
 | 10,000
 | 10,000
@@ -84,7 +79,7 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
 | 5,000
 | 5,000

-| Number of Deployments per Namespace footnoteref:[objectpernamespace]
+| Number of Deployments per Namespace ^[3]^
 | 2,000
 | 2,000
 | 2,000
@@ -92,6 +87,12 @@ the system has enough CPU, memory, and disk to satisfy the application requireme
 | 2,000
 |===
+[.small]
+--
+1. The Pod count displayed here is the number of test Pods. The actual number of Pods depends on the application’s memory, CPU, and storage requirements.
+2. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage.
+3. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down the processing of those state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
+4. Each service port and each service back-end has a corresponding entry in iptables. The number of back-ends of a given service impacts the size of the endpoints objects, which in turn impacts the size of the data that is sent across the system.
+--

-In {product-title} {product-version}, half of a CPU core (500 millicore) is
-reserved by the system compared to {product-title} 3.11 and previous versions.
+In {product-title} {product-version}, half of a CPU core (500 millicore) is reserved by the system compared to {product-title} 3.11 and previous versions.
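Footnote 2 of the major-releases table states that reaching 500 `maxPods` requires a custom KubeletConfig at cluster creation time and a `hostPrefix` of `22`. A minimal sketch of that configuration follows; the resource name `set-max-pods`, the `custom-kubelet: large-pods` pool label, and the cluster network CIDR shown are illustrative assumptions, not values taken from the tested clusters.

[source,yaml]
----
# Sketch of a KubeletConfig that raises maxPods to 500.
# The machineConfigPoolSelector label is an assumed example; it must match a
# label that exists on the MachineConfigPool the worker nodes belong to.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods
  kubeletConfig:
    maxPods: 500
----

The `hostPrefix` is set in the `networking` stanza of `install-config.yaml`; a value of `22` gives each node a /22 Pod CIDR, which leaves room for 500 user Pods plus the 10-15 system Pods mentioned in the footnote.

[source,yaml]
----
# Sketch of the install-config.yaml networking stanza with hostPrefix: 22.
# The clusterNetwork CIDR shown here is the documented default, assumed for
# illustration only.
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 22
----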