From 16db23beecd357452b06526b3df2d0108bd2d2d8 Mon Sep 17 00:00:00 2001
From: xJustin
Date: Thu, 12 Sep 2024 08:17:44 -0400
Subject: [PATCH] OSDOCS-11640 HCP 250 node scale

---
 _topic_maps/_topic_map_rosa.yml               |  2 +
 _topic_maps/_topic_map_rosa_hcp.yml           |  2 +
 modules/rosa-sdpolicy-instance-types.adoc     |  2 +-
 modules/sd-hcp-planning-cluster-maximums.adoc | 52 +++++++++++++++++++
 .../rosa-hcp-instance-types.adoc              | 10 ++--
 .../rosa-hcp-limits-scalability.adoc          | 24 +++++++++
 rosa_release_notes/rosa-release-notes.adoc    |  2 +-
 7 files changed, 87 insertions(+), 7 deletions(-)
 create mode 100644 modules/sd-hcp-planning-cluster-maximums.adoc
 create mode 100644 rosa_planning/rosa-hcp-limits-scalability.adoc

diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml
index 0a332c845f..eafc6da900 100644
--- a/_topic_maps/_topic_map_rosa.yml
+++ b/_topic_maps/_topic_map_rosa.yml
@@ -238,6 +238,8 @@ Topics:
   File: rosa-sts-ocm-role
 - Name: Limits and scalability
   File: rosa-limits-scalability
+- Name: ROSA with HCP limits and scalability
+  File: rosa-hcp-limits-scalability
 - Name: Planning your environment
   File: rosa-planning-environment
 - Name: Required AWS service quotas
diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml
index 9f240e1b2c..555bf581b5 100644
--- a/_topic_maps/_topic_map_rosa_hcp.yml
+++ b/_topic_maps/_topic_map_rosa_hcp.yml
@@ -201,6 +201,8 @@ Topics:
 #   File: rosa-sts-ocm-role
 # - Name: Limits and scalability
 #   File: rosa-limits-scalability
+#- Name: ROSA with HCP limits and scalability
+#  File: rosa-hcp-limits-scalability
 # - Name: Planning your environment
 #   File: rosa-planning-environment
 # - Name: Required AWS service quotas
diff --git a/modules/rosa-sdpolicy-instance-types.adoc b/modules/rosa-sdpolicy-instance-types.adoc
index d97fdf232c..7d1f5551a0 100644
--- a/modules/rosa-sdpolicy-instance-types.adoc
+++ b/modules/rosa-sdpolicy-instance-types.adoc
@@ -13,7 +13,7 @@ endif::[]
 = Instance types
 
 ifdef::rosa-with-hcp[]
-All {hcp-title} clusters require a minimum of 2 worker nodes. All {hcp-title} clusters support a maximum of 180 worker nodes. Shutting down the underlying infrastructure through the cloud provider console is unsupported and can lead to data loss.
+All {hcp-title} clusters require a minimum of 2 worker nodes. All {hcp-title} clusters support a maximum of 250 worker nodes. Shutting down the underlying infrastructure through the cloud provider console is unsupported and can lead to data loss.
 endif::rosa-with-hcp[]
 ifndef::rosa-with-hcp[]
 Single availability zone clusters require a minimum of 3 control plane nodes, 2 infrastructure nodes, and 2 worker nodes deployed to a single availability zone.
diff --git a/modules/sd-hcp-planning-cluster-maximums.adoc b/modules/sd-hcp-planning-cluster-maximums.adoc
new file mode 100644
index 0000000000..4b2f7c3f64
--- /dev/null
+++ b/modules/sd-hcp-planning-cluster-maximums.adoc
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: CONCEPT
+// Module included in the following assemblies:
+//
+// * rosa_planning/rosa-hcp-limits-scalability.adoc
+
+[id="tested-cluster-maximums-hcp-sd_{context}"]
+= {hcp-title} cluster maximums
+
+Consider the following tested object maximums when you plan a {hcp-title-first} cluster installation. The table specifies the maximum limits for each tested type in a {hcp-title} cluster.
+
+These guidelines are based on a cluster of 250 compute (also known as worker) nodes. For smaller clusters, the maximums are lower.
+
+
+.Tested cluster maximums
+[options="header",cols="50,50"]
+|===
+|Maximum type |4.x tested maximum
+
+|Number of pods ^[1]^
+|25,000
+
+|Number of pods per node
+|250
+
+|Number of pods per core
+|There is no default value
+
+|Number of namespaces ^[2]^
+|5,000
+
+|Number of pods per namespace ^[3]^
+|25,000
+
+|Number of services ^[4]^
+|10,000
+
+|Number of services per namespace
+|5,000
+
+|Number of back ends per service
+|5,000
+
+|Number of deployments per namespace ^[3]^
+|2,000
+|===
+[.small]
+--
+1. The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application.
+2. When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available.
+3. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements.
+4. Each service port and each service back end has a corresponding entry in `iptables`. The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data sent throughout the system.
+--
diff --git a/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc b/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc
index 0cef713ae7..b939869f64 100644
--- a/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc
+++ b/rosa_architecture/rosa_policy_service_definition/rosa-hcp-instance-types.adoc
@@ -7,15 +7,15 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
 toc::[]
 
 {hcp-title} offers the following worker node instance types and sizes:
 
+[NOTE]
+====
+Currently, {hcp-title} supports a maximum of 250 worker nodes.
+====
+
 include::modules/rosa-sdpolicy-am-aws-compute-types.adoc[leveloffset=+1]
 
 include::modules/rosa-sdpolicy-am-aws-compute-types-graviton.adoc[leveloffset=+1]
 
-[NOTE]
-====
-Currently, {hcp-title} supports a maximum of 180 worker nodes.
-====
-
 [role="_additional-resources"]
 .Additional Resources
diff --git a/rosa_planning/rosa-hcp-limits-scalability.adoc b/rosa_planning/rosa-hcp-limits-scalability.adoc
new file mode 100644
index 0000000000..6204a69286
--- /dev/null
+++ b/rosa_planning/rosa-hcp-limits-scalability.adoc
@@ -0,0 +1,24 @@
+:_mod-docs-content-type: ASSEMBLY
+include::_attributes/attributes-openshift-dedicated.adoc[]
+
+[id="rosa-hcp-limits-scalability"]
+= {hcp-title} limits and scalability
+:context: rosa-hcp-limits-scalability
+
+toc::[]
+
+This document details the tested cluster maximums for {hcp-title-first} clusters. For {hcp-title} clusters, the control plane is fully managed in the service AWS account and scales automatically with the cluster.
+
+include::modules/sd-hcp-planning-cluster-maximums.adoc[leveloffset=+1]
+
+
+[id="next-steps_rosa-hcp-limits-scalability"]
+== Next steps
+
+* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Planning your environment]
+
+[role="_additional-resources"]
+[id="additional-resources_rosa-hcp-limits-scalability"]
+== Additional resources
+
+* xref:../rosa_cluster_admin/rosa-cluster-notifications.adoc#managed-cluster-notification-view-hcc_rosa-cluster-notifications[Viewing cluster notifications using the {hybrid-console}]
diff --git a/rosa_release_notes/rosa-release-notes.adoc b/rosa_release_notes/rosa-release-notes.adoc
index 0c8ddbc18d..a6d94c7b95 100644
--- a/rosa_release_notes/rosa-release-notes.adoc
+++ b/rosa_release_notes/rosa-release-notes.adoc
@@ -16,7 +16,7 @@ toc::[]
 
 [id="rosa-q3-2024_{context}"]
 === Q3 2024
 
-* **{hcp-title} cluster node limit update.** {hcp-title} clusters can now scale to 180 worker nodes. This is an increase from the previous limit of 90 nodes. For more information, see xref:../rosa_planning/rosa-limits-scalability.html[Limits and scalability].
+* **{hcp-title} cluster node limit update.** {hcp-title} clusters can now scale to 250 worker nodes. This is an increase from the previous limit of 180 nodes. For more information, see xref:../rosa_planning/rosa-hcp-limits-scalability.adoc#tested-cluster-maximums-hcp-sd_rosa-hcp-limits-scalability[ROSA with HCP limits and scalability].
 
 * **IMDSv2 support in {hcp-title}.** You can now enforce the use of the IMDSv2 endpoint for default machine pool worker nodes on new {hcp-title} clusters and for new machine pools on existing clusters. For more information, see xref:../rosa_hcp/terraform/rosa-hcp-creating-a-cluster-quickly-terraform.adoc#rosa-hcp-creating-a-cluster-quickly-terraform[Creating a default ROSA cluster using Terraform].
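
A minimal usage sketch to accompany the new 250-node maximum, not taken from the patch above: worker capacity on a {hcp-title} cluster is adjusted per machine pool with the `rosa` CLI, and the 250-node limit applies to the total worker count across all machine pools. `<cluster_name>`, `<machinepool_name>`, and the replica values are placeholders.

[source,terminal]
----
# List the machine pools on the cluster and their current replica counts.
$ rosa list machinepools --cluster=<cluster_name>

# Scale one machine pool to a fixed number of replicas.
$ rosa edit machinepool --cluster=<cluster_name> <machinepool_name> --replicas=<replica_count>

# Alternatively, let the cluster autoscaler grow the pool, keeping the
# cluster-wide worker total at or below the 250-node maximum.
$ rosa edit machinepool --cluster=<cluster_name> <machinepool_name> \
  --enable-autoscaling --min-replicas=2 --max-replicas=<max_replicas>
----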