
Merge pull request #46111 from openshift-cherrypick-robot/cherry-pick-44281-to-enterprise-4.11

[enterprise-4.11] OSDOCS-3490: add supported AWS instance types for OSD
Author: Brandi McElveen Munilla
Date: 2022-05-27 15:28:39 -04:00
Committed by: GitHub

Approximately 1 vCPU core and 1 GiB of memory are reserved on each worker node and removed from allocatable resources. This is necessary to run link:https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved[processes required by the underlying platform]. This includes system daemons such as udev, kubelet, container runtime, and so on, and also accounts for kernel reservations. {OCP} core systems such as audit log aggregation, metrics collection, DNS, image registry, SDN, and so on might consume additional allocatable resources to maintain the stability and maintainability of the cluster. The additional resources consumed might vary based on usage.
====
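To see how these reservations affect a given node, you can compare the `Capacity` and `Allocatable` values that the node reports. The following is a minimal check; replace `<node_name>` with the name of any worker node:

[source,terminal]
----
$ oc describe node <node_name>
----

The difference between the `cpu` and `memory` values under `Capacity:` and `Allocatable:` in the output reflects the reservations described above.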
[id="aws-compute-types_{context}"]
== AWS compute types
[id="aws-compute-types-ccs_{context}"]
== AWS compute types for Customer Cloud Subscription clusters
{product-title} offers the following worker node types and sizes on AWS:
.General purpose
[%collapsible]
====
- m5.xlarge (4 vCPU, 16 GiB)
- m5.2xlarge (8 vCPU, 32 GiB)
- m5.4xlarge (16 vCPU, 64 GiB)
- m5.8xlarge (32 vCPU, 128 GiB)
- m5.12xlarge (48 vCPU, 192 GiB)
- m5.16xlarge (64 vCPU, 256 GiB)
- m5.24xlarge (96 vCPU, 384 GiB)
====
.Memory-optimized
[%collapsible]
====
- r5.xlarge (4 vCPU, 32 GiB)
- r5.2xlarge (8 vCPU, 64 GiB)
- r5.4xlarge (16 vCPU, 128 GiB)
- r5.8xlarge (32 vCPU, 256 GiB)
- r5.12xlarge (48 vCPU, 384 GiB)
- r5.16xlarge (64 vCPU, 512 GiB)
- r5.24xlarge (96 vCPU, 768 GiB)
- r6i.xlarge (4 vCPU, 32 GiB)
- r6i.2xlarge (8 vCPU, 64 GiB)
- r6i.4xlarge (16 vCPU, 128 GiB)
- r6i.8xlarge (32 vCPU, 256 GiB)
- r6i.12xlarge (48 vCPU, 384 GiB)
- r6i.16xlarge (64 vCPU, 512 GiB)
- r6i.24xlarge (96 vCPU, 768 GiB)
- r6i.32xlarge (128 vCPU, 1,024 GiB)
- z1d.xlarge (4 vCPU, 32 GiB)
- z1d.2xlarge (8 vCPU, 64 GiB)
- z1d.3xlarge (12 vCPU, 96 GiB)
- z1d.6xlarge (24 vCPU, 192 GiB)
- z1d.12xlarge (48 vCPU, 384 GiB)
====
.Compute-optimized
[%collapsible]
====
- c5.xlarge (4 vCPU, 8 GiB)
- c5.2xlarge (8 vCPU, 16 GiB)
- c5.4xlarge (16 vCPU, 32 GiB)
- c5.9xlarge (36 vCPU, 72 GiB)
- c5.12xlarge (48 vCPU, 96 GiB)
- c5.18xlarge (72 vCPU, 144 GiB)
- c5.24xlarge (96 vCPU, 192 GiB)
====
.Storage-optimized
[%collapsible]
====
- i3.xlarge (4 vCPU, 30.5 GiB)
- i3.2xlarge (8 vCPU, 61 GiB)
- i3.4xlarge (16 vCPU, 122 GiB)
- i3.8xlarge (32 vCPU, 244 GiB)
- i3.16xlarge (64 vCPU, 488 GiB)
- i3en.xlarge (4 vCPU, 32 GiB)
- i3en.2xlarge (8 vCPU, 64 GiB)
- i3en.3xlarge (12 vCPU, 96 GiB)
- i3en.6xlarge (24 vCPU, 192 GiB)
- i3en.12xlarge (48 vCPU, 384 GiB)
- i3en.24xlarge (96 vCPU, 768 GiB)
====
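If you manage machine pools through the OCM CLI, the worker node type is selected when the pool is created. The following is a sketch only, assuming the `ocm create machinepool` command with `--instance-type` and `--replicas` flags; verify the exact flags against your installed `ocm` version:

[source,terminal]
----
$ ocm create machinepool --cluster=<cluster_name_or_id> \
  --instance-type=m5.2xlarge \
  --replicas=3 \
  <machine_pool_name>
----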
[id="aws-compute-types-non-ccs_{context}"]
== AWS compute types for standard clusters
{product-title} offers the following worker node types and sizes on AWS:
.General purpose
[%collapsible]
====
- m5.xlarge (4 vCPU, 16 GiB)
- m5.2xlarge (8 vCPU, 32 GiB)
- m5.4xlarge (16 vCPU, 64 GiB)
- m5.8xlarge (32 vCPU, 128 GiB)
- m5.12xlarge (48 vCPU, 192 GiB)
- m5.16xlarge (64 vCPU, 256 GiB)
- m5.24xlarge (96 vCPU, 384 GiB)
====
.Memory-optimized
[%collapsible]
====
- r5.xlarge (4 vCPU, 32 GiB)
- r5.2xlarge (8 vCPU, 64 GiB)
- r5.4xlarge (16 vCPU, 128 GiB)
- r5.8xlarge (32 vCPU, 256 GiB)
- r5.12xlarge (48 vCPU, 384 GiB)
- r5.16xlarge (64 vCPU, 512 GiB)
- r5.24xlarge (96 vCPU, 768 GiB)
====
.Compute-optimized
[%collapsible]
====
- c5.2xlarge (8 vCPU, 16 GiB)
- c5.4xlarge (16 vCPU, 32 GiB)
- c5.9xlarge (36 vCPU, 72 GiB)
- c5.12xlarge (48 vCPU, 96 GiB)
- c5.18xlarge (72 vCPU, 144 GiB)
- c5.24xlarge (96 vCPU, 192 GiB)
====
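To confirm which of these types backs each worker node in a running cluster, you can list the nodes with the standard `node.kubernetes.io/instance-type` label displayed as a column:

[source,terminal]
----
$ oc get nodes -L node.kubernetes.io/instance-type
----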
[id="gcp-compute-types_{context}"]
== Google Cloud compute types
{product-title} offers the following worker node types and sizes on Google Cloud, chosen to match the CPU and memory capacities of the instance types available on other clouds:
.General purpose
[%collapsible]
====
* custom-4-16384 (4 vCPU, 16 GiB)
* custom-8-32768 (8 vCPU, 32 GiB)
* custom-16-65536 (16 vCPU, 64 GiB)
* custom-32-131072 (32 vCPU, 128 GiB)
* custom-48-196608 (48 vCPU, 192 GiB)
* custom-64-262144 (64 vCPU, 256 GiB)
* custom-96-393216 (96 vCPU, 384 GiB)
====
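The custom machine type names follow the Google Cloud `custom-<vCPUs>-<memory>` pattern, where the final component is the memory size in MiB. As a quick sanity check, the following shell arithmetic reproduces the first name in the list above from its vCPU count and memory in GiB:

[source,terminal]
----
$ vcpus=4; memory_gib=16
$ echo "custom-${vcpus}-$(( memory_gib * 1024 ))"
custom-4-16384
----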
.Memory-optimized
[%collapsible]
====
* custom-4-32768-ext (4 vCPU, 32 GiB)
* custom-8-65536-ext (8 vCPU, 64 GiB)
* custom-16-131072-ext (16 vCPU, 128 GiB)
* custom-32-262144-ext (32 vCPU, 256 GiB)
* custom-48-393216 (48 vCPU, 384 GiB)
* custom-64-524288 (64 vCPU, 512 GiB)
* custom-96-786432 (96 vCPU, 768 GiB)
====
.Compute-optimized
[%collapsible]
====
* custom-8-16384 (8 vCPU, 16 GiB)
* custom-16-32768 (16 vCPU, 32 GiB)
* custom-36-73728 (36 vCPU, 72 GiB)
* custom-48-98304 (48 vCPU, 96 GiB)
* custom-72-147456 (72 vCPU, 144 GiB)
* custom-96-196608 (96 vCPU, 192 GiB)
====
[id="regions-availability-zones_{context}"]
== Regions and availability zones