Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00
Revert "Merge pull request #81292 from openshift-cherrypick-robot/cherry-pick-81063-to-enterprise-4.17"
This reverts commit d67e493dcb, reversing changes made to f140ac0fee.
@@ -547,35 +547,35 @@ Topics:
- Name: Cluster notifications
File: rosa-cluster-notifications
Distros: openshift-rosa-hcp
- Name: Configuring private connections
Dir: cloud_infrastructure_access
Distros: openshift-rosa-hcp
Topics:
- Name: Configuring private connections
File: rosa-configuring-private-connections
- Name: Configuring AWS VPC peering
File: dedicated-aws-peering
- Name: Configuring AWS VPN
File: dedicated-aws-vpn
- Name: Configuring AWS Direct Connect
File: dedicated-aws-dc
# - Name: Cluster autoscaling # Cluster autoscaling not supported on HCP
# - Name: Configuring private connections
# Dir: cloud_infrastructure_access
# Distros: openshift-rosa-hcp
# Topics:
# - Name: Configuring private connections
# File: rosa-configuring-private-connections
# - Name: Configuring AWS VPC peering
# File: dedicated-aws-peering
# - Name: Configuring AWS VPN
# File: dedicated-aws-vpn
# - Name: Configuring AWS Direct Connect
# File: dedicated-aws-dc
# - Name: Cluster autoscaling
# File: rosa-cluster-autoscaling
- Name: Manage nodes using machine pools
Dir: rosa_nodes
Distros: openshift-rosa-hcp
Topics:
- Name: About machine pools
File: rosa-nodes-machinepools-about
- Name: Managing compute nodes
File: rosa-managing-worker-nodes
# Local zones not yet implemented in HCP
# - Name: Configuring machine pools in Local Zones
# File: rosa-nodes-machinepools-configuring
- Name: About autoscaling nodes on a cluster
File: rosa-nodes-about-autoscaling-nodes
- Name: Configuring cluster memory to meet container memory and risk requirements
File: nodes-cluster-resource-configure
# - Name: Manage nodes using machine pools
# Dir: rosa_nodes
# Distros: openshift-rosa-hcp
# Topics:
# - Name: About machine pools
# File: rosa-nodes-machinepools-about
# - Name: Managing compute nodes
# File: rosa-managing-worker-nodes
# - Name: Configuring machine pools in Local Zones
# File: rosa-nodes-machinepools-configuring
# Distros: openshift-rosa-hcp
# - Name: About autoscaling nodes on a cluster
# File: rosa-nodes-about-autoscaling-nodes
# - Name: Configuring cluster memory to meet container memory and risk requirements
# File: nodes-cluster-resource-configure
- Name: Configuring PID limits
File: rosa-configuring-pid-limits
Distros: openshift-rosa-hcp
@@ -584,10 +584,11 @@ Name: Security and compliance
Dir: security
Distros: openshift-rosa-hcp
Topics:
# - Name: Audit logs
# File: audit-log-view
#- Name: Audit logs
# File: audit-log-view
- Name: Adding additional constraints for IP-based AWS role assumption
File: rosa-adding-additional-constraints-for-ip-based-aws-role-assumption
---
# - Name: Security
# File: rosa-security
# - Name: Application and cluster compliance
@@ -671,7 +672,7 @@ Topics:
# File: cco-mode-manual
# - Name: Manual mode with short-term credentials for components
# File: cco-short-term-creds
---
#---
Name: Upgrading
Dir: upgrading
Distros: openshift-rosa-hcp

@@ -18,7 +18,7 @@
----
$ rosa create cluster --worker-disk-size=<disk_size>
----
The value can be in GB, GiB, TB, or TiB. Replace `<disk_size>` with a numeric value and unit, for example `--worker-disk-size=200GiB`. You cannot separate the digit and the unit. No spaces are allowed.
The value can be in GB, GiB, TB, or TiB. Replace '<disk_size>' with a numeric value and unit, for example '--worker-disk-size=200GiB'. You cannot separate the digit and the unit. No spaces are allowed.
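For example, a minimal concrete invocation, assuming a 200 GiB worker root volume is wanted:

[source,terminal]
----
# illustrative value taken from the example above
$ rosa create cluster --worker-disk-size=200GiB
----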

.Prerequisite for machine pool creation

@@ -30,9 +30,9 @@ The value can be in GB, GiB, TB, or TiB. Replace `<disk_size>` with a numeric va
+
[source,terminal]
----
$ rosa create machinepool --cluster=<cluster_id> \// <1>
--disk-size=<disk_size> // <2>
$ rosa create machinepool --cluster=<cluster_id> <1>
--disk-size=<disk_size> <2>
----
<1> Specifies the ID or name of your existing OpenShift cluster.
<2> Specifies the worker node disk size. The value can be in GB, GiB, TB, or TiB. Replace `<disk_size>` with a numeric value and unit, for example `--disk-size=200GiB`. You cannot separate the digit and the unit. No spaces are allowed.
<1> Specifies the ID or name of your existing OpenShift cluster
<2> Specifies the worker node disk size. The value can be in GB, GiB, TB, or TiB. Replace '<disk_size>' with a numeric value and unit, for example '--disk-size=200GiB'. You cannot separate the digit and the unit. No spaces are allowed.
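A minimal concrete sketch, assuming an existing cluster named `mycluster`, a pool name of `mp-large-disk`, and a 200 GiB disk (all values are illustrative):

[source,terminal]
----
# illustrative values; flags are the ones documented in the callouts above
$ rosa create machinepool --cluster=mycluster --name=mp-large-disk --disk-size=200GiB
----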
. Confirm the new machine pool disk volume size by logging in to the AWS console and finding the EC2 virtual machine root volume size.

@@ -22,27 +22,23 @@ You can create additional machine pools for your {product-title} (ROSA) cluster
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa create machinepool --cluster=<cluster-name> \
|
||||
--name=<machine_pool_id> \// <1>
|
||||
--replicas=<replica_count> \// <2>
|
||||
--instance-type=<instance_type> \// <3>
|
||||
--labels=<key>=<value>,<key>=<value> \// <4>
|
||||
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <5>
|
||||
--use-spot-instances \// <6>
|
||||
--spot-max-price=0.5 \// <7>
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
--disk-size=<disk_size> \// <8>
|
||||
--availability-zone=<availability_zone_name> \// <9>
|
||||
--additional-security-group-ids <sec_group_id> \// <10>
|
||||
--subnet string // <11>
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
--name=<machine_pool_id> \ <1>
|
||||
--replicas=<replica_count> \ <2>
|
||||
--instance-type=<instance_type> \ <3>
|
||||
--labels=<key>=<value>,<key>=<value> \ <4>
|
||||
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ <5>
|
||||
--use-spot-instances \ <6>
|
||||
--spot-max-price=0.5 \ <7>
|
||||
ifdef::openshift-rosa[]
|
||||
--disk-size=<disk_size> <8>
|
||||
--availability-zone=<availability_zone_name> <9>
|
||||
--additional-security-group-ids <sec_group_id> <10>
|
||||
--subnet string <11>
|
||||
|
||||
endif::openshift-rosa[]
|
||||
----
|
||||
<1> Specifies the name of the machine pool. Replace `<machine_pool_id>` with the name of your machine pool.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
<2> Specifies the number of compute nodes to provision. If you deployed ROSA using a single availability zone, this defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, this defines the number of compute nodes to provision in total across all zones and the count must be a multiple of 3. The `--replicas` argument is required when autoscaling is not configured.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
<2> Specifies the number of compute nodes to provision. The `--replicas` argument is required when autoscaling is not configured.
|
||||
endif::openshift-rosa-hcp[]
|
||||
<3> Optional: Sets the instance type for the compute nodes in your machine pool. The instance type defines the vCPU and memory allocation for each compute node in the pool. Replace `<instance_type>` with an instance type. The default is `m5.xlarge`. You cannot change the instance type for a machine pool after the pool is created.
|
||||
<4> Optional: Defines the labels for the machine pool. Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`.
|
||||
<5> Optional: Defines the taints for the machine pool. Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.
|
||||
@@ -53,13 +49,8 @@ endif::openshift-rosa-hcp[]
|
||||
====
|
||||
Your Amazon EC2 Spot Instances might be interrupted at any time. Use Amazon EC2 Spot Instances only for workloads that can tolerate interruptions.
|
||||
====
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
<8> Optional: Specifies the worker node disk size. The value can be in GB, GiB, TB, or TiB. Replace `<disk_size>` with a numeric value and unit, for example `--disk-size=200GiB`.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
<9> Optional: You can create a machine pool in an availability zone of your choice. Replace `<az>` with an availability zone name.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
<8> Optional: Specifies the worker node disk size. The value can be in GB, GiB, TB, or TiB. Replace `<disk_size>` with a numeric value and unit, for example `--disk-size=200GiB`.
|
||||
<9> Optional: For Multi-AZ clusters, you can create a machine pool in a Single-AZ of your choice. Replace `<az>` with a Single-AZ name.
|
||||
+
|
||||
[NOTE]
|
||||
@@ -75,19 +66,12 @@ For fault-tolerant worker machine pools, choosing a Multi-AZ machine pool distri
|
||||
* A Multi-AZ machine pool with three availability zones can have a machine count in multiples of 3 only, such as 3, 6, 9, and so on.
|
||||
* A Single-AZ machine pool with one availability zone can have a machine count in multiples of 1, such as 1, 2, 3, 4, and so on.
|
||||
====
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
<10> Optional: For machine pools in clusters that do not have Red{nbsp}Hat managed VPCs, you can select additional custom security groups to use in your machine pools. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups after you create the machine pool.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
For more information, see the requirements for security groups in the "Additional resources" section.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
<10> Optional: For machine pools in clusters that do not have Red{nbsp}Hat managed VPCs, you can select additional custom security groups to use in your machine pools. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups after you create the machine pool. For more information, see the requirements for security groups in the "Additional resources" section.
|
||||
+
|
||||
[IMPORTANT]
|
||||
====
|
||||
You can use up to ten additional security groups for machine pools on {hcp-title} clusters.
|
||||
====
|
||||
endif::openshift-rosa-hcp[]
|
||||
<11> Optional: For BYO VPC clusters, you can select a subnet to create a Single-AZ machine pool.
|
||||
If the subnet is out of your cluster creation subnets, there must be a tag with a key `kubernetes.io/cluster/<infra-id>` and value `shared`.
|
||||
Customers can obtain the Infra ID by using the following command:
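The command itself is outside this hunk. One way to retrieve the value, assuming the `rosa describe cluster` subcommand (not shown in this diff), is to filter its output for the Infra ID line:

[source,terminal]
----
# assumes rosa describe cluster; cluster name is illustrative
$ rosa describe cluster -c mycluster | grep "Infra ID:"
----

The output format matches the `Infra ID: mycluster-xqvj7` context line shown below.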
@@ -107,7 +91,7 @@ Infra ID: mycluster-xqvj7
|
||||
====
|
||||
You cannot set both `--subnet` and `--availability-zone` at the same time, only 1 is allowed for a Single-AZ machine pool creation.
|
||||
====
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
--
|
||||
+
|
||||
The following example creates a machine pool called `mymachinepool` that uses the `m5.xlarge` instance type and has 2 compute node replicas. The example also adds 2 workload-specific labels:
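The example command itself falls outside this hunk; a plausible reconstruction from the flags documented above (the cluster name is illustrative, and the labels mirror the example output shown later in this module) would be:

[source,terminal]
----
# reconstructed sketch; values are illustrative
$ rosa create machinepool --cluster=mycluster --name=mymachinepool \
  --replicas=2 --instance-type=m5.xlarge --labels=app=db,tier=backend
----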
@@ -130,47 +114,31 @@ I: To view all machine pools, run 'rosa list machinepools -c mycluster'
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa create machinepool --cluster=<cluster-name> \
|
||||
--name=<machine_pool_id> \// <1>
|
||||
--enable-autoscaling \// <2>
|
||||
--min-replicas=<minimum_replica_count> \// <3>
|
||||
--max-replicas=<maximum_replica_count> \// <3>
|
||||
--instance-type=<instance_type> \// <4>
|
||||
--labels=<key>=<value>,<key>=<value> \// <5>
|
||||
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <6>
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
--availability-zone=<availability_zone_name> // <7>
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
--availability-zone=<availability_zone_name> \// <7>
|
||||
--use-spot-instances \// <8>
|
||||
--spot-max-price=0.5 //<9>
|
||||
endif::openshift-rosa-hcp[]
|
||||
--name=<machine_pool_id> \ <1>
|
||||
--enable-autoscaling \ <2>
|
||||
--min-replicas=<minimum_replica_count> \ <3>
|
||||
--max-replicas=<maximum_replica_count> \ <3>
|
||||
--instance-type=<instance_type> \ <4>
|
||||
--labels=<key>=<value>,<key>=<value> \ <5>
|
||||
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ <6>
|
||||
--use-spot-instances \ <7>
|
||||
--spot-max-price=0.5 <8>
|
||||
--availability-zone=<availability_zone_name> <9>
|
||||
----
|
||||
<1> Specifies the name of the machine pool. Replace `<machine_pool_id>` with the name of your machine pool.
|
||||
<2> Enables autoscaling in the machine pool to meet the deployment needs.
|
||||
<3> Defines the minimum and maximum compute node limits. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
The `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the availability zone.
|
||||
endif::openshift-rosa-hcp[]
|
||||
<3> Defines the minimum and maximum compute node limits. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify. If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
|
||||
<4> Optional: Sets the instance type for the compute nodes in your machine pool. The instance type defines the vCPU and memory allocation for each compute node in the pool. Replace `<instance_type>` with an instance type. The default is `m5.xlarge`. You cannot change the instance type for a machine pool after the pool is created.
|
||||
<5> Optional: Defines the labels for the machine pool. Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`.
|
||||
<6> Optional: Defines the taints for the machine pool. Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
<7> Optional: For Multi-AZ clusters, you can create a machine pool in a Single-AZ of your choice. Replace `<az>` with a Single-AZ name.
|
||||
<8> Optional: Configures your machine pool to deploy machines as non-guaranteed AWS Spot Instances. For information, see link:https://aws.amazon.com/ec2/spot/[Amazon EC2 Spot Instances] in the AWS documentation. If you select *Use Amazon EC2 Spot Instances* for a machine pool, you cannot disable the option after the machine pool is created.
|
||||
<7> Optional: Configures your machine pool to deploy machines as non-guaranteed AWS Spot Instances. For information, see link:https://aws.amazon.com/ec2/spot/[Amazon EC2 Spot Instances] in the AWS documentation. If you select *Use Amazon EC2 Spot Instances* for a machine pool, you cannot disable the option after the machine pool is created.
|
||||
+
|
||||
[IMPORTANT]
|
||||
====
|
||||
Your Amazon EC2 Spot Instances might be interrupted at any time. Use Amazon EC2 Spot Instances only for workloads that can tolerate interruptions.
|
||||
====
|
||||
<9> Optional: If you choose to use Spot Instances, you can specify this argument to define a maximum hourly price for a Spot Instance. If this argument is not specified, the on-demand price is used.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
<7> Optional: You can create a machine pool in an availability zone of your choice. Replace `<az>` with an availability zone name.
|
||||
endif::openshift-rosa-hcp[]
|
||||
<8> Optional: If you choose to use Spot Instances, you can specify this argument to define a maximum hourly price for a Spot Instance. If this argument is not specified, the on-demand price is used.
|
||||
<9> Optional: For Multi-AZ clusters, you can create a machine pool in a Single-AZ of your choice. Replace `<az>` with a Single-AZ name.
|
||||
--
|
||||
+
|
||||
The following example creates a machine pool called `mymachinepool` that uses the `m5.xlarge` instance type and has autoscaling enabled. The minimum compute node limit is 3 and the maximum is 6 overall. The example also adds 2 workload-specific labels:
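Again, the command body is outside this hunk; a sketch consistent with the flags above and the limits described in the lead-in (cluster name and labels are illustrative) would be:

[source,terminal]
----
# reconstructed sketch; values are illustrative
$ rosa create machinepool --cluster=mycluster --name=mymachinepool --enable-autoscaling \
  --min-replicas=3 --max-replicas=6 --instance-type=m5.xlarge --labels=app=db,tier=backend
----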
@@ -183,12 +151,7 @@ $ rosa create machinepool --cluster=mycluster --name=mymachinepool --enable-auto
|
||||
.Example output
|
||||
[source,terminal]
|
||||
----
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
I: Machine pool 'mymachinepool' created successfully on cluster 'mycluster'
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
I: Machine pool 'mymachinepool' created successfully on hosted cluster 'mycluster'
|
||||
endif::openshift-rosa-hcp[]
|
||||
I: To view all machine pools, run 'rosa list machinepools -c mycluster'
|
||||
----
|
||||
|
||||
@@ -203,7 +166,6 @@ You can list all machine pools on your cluster or describe individual machine po
|
||||
$ rosa list machinepools --cluster=<cluster_name>
|
||||
----
|
||||
+
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
.Example output
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -211,16 +173,6 @@ ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAI
|
||||
Default No 3 m5.xlarge us-east-1a, us-east-1b, us-east-1c N/A
|
||||
mymachinepool Yes 3-6 m5.xlarge app=db, tier=backend us-east-1a, us-east-1b, us-east-1c No
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
.Example output
|
||||
[source,terminal]
|
||||
----
|
||||
ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONE SUBNET VERSION AUTOREPAIR
|
||||
Default No 1/1 m5.xlarge us-east-2c subnet-00552ad67728a6ba3 4.14.34 Yes
|
||||
mymachinepool Yes 3/3-6 m5.xlarge app=db, tier=backend us-east-2a subnet-0cb56f5f41880c413 4.14.34 Yes
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
. Describe the information of a specific machine pool in your cluster:
|
||||
+
|
||||
@@ -229,7 +181,6 @@ endif::openshift-rosa-hcp[]
|
||||
$ rosa describe machinepool --cluster=<cluster_name> --machinepool=mymachinepool
|
||||
----
|
||||
+
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
.Example output
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -246,28 +197,5 @@ Spot instances: No
|
||||
Disk size: 300 GiB
|
||||
Security Group IDs:
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
.Example output
|
||||
[source,terminal]
|
||||
----
|
||||
ID: mymachinepool
|
||||
Cluster ID: 2d6010rjvg17anri30v84vspf7c7kr6v
|
||||
Autoscaling: Yes
|
||||
Desired replicas: 3-6
|
||||
Current replicas: 3
|
||||
Instance type: m5.xlarge
|
||||
Labels: app=db, tier=backend
|
||||
Taints:
|
||||
Availability zone: us-east-2a
|
||||
Subnet: subnet-0cb56f5f41880c413
|
||||
Version: 4.14.34
|
||||
Autorepair: Yes
|
||||
Tuning configs:
|
||||
Additional security group IDs:
|
||||
Node drain grace period:
|
||||
Message:
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
. Verify that the machine pool is included in the output and the configuration is as expected.
|
||||
|
||||
@@ -6,42 +6,42 @@
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="creating_machine_pools_ocm_{context}"]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa[]
|
||||
= Creating a machine pool
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa[]
|
||||
= Creating a machine pool using OpenShift Cluster Manager
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa[]
|
||||
A machine pool is created when you install an {product-title} cluster. After installation, you can create additional machine pools for your cluster by using {cluster-manager}.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa[]
|
||||
You can create additional machine pools for your {product-title} (ROSA) cluster by using {cluster-manager}.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa[]
|
||||
[IMPORTANT]
|
||||
====
|
||||
The compute (also known as worker) node instance types, autoscaling options, and node counts that are available depend on your
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
ROSA
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
{product-title}
|
||||
endif::[]
|
||||
subscriptions, resource quotas and deployment scenario. For more information, contact your sales representative or Red{nbsp}Hat support.
|
||||
====
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
|
||||
.Prerequisites
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
* You created a ROSA cluster.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
* You created an {product-title} cluster.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -68,29 +68,20 @@ The *Enable autoscaling* option is only available for {product-title} if you hav
|
||||
====
|
||||
endif::openshift-dedicated[]
|
||||
.. Set the minimum and maximum node count limits for autoscaling. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
** If you deployed your cluster using a single availability zone, set the *Minimum and maximum node count*. This defines the minimum and maximum compute node limits in the availability zone.
|
||||
** If you deployed your cluster using multiple availability zones, set the *Minimum nodes per zone* and *Maximum nodes per zone*. This defines the minimum and maximum compute node limits per zone.
|
||||
endif::openshift-rosa-hcp[]
|
||||
+
|
||||
[NOTE]
|
||||
====
|
||||
Alternatively, you can set your autoscaling preferences for the machine pool after the machine pool is created.
|
||||
====
|
||||
|
||||
ifdef::openshift-dedicated,openshift-rosa[]
|
||||
. If you did not enable autoscaling, select a compute node count:
|
||||
* If you deployed your cluster using a single availability zone, select a *Compute node count* from the drop-down menu. This defines the number of compute nodes to provision to the machine pool for the zone.
|
||||
* If you deployed your cluster using multiple availability zones, select a *Compute node count (per zone)* from the drop-down menu. This defines the number of compute nodes to provision to the machine pool per zone.
|
||||
endif::openshift-dedicated,openshift-rosa[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
. If you did not enable autoscaling, select a *Compute node count* from the drop-down menu. This defines the number of compute nodes to provision to the machine pool for the availability zone.
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
ifdef::openshift-rosa[]
|
||||
. Optional: Configure *Root disk size*.
|
||||
endif::openshift-rosa[]
|
||||
|
||||
. Optional: Add node labels and taints for your machine pool:
|
||||
.. Expand the *Edit node labels and taints* menu.
|
||||
.. Under *Node labels*, add *Key* and *Value* entries for your node labels.
|
||||
@@ -107,20 +98,16 @@ Creating a machine pool with taints is only possible if the cluster already has
|
||||
Alternatively, you can add the node labels and taints after you create the machine pool.
|
||||
====
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
|
||||
. Optional: Select additional custom security groups to use for nodes in this machine pool. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups after you create the machine pool.
|
||||
// This can be added back once all of the files have been added to the ROSA HCP distro.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
For more information, see the requirements for security groups in the "Additional resources" section.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa,openshift-dedicated[]
|
||||
. Optional: Select additional custom security groups to use for nodes in this machine pool. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups after you create the machine pool. For more information, see the requirements for security groups in the "Additional resources" section.
|
||||
+
|
||||
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
endif::openshift-rosa,openshift-dedicated[]
|
||||
ifdef::openshift-rosa[]
|
||||
[IMPORTANT]
|
||||
====
|
||||
You can use up to ten additional security groups for machine pools on {hcp-title} clusters.
|
||||
====
|
||||
endif::openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-dedicated[]
|
||||
. Optional: If you deployed {product-title} on AWS using the Customer Cloud Subscription (CCS) model, use Amazon EC2 Spot Instances if you want to configure your machine pool to deploy machines as non-guaranteed AWS Spot Instances:
|
||||
.. Select *Use Amazon EC2 Spot Instances*.
|
||||
@@ -135,7 +122,6 @@ ifdef::openshift-rosa[]
|
||||
+
|
||||
For more information about Amazon EC2 Spot Instances, see the link:https://aws.amazon.com/ec2/spot/[AWS documentation].
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
+
|
||||
[IMPORTANT]
|
||||
====
|
||||
@@ -146,10 +132,9 @@ Your Amazon EC2 Spot Instances might be interrupted at any time. Use Amazon EC2
|
||||
====
|
||||
If you select *Use Amazon EC2 Spot Instances* for a machine pool, you cannot disable the option after the machine pool is created.
|
||||
====
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
. Click *Add machine pool* to create the machine pool.
|
||||
|
||||
|
||||
.Verification
|
||||
|
||||
* Verify that the machine pool is visible on the *Machine pools* page and the configuration is as expected.
|
||||
|
||||
@@ -32,7 +32,6 @@ quick and clear output if a connection can be established:
|
||||
|
||||
.. Create a temporary pod using the `busybox` image, which cleans up after itself:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc run netcat-test \
|
||||
--image=busybox -i -t \
|
||||
@@ -45,7 +44,6 @@ $ oc run netcat-test \
|
||||
--
|
||||
* Example successful connection results:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
/ nc -zvv 192.168.1.1 8080
|
||||
10.181.3.180 (10.181.3.180:8080) open
|
||||
@@ -54,7 +52,6 @@ sent 0, rcvd 0
|
||||
|
||||
* Example failed connection results:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
/ nc -zvv 192.168.1.2 8080
|
||||
nc: 10.181.3.180 (10.181.3.180:8081): Connection refused
|
||||
@@ -64,7 +61,6 @@ sent 0, rcvd 0
|
||||
|
||||
.. Exit the container, which automatically deletes the Pod:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
/ exit
|
||||
----
|
||||
|
||||
@@ -30,7 +30,6 @@ quick and clear output if a connection can be established:
|
||||
|
||||
.. Create a temporary pod using the `busybox` image, which cleans up after itself:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc run netcat-test \
|
||||
--image=busybox -i -t \
|
||||
@@ -43,7 +42,6 @@ $ oc run netcat-test \
|
||||
--
|
||||
* Example successful connection results:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
/ nc -zvv 192.168.1.1 8080
|
||||
10.181.3.180 (10.181.3.180:8080) open
|
||||
@@ -52,7 +50,6 @@ sent 0, rcvd 0
|
||||
|
||||
* Example failed connection results:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
/ nc -zvv 192.168.1.2 8080
|
||||
nc: 10.181.3.180 (10.181.3.180:8081): Connection refused
|
||||
@@ -62,7 +59,6 @@ sent 0, rcvd 0
|
||||
|
||||
.. Exit the container, which automatically deletes the Pod:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
/ exit
|
||||
----
|
||||
|
||||
@@ -5,21 +5,21 @@
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="deleting-machine-pools-cli{context}"]
|
||||
= Deleting a machine pool using the ROSA CLI
|
||||
You can delete a machine pool for your {product-title} cluster by using the ROSA CLI.
|
||||
You can delete a machine pool for your Red{nbsp}Hat OpenShift Service on AWS (ROSA) cluster by using the ROSA CLI.
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
For users of ROSA CLI `rosa` version 1.2.25 and earlier versions, the machine pool (ID='Default') that is created along with the cluster cannot be deleted. For users of ROSA CLI `rosa` version 1.2.26 and later, the machine pool (ID='worker') that is created along with the cluster can be deleted if there is one machine pool within the cluster that contains no taints, and at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.
|
||||
For users of ROSA CLI `rosa` version 1.2.25 and earlier versions, the machine pool (ID='Default') that is created along with the cluster cannot be deleted. For users of ROSA CLI `rosa` version 1.2.26 and later, the machine pool (ID='worker') that is created along with the cluster can be deleted as long as there is one machine pool within the cluster that contains no taints, and at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.
|
||||
====
|
||||
|
||||
.Prerequisites
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
* You created a ROSA cluster.
|
||||
* The cluster is in the ready state.
|
||||
* You have an existing machine pool without any taints and with at least two instances for a Single-AZ cluster or three instances for a Multi-AZ cluster.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
* You have created an {product-title} cluster.
|
||||
endif::[]
|
||||
|
||||
@@ -36,6 +36,6 @@ $ rosa delete machinepool -c=<cluster_name> <machine_pool_ID>
|
||||
----
|
||||
? Are you sure you want to delete machine pool <machine_pool_ID> on cluster <cluster_name>? (y/N)
|
||||
----
|
||||
. Enter `y` to delete the machine pool.
|
||||
. Enter 'y' to delete the machine pool.
|
||||
+
|
||||
The selected machine pool is deleted.
|
||||
|
||||
@@ -6,26 +6,26 @@
|
||||
|
||||
:_mod-docs-content-type: PROCEDURE
|
||||
[id="deleting-machine-pools-ocm{context}"]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa[]
|
||||
= Deleting a machine pool
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
= Deleting a machine pool using {cluster-manager}
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa[]
|
||||
= Deleting a machine pool using OpenShift Cluster Manager
|
||||
endif::openshift-rosa[]
|
||||
|
||||
You can delete a machine pool for your {product-title} cluster by using {cluster-manager-first}.
|
||||
You can delete a machine pool for your Red{nbsp}Hat OpenShift Service on AWS (ROSA) cluster by using OpenShift Cluster Manager.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
* You created a ROSA cluster.
|
||||
* The cluster is in the ready state.
|
||||
* You have an existing machine pool without any taints and with at least two instances for a single-AZ cluster or three instances for a multi-AZ cluster.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
* You have created an {product-title} cluster.
|
||||
* The newly created cluster is in the ready state.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
|
||||
.Procedure
|
||||
. From {cluster-manager-url}, navigate to the *Cluster List* page and select the cluster that contains the machine pool that you want to delete.
|
||||
@@ -33,7 +33,6 @@ endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
. On the selected cluster, select the *Machine pools* tab.
|
||||
|
||||
. Under the *Machine pools* tab, click the options menu {kebab} for the machine pool that you want to delete.
|
||||
. Click Delete.
|
||||
|
||||
. Click *Delete*.
|
||||
+
|
||||
The selected machine pool is deleted.
|
||||
The selected machine pool is deleted.
|
||||
@@ -9,15 +9,14 @@
|
||||
You can delete a machine pool in the event that your workload requirements have changed and your current machine pools no longer meet your needs.
|
||||
|
||||
|
||||
You can delete machine pools using
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
{cluster-manager-first} or the ROSA CLI (`rosa`).
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
{cluster-manager-first}.
|
||||
You can delete machine pools using the
|
||||
ifdef::openshift-rosa[]
|
||||
OpenShift Cluster Manager or the ROSA CLI (`rosa`).
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
OpenShift Cluster Manager.
|
||||
endif::[]
|
||||
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa[]
|
||||
|
||||
.Prerequisites
|
||||
|
||||
@@ -34,4 +33,4 @@ ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
. Click *Delete*.
|
||||
|
||||
The selected machine pool is deleted.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
|
||||
@@ -37,14 +37,16 @@ Note the following about memory requests and memory limits:
|
||||
that can be allocated across all the processes in a container.
|
||||
|
||||
- If the memory allocated by all of the processes in a container exceeds the
|
||||
memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container.
|
||||
memory limit, the node Out of Memory (OOM) killer will immediately select and kill a
|
||||
process in the container.
|
||||
|
||||
- If both memory request and limit are specified, the memory limit value must
|
||||
be greater than or equal to the memory request.
|
||||
|
||||
- The cluster administrator can assign quota or assign default values for the memory limit value.
|
||||
|
||||
- The minimum memory limit is 12 MB. If a container fails to start due to a `Cannot allocate memory` pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources.
|
||||
- The minimum memory limit is 12 MB. If a container fails to start due to a `Cannot allocate memory` pod event, the memory limit is too low.
|
||||
Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources.
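As a quick illustration of the request/limit relationship described in the notes above, requests and limits can be set on an existing workload with `oc set resources` (the deployment name and sizes are illustrative):

[source,terminal]
----
# illustrative example; the limit (512Mi) is greater than or equal to the request (256Mi)
$ oc set resources deployment/my-app --requests=memory=256Mi --limits=memory=512Mi
----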
[id="nodes-cluster-resource-configure-about-memory_{context}"]
|
||||
== Managing application memory strategy
|
||||
|
||||
@@ -92,12 +92,14 @@ ensure that they are all configured appropriately. For many workloads it will
|
||||
be necessary to grant each JVM a percentage memory budget, leaving a perhaps
|
||||
substantial additional safety margin.
|
||||
|
||||
Many Java tools use different environment variables (`JAVA_OPTS`, `GRADLE_OPTS`, and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM.
|
||||
Many Java tools use different environment variables (`JAVA_OPTS`, `GRADLE_OPTS`, and so on) to configure their JVMs and it can be challenging to ensure
|
||||
that the right settings are being passed to the right JVM.
|
||||
|
||||
The `JAVA_TOOL_OPTIONS` environment variable is always respected by the OpenJDK,
|
||||
and values specified in `JAVA_TOOL_OPTIONS` will be overridden by other options
|
||||
specified on the JVM command line. By default, to ensure that these options are
|
||||
used by default for all JVM workloads run in the Java-based agent image, the {product-title} Jenkins Maven agent image sets:
|
||||
used by default for all JVM workloads run in the Java-based agent image, the {product-title} Jenkins
|
||||
Maven agent image sets:
|
||||
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -84,7 +84,9 @@ oom_kill 1
|
||||
If one or more processes in a pod are OOM killed, when the pod subsequently
|
||||
exits, whether immediately or not, it will have phase *Failed* and reason
|
||||
*OOMKilled*. An OOM-killed pod might be restarted depending on the value of
|
||||
`restartPolicy`. If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one.
|
||||
`restartPolicy`. If not restarted, controllers such as the
|
||||
replication controller will notice the pod's failed status and create a new pod
|
||||
to replace the old one.
|
||||
+
|
||||
Use the following command to get the pod status:
|
||||
+
|
||||
|
||||
@@ -11,7 +11,7 @@ within a pod should use the Downward API.
|
||||
|
||||
.Procedure
|
||||
|
||||
* Configure the pod to add the `MEMORY_REQUEST` and `MEMORY_LIMIT` stanzas:
|
||||
. Configure the pod to add the `MEMORY_REQUEST` and `MEMORY_LIMIT` stanzas:
|
||||
|
||||
.. Create a YAML file similar to the following:
|
||||
+
|
||||
@@ -33,12 +33,12 @@ spec:
|
||||
- sleep
|
||||
- "3600"
|
||||
env:
|
||||
- name: MEMORY_REQUEST # <1>
|
||||
- name: MEMORY_REQUEST <1>
|
||||
valueFrom:
|
||||
resourceFieldRef:
|
||||
containerName: test
|
||||
resource: requests.memory
|
||||
- name: MEMORY_LIMIT # <2>
|
||||
- name: MEMORY_LIMIT <2>
|
||||
valueFrom:
|
||||
resourceFieldRef:
|
||||
containerName: test
|
||||
@@ -60,7 +60,7 @@ spec:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc create -f <file_name>.yaml
|
||||
$ oc create -f <file-name>.yaml
|
||||
----
|
||||
|
||||
.Verification
|
||||
|
||||
@@ -8,7 +8,7 @@
|
||||
[id="ocm-disabling-autoscaling_{context}"]
|
||||
= Disabling autoscaling nodes in an existing cluster using {cluster-manager-first}
|
||||
|
||||
Disable autoscaling for worker nodes in the machine pool definition from {cluster-manager}.
|
||||
Disable autoscaling for worker nodes in the machine pool definition from {cluster-manager} console.
|
||||
|
||||
.Procedure
|
||||
|
||||
@@ -16,8 +16,8 @@ Disable autoscaling for worker nodes in the machine pool definition from {cluste
|
||||
|
||||
. On the selected cluster, select the *Machine pools* tab.
|
||||
|
||||
. Click the Options menu {kebab} at the end of the machine pool with autoscaling and select *Edit*.
|
||||
. Click the Options menu {kebab} at the end of the machine pool with autoscaling and select *Scale*.
|
||||
|
||||
. On the *Edit machine pool* dialog, deselect the *Enable autoscaling* checkbox.
|
||||
. On the "Edit node count" dialog, deselect the *Enable autoscaling* checkbox.
|
||||
|
||||
. Select *Save* to save these changes and disable autoscaling from the machine pool.
|
||||
. Select *Apply* to save these changes and disable autoscaling from the cluster.
|
||||
|
||||
@@ -16,8 +16,8 @@ Enable autoscaling for worker nodes in the machine pool definition from {cluster
|
||||
|
||||
. On the selected cluster, select the *Machine pools* tab.
|
||||
|
||||
. Click the Options menu {kebab} at the end of the machine pool that you want to enable autoscaling for and select *Edit*.
|
||||
. Click the Options menu {kebab} at the end of the machine pool that you want to enable autoscaling for and select *Scale*.
|
||||
|
||||
. On the *Edit machine pool* dialog, select the *Enable autoscaling* checkbox.
|
||||
. On the *Edit node count* dialog, select the *Enable autoscaling* checkbox.
|
||||
|
||||
. Select *Save* to save these changes and enable autoscaling for the machine pool.
|
||||
. Select *Apply* to save these changes and enable autoscaling for the cluster.
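A rough CLI equivalent, sketched with the autoscaling flags documented elsewhere in this guide and assuming `rosa edit machinepool` accepts `--enable-autoscaling` (cluster, pool, and limits are illustrative):

[source,terminal]
----
# assumption: --enable-autoscaling is accepted by rosa edit machinepool; values are illustrative
$ rosa edit machinepool --cluster=mycluster --enable-autoscaling \
  --min-replicas=3 --max-replicas=6 mymachinepool
----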
@@ -14,19 +14,19 @@ Labels are assigned as key-value pairs. Each key must be unique to the object it
|
||||
|
||||
.Prerequisites
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
* You installed and configured the latest {product-title} (ROSA) CLI, `rosa`, on your workstation.
|
||||
* You logged in to your Red{nbsp}Hat account using the ROSA CLI (`rosa`).
|
||||
* You created a {product-title} (ROSA) cluster.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
* You created an {product-title} cluster.
|
||||
endif::[]
|
||||
* You have an existing machine pool.
|
||||
|
||||
.Procedure
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
|
||||
. List the machine pools in the cluster:
|
||||
+
|
||||
@@ -37,22 +37,12 @@ $ rosa list machinepools --cluster=<cluster_name>
|
||||
+
|
||||
.Example output
|
||||
+
|
||||
ifdef::openshift-rosa[]
|
||||
[source,terminal]
|
||||
----
|
||||
ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES SPOT INSTANCES
|
||||
Default No 2 m5.xlarge us-east-1a N/A
|
||||
db-nodes-mp No 2 m5.xlarge us-east-1a No
|
||||
----
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONE SUBNET VERSION AUTOREPAIR
|
||||
workers No 2/2 m5.xlarge us-east-2a subnet-0df2ec3377847164f 4.16.6 Yes
|
||||
db-nodes-mp No 2/2 m5.xlarge us-east-2a subnet-0df2ec3377847164f 4.16.6 Yes
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
. Add or update the node labels for a machine pool:
|
||||
|
||||
@@ -61,17 +51,11 @@ endif::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa edit machinepool --cluster=<cluster_name> \
|
||||
--replicas=<replica_count> \// <1>
|
||||
--labels=<key>=<value>,<key>=<value> \// <2>
|
||||
--replicas=<replica_count> \ <1>
|
||||
--labels=<key>=<value>,<key>=<value> \ <2>
|
||||
<machine_pool_id>
|
||||
----
|
||||
<1> For machine pools that do not use autoscaling, you must provide a replica count when adding node labels. If you do not specify the `--replicas` argument, you are prompted for a replica count before the command completes.
|
||||
ifdef::openshift-rosa[]
|
||||
If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
The replica count defines the number of compute nodes to provision to the machine pool for the availability zone.
|
||||
endif::openshift-rosa-hcp[]
|
||||
<1> For machine pools that do not use autoscaling, you must provide a replica count when adding node labels. If you do not specify the `--replicas` argument, you are prompted for a replica count before the command completes. If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
|
||||
<2> Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`. This list overwrites any modifications made to node labels on an ongoing basis.
|
||||
+
|
||||
The following example adds labels to the `db-nodes-mp` machine pool:
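The example command is outside this hunk; a sketch consistent with the syntax above and the labels shown in the verification output below (cluster name and replica count are illustrative):

[source,terminal]
----
# reconstructed sketch; values are illustrative
$ rosa edit machinepool --cluster=mycluster --replicas=2 --labels=app=db,tier=backend db-nodes-mp
----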
@@ -92,18 +76,12 @@ I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa edit machinepool --cluster=<cluster_name> \
|
||||
--min-replicas=<minimum_replica_count> \// <1>
|
||||
--max-replicas=<maximum_replica_count> \// <1>
|
||||
--labels=<key>=<value>,<key>=<value> \// <2>
|
||||
--min-replicas=<minimum_replica_count> \ <1>
|
||||
--max-replicas=<maximum_replica_count> \ <1>
|
||||
--labels=<key>=<value>,<key>=<value> \ <2>
|
||||
<machine_pool_id>
|
||||
----
|
||||
<1> For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.
|
||||
ifdef::openshift-rosa[]
|
||||
If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
The `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the availability zone.
|
||||
endif::openshift-rosa-hcp[]
|
||||
<1> For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify. If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
|
||||
<2> Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`. This list overwrites any modifications made to node labels on an ongoing basis.
|
||||
+
|
||||
The following example adds labels to the `db-nodes-mp` machine pool:
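As above, the example command is elided; a sketch for the autoscaling variant (all values are illustrative):

[source,terminal]
----
# reconstructed sketch; values are illustrative
$ rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=4 \
  --labels=app=db,tier=backend db-nodes-mp
----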
@@ -129,7 +107,6 @@ $ rosa describe machinepool --cluster=<cluster_name> --machinepool=<machine-pool
|
||||
----
|
||||
+
|
||||
.Example output
|
||||
ifdef::openshift-rosa[]
|
||||
[source,terminal]
|
||||
----
|
||||
ID: db-nodes-mp
|
||||
@@ -145,38 +122,9 @@ Spot instances: No
|
||||
Disk size: 300 GiB
|
||||
Security Group IDs:
|
||||
----
|
||||
endif::openshift-rosa[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
ID: db-nodes-mp
|
||||
Cluster ID: <ID_of_cluster>
|
||||
Autoscaling: No
|
||||
Desired replicas: 2
|
||||
Current replicas: 2
|
||||
Instance type: m5.xlarge
|
||||
Labels: app=db, tier=backend
|
||||
Tags:
|
||||
Taints:
|
||||
Availability zone: us-east-2a
|
||||
Subnet: subnet-0df2ec3377847164f
|
||||
Version: 4.16.6
|
||||
EC2 Metadata Http Tokens: optional
|
||||
Autorepair: Yes
|
||||
Tuning configs:
|
||||
Kubelet configs:
|
||||
Additional security group IDs:
|
||||
Node drain grace period:
|
||||
Management upgrade:
|
||||
- Type: Replace
|
||||
- Max surge: 1
|
||||
- Max unavailable: 0
|
||||
Message:
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
. Verify that the labels are included for your machine pool in the output.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::[]
|
||||
|
||||
ifdef::openshift-dedicated[]
|
||||
. Navigate to {cluster-manager-url} and select your cluster.
|
||||
|
||||
@@ -27,7 +27,7 @@ You must ensure that your tag keys are not `aws`, `red-hat-managed`, `red-hat-cl
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa create machinepools --cluster=<name> --replicas=<replica_count> \
|
||||
--name <mp_name> --tags='<key> <value>,<key> <value>' // <1>
|
||||
--name <mp_name> --tags='<key> <value>,<key> <value>' \ <1>
|
||||
----
|
||||
<1> Replace `<key> <value>,<key> <value>` with a key and value for each tag.
|
||||
--
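A concrete sketch of the tagging command, reusing the tag keys that appear in the `rosa describe machinepool` output below (cluster name, pool name, and replica count are illustrative):

[source,terminal]
----
# illustrative values; tag keys match the example output below
$ rosa create machinepool --cluster=classic-rosa --name=mp-1 --replicas=2 \
  --tags='tagkey1 tagvalue1,tagkey2 tagvalue2'
----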
@@ -53,9 +53,10 @@ $ rosa describe machinepool --cluster=<cluster_name> --machinepool=<machinepool_
|
||||
----
|
||||
+
|
||||
.Example output
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa describe machinepool --cluster classic-rosa --machinepool mp-1
|
||||
|
||||
ID: mp-1
|
||||
Cluster ID: 2baiirqa2141oreotoivp4sipq84vp5g
|
||||
Autoscaling: No
|
||||
@@ -69,21 +70,4 @@ Spot instances: No
|
||||
Disk size: 300 GiB
|
||||
Additional Security Group IDs:
|
||||
Tags: red-hat-clustertype=rosa, red-hat-managed=true, tagkey1=tagvalue1, tagkey2=tagvaluev2
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
ID: db-nodes-mp
|
||||
Cluster ID: <ID_of_cluster>
|
||||
Autoscaling: No
|
||||
Desired replicas: 2
|
||||
Current replicas: 2
|
||||
Instance type: m5.xlarge
|
||||
Labels:
|
||||
Tags: red-hat-clustertype=rosa, red-hat-managed=true, tagkey1=tagvalue1, tagkey2=tagvaluev2
|
||||
Taints:
|
||||
Availability zone: us-east-2a
|
||||
...
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
----
|
||||
@@ -8,34 +8,27 @@
|
||||
[id="rosa-adding-taints-cli{context}"]
|
||||
= Adding taints to a machine pool using the ROSA CLI
|
||||
|
||||
You can add taints to a machine pool for your {product-title} cluster by using the ROSA CLI.
|
||||
You can add taints to a machine pool for your Red{nbsp}Hat OpenShift Service on AWS (ROSA) cluster by using the ROSA CLI.
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
For users of ROSA CLI `rosa` version 1.2.25 and prior versions, the number of taints cannot be changed within the machine pool (ID=`Default`) created along with the cluster. For users of ROSA CLI `rosa` version 1.2.26 and beyond, the number of taints can be changed within the machine pool (ID=`worker`) created along with the cluster.
|
||||
ifndef::openshift-rosa-hcp[]
|
||||
There must be at least one machine pool without any taints and with at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.
|
||||
endif::openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
There must be at least one machine pool without any taints and with at least two replicas.
|
||||
endif::openshift-rosa-hcp[]
|
||||
For users of ROSA CLI `rosa` version 1.2.25 and prior versions, the number of taints cannot be changed within the machine pool (ID=`Default`) created along with the cluster. For users of ROSA CLI `rosa` version 1.2.26 and beyond, the number of taints can be changed within the machine pool (ID=`worker`) created along with the cluster. There must be at least one machine pool without any taints and with at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.
|
||||
====
|
||||
|
||||
.Prerequisites
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
* You installed and configured the latest AWS (`aws`), ROSA (`rosa`), and OpenShift (`oc`) CLIs on your workstation.
|
||||
* You logged in to your Red{nbsp}Hat account by using the `rosa` CLI.
|
||||
* You created a {product-title} (ROSA) cluster.
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
endif::openshift-rosa[]
|
||||
ifndef::openshift-rosa[]
|
||||
* You created an {product-title} cluster.
|
||||
endif::[]
|
||||
* You have an existing machine pool that does not contain any taints and contains at least two instances.
|
||||
|
||||
.Procedure
|
||||
|
||||
ifdef::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa[]
|
||||
. List the machine pools in the cluster by running the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
@@ -44,22 +37,13 @@ $ rosa list machinepools --cluster=<cluster_name>
|
||||
----
|
||||
+
|
||||
.Example output
|
||||
ifndef::openshift-rosa,openshift-rosa-hcp[]
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES SPOT INSTANCES DISK SIZE SG IDs
|
||||
Default No 2 m5.xlarge us-east-1a N/A 300 GiB sg-0e375ff0ec4a6cfa2
|
||||
db-nodes-mp No 2 m5.xlarge us-east-1a No 300 GiB sg-0e375ff0ec4a6cfa2
|
||||
----
|
||||
endif::openshift-rosa,openshift-rosa-hcp[]
|
||||
ifdef::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONE SUBNET VERSION AUTOREPAIR
|
||||
workers No 2/2 m5.xlarge us-east-2a subnet-0df2ec3377847164f 4.16.6 Yes
|
||||
db-nodes-mp No 2/2 m5.xlarge us-east-2a subnet-0df2ec3377847164f 4.16.6 Yes
|
||||
----
|
||||
endif::openshift-rosa-hcp[]
|
||||
|
||||
. Add or update the taints for a machine pool:
|
||||
|
||||
@@ -68,17 +52,11 @@ endif::openshift-rosa-hcp[]
|
||||
[source,terminal]
|
||||
----
|
||||
$ rosa edit machinepool --cluster=<cluster_name> \
|
||||
--replicas=<replica_count> \// <1>
|
||||
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <2>
|
||||
--replicas=<replica_count> \ <1>
|
||||
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ <2>
|
||||
<machine_pool_id>
|
||||
----
|
||||
<1> For machine pools that do not use autoscaling, you must provide a replica count when adding taints. If you do not specify the `--replicas` argument, you are prompted for a replica count before the command completes.
ifndef::openshift-rosa-hcp[]
If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
The replica count defines the number of compute nodes to provision to the machine pool for the availability zone.
endif::openshift-rosa-hcp[]
<1> For machine pools that do not use autoscaling, you must provide a replica count when adding taints. If you do not specify the `--replicas` argument, you are prompted for a replica count before the command completes. If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
<2> Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`. This list overwrites any modifications made to node taints on an ongoing basis.
+
The following example adds taints to the `db-nodes-mp` machine pool:
@@ -99,18 +77,12 @@ I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> \
--min-replicas=<minimum_replica_count> \// <1>
--max-replicas=<maximum_replica_count> \// <1>
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <2>
--min-replicas=<minimum_replica_count> \ <1>
--max-replicas=<maximum_replica_count> \ <1>
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \ <2>
<machine_pool_id>
----
<1> For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.
ifndef::openshift-rosa-hcp[]
If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
The `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the availability zone.
endif::openshift-rosa-hcp[]
<1> For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify. If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
<2> Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`. This list overwrites any modifications made to node taints on an ongoing basis.
+
The following example adds taints to the `db-nodes-mp` machine pool:
@@ -136,7 +108,6 @@ $ rosa describe machinepool --cluster=<cluster_name> --machinepool=<machinepool_
----
+
.Example output
ifndef::openshift-rosa-hcp[]
[source,terminal]
----
ID: db-nodes-mp
@@ -152,23 +123,6 @@ Spot instances: No
Disk size: 300 GiB
Security Group IDs:
----
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
[source,terminal]
----
ID: db-nodes-mp
Cluster ID: <ID_of_cluster>
Autoscaling: No
Desired replicas: 2
Current replicas: 2
Instance type: m5.xlarge
Labels:
Tags:
Taints: key1=value1:NoSchedule, key2=value2:NoExecute
Availability zone: us-east-2a
...
----
endif::openshift-rosa-hcp[]

. Verify that the taints are included for your machine pool in the output.
endif::[]
endif::[]
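
If you also want to confirm the taints from inside the cluster rather than through the `rosa` CLI, a node-level check such as the following can be used; this sketch is not part of the documented procedure and assumes `oc` is logged in to the cluster:

[source,terminal]
----
$ oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
----

Nodes that belong to the tainted machine pool should list the key, value, and effect that you set.
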
@@ -6,17 +6,17 @@

:_mod-docs-content-type: PROCEDURE
[id="rosa-adding-taints-ocm{context}"]
= Adding taints to a machine pool using {cluster-manager}
= Adding taints to a machine pool using OpenShift Cluster Manager

You can add taints to a machine pool for your {product-title} cluster by using {cluster-manager-first}.
You can add taints to a machine pool for your Red{nbsp}Hat OpenShift Service on AWS (ROSA) cluster by using OpenShift Cluster Manager.

.Prerequisites

ifndef::openshift-rosa,openshift-rosa-hcp[]
ifndef::openshift-rosa[]
* You created an OpenShift Dedicated cluster.
endif::[]
ifdef::openshift-rosa,openshift-rosa-hcp[]
* You created a {product-title} cluster.
ifdef::openshift-rosa[]
* You created a Red{nbsp}Hat OpenShift Service on AWS (ROSA) cluster.
endif::[]
* You have an existing machine pool that does not contain any taints and contains at least two instances.


@@ -9,22 +9,21 @@
= Adding taints to a machine pool

You can add taints for compute (also known as worker) nodes in a machine pool to control which pods are scheduled to them. When you apply a taint to a machine pool, the scheduler cannot place a pod on the nodes in the pool unless the pod specification includes a toleration for the taint.
ifdef::openshift-rosa,openshift-rosa-hcp[]
Taints can be added to a machine pool using {cluster-manager-first} or the {product-title} (ROSA) CLI, `rosa`.
endif::openshift-rosa,openshift-rosa-hcp[]

ifdef::openshift-rosa[]
Taints can be added to a machine pool using the OpenShift Cluster Manager or the {product-title} (ROSA) CLI, `rosa`.
endif::openshift-rosa[]
[NOTE]
====
A cluster must have at least one machine pool that does not contain any taints.
====
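
The paragraph above notes that a pod lands on tainted nodes only if its specification tolerates the taint. As a minimal sketch (the pod name and image are placeholders, and the key, value, and effect mirror the `key1=value1:NoSchedule` example shown earlier), a toleration can be declared like this:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: taint-toleration-demo
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
EOF
----

A toleration only permits scheduling onto the tainted nodes; pairing it with a node label selector is the usual way to pin the workload to that machine pool.
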
ifndef::openshift-rosa,openshift-rosa-hcp[]
ifndef::openshift-rosa[]
.Prerequisites
// ifdef::openshift-rosa[]
// * You created a Red{nbsp}Hat OpenShift Service on AWS (ROSA) cluster.
// endif::openshift-rosa[]
* You created an {product-title} cluster.
* You have an existing machine pool that does not contain any taints and contains at least two instances.
endif::openshift-rosa,openshift-rosa-hcp[]
endif::openshift-rosa[]

ifdef::openshift-dedicated[]
.Procedure

@@ -6,13 +6,18 @@
[id="rosa-adding-tuning_{context}"]
= Adding node tuning to a machine pool

You can add tunings for compute, also called worker, nodes in a machine pool to control their configuration on {product-title} clusters.
You can add tunings for compute, also called worker, nodes in a machine pool to control their configuration on {hcp-title-first} clusters.

[NOTE]
====
This feature is only supported on {hcp-title-first} clusters.
====

.Prerequisites

* You installed and configured the latest {product-title} (ROSA) CLI, `rosa`, on your workstation.
* You logged in to your Red{nbsp}Hat account by using the ROSA CLI.
* You created a {product-title} cluster.
* You created a {hcp-title-first} cluster.
* You have an existing machine pool.
* You have an existing tuning configuration.

@@ -29,9 +34,9 @@ $ rosa list machinepools --cluster=<cluster_name>
+
[source,terminal]
----
ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONE  SUBNET                    VERSION  AUTOREPAIR
db-nodes-mp  No           0/2       m5.xlarge                      us-east-2a         subnet-08d4d81def67847b6  4.14.34  Yes
workers      No           2/2       m5.xlarge                      us-east-2a         subnet-08d4d81def67847b6  4.14.34  Yes
ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  [...]  AVAILABILITY ZONES  SUBNET  VERSION  AUTOREPAIR  TUNING CONFIGS
workers      No           2         m5.xlarge      [...]  us-east-1a          N/A     4.12.14  Yes
db-nodes-mp  No           2         m5.xlarge      [...]  us-east-1a          No      4.12.14  Yes
----

. You can add tuning configurations to an existing or new machine pool.
@@ -40,7 +45,7 @@ workers No 2/2 m5.xlarge us-east-2
+
[source,terminal]
----
$ rosa create machinepool -c <cluster-name> --name <machinepoolname> --tuning-configs <tuning_config_name>
$ rosa create machinepool -c <cluster-name> <machinepoolname> --tuning-configs <tuning_config_name>
----
+
.Example output
@@ -55,7 +60,7 @@ I: To view all machine pools, run 'rosa list machinepools -c sample-cluster'
+
[source,terminal]
----
$ rosa edit machinepool -c <cluster-name> --name <machinepoolname> --tuning-configs <tuning_config_name>
$ rosa edit machinepool -c <cluster-name> <machinepoolname> --tuning-configs <tuningconfigname>
----
+
.Example output
@@ -66,32 +71,19 @@ I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'

.Verification

. Describe the machine pool for which you added a tuning config:
. List the available machine pools in your cluster:
+
[source,terminal]
----
$ rosa describe machinepool --cluster=<cluster_name> --machinepool=<machine_pool_name>
$ rosa list machinepools --cluster=<cluster_name>
----
+
.Example output
[source,terminal]
----
ID: db-nodes-mp
Cluster ID: <cluster_ID>
Autoscaling: No
Desired replicas: 2
Current replicas: 2
Instance type: m5.xlarge
Labels:
Tags:
Taints:
Availability zone: us-east-2a
Subnet: subnet-08d4d81def67847b6
Version: 4.14.34
EC2 Metadata Http Tokens: optional
Autorepair: Yes
Tuning configs: sample-tuning
...
ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  [...]  AVAILABILITY ZONES  SUBNET  VERSION  AUTOREPAIR  TUNING CONFIGS
workers      No           2         m5.xlarge      [...]  us-east-1a          N/A     4.12.14  Yes
db-nodes-mp  No           2         m5.xlarge      [...]  us-east-1a          No      4.12.14  Yes         sample-tuning
----

. Verify that the tuning config is included for your machine pool in the output.
. Verify that the tuning config is included for your machine pool in the output.
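
Purely as an illustration of combining the flags used above, a new machine pool can be created with both a replica count and a tuning configuration in one call; the names are placeholders, this is not an additional documented step, and it assumes that the `--replicas` flag of `rosa create machinepool` behaves like the replica flags shown elsewhere in this document:

[source,terminal]
----
$ rosa create machinepool -c <cluster-name> --name <machinepoolname> --replicas=3 --tuning-configs <tuning_config_name>
----
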
@@ -12,7 +12,7 @@ Disable autoscaling for worker nodes in the machine pool definition using the {p

.Procedure

* Enter the following command:
. Enter the following command:
+
[source,terminal]
----
@@ -20,6 +20,7 @@ $ rosa edit machinepool --cluster=<cluster_name> <machinepool_ID> --enable-autos
----
+
.Example
+
Disable autoscaling on the `default` machine pool on a cluster named `mycluster`:
+
[source,terminal]

@@ -22,23 +22,14 @@ $ rosa list machinepools --cluster=<cluster_name>
----
+
.Example output
ifndef::openshift-rosa-hcp[]
+
[source,terminal]
----
ID      AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONES  SUBNETS  SPOT INSTANCES  DISK SIZE  SG IDs
worker  No           2         m5.xlarge                      us-east-2a                   No              300 GiB
mp1     No           2         m5.xlarge                      us-east-2a                   No              300 GiB
ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONES  DISK SIZE  SG IDs
default  No           2         m5.xlarge                      us-east-1a          300GiB     sg-0e375ff0ec4a6cfa2
mp1      No           2         m5.xlarge                      us-east-1a          300GiB     sg-0e375ff0ec4a6cfa2
----
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
[source,terminal]
----
ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONE  SUBNET                    VERSION  AUTOREPAIR
workers  No           2/2       m5.xlarge                      us-east-2a         subnet-03c2998b482bf3b20  4.16.6   Yes
mp1      No           2/2       m5.xlarge                      us-east-2a         subnet-03c2998b482bf3b20  4.16.6   Yes
----
endif::openshift-rosa-hcp[]

+
. Get the ID of the machine pools that you want to configure.

. To enable autoscaling on a machine pool, enter the following command:
@@ -49,6 +40,7 @@ $ rosa edit machinepool --cluster=<cluster_name> <machinepool_ID> --enable-autos
----
+
.Example
+
Enable autoscaling on a machine pool with the ID `mp1` on a cluster named `mycluster`, with the number of replicas set to scale between 2 and 5 worker nodes:
+
[source,terminal]

@@ -5,7 +5,7 @@

:_mod-docs-content-type: PROCEDURE
[id="rosa-node-drain-grace-period_{context}"]
= Configuring node drain grace periods
= Configuring node drain grace periods in {hcp-title} clusters

You can configure the node drain grace period for machine pools in your cluster. The node drain grace period for a machine pool is how long the cluster respects the Pod Disruption Budget protected workloads when upgrading or replacing the machine pool. After this grace period, all remaining workloads are forcibly evicted. The value range for the node drain grace period is from `0` to `1 week`. With the default value `0`, or empty value, the machine pool drains without any time limitation until complete.

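For example, to cap draining at 30 minutes for a specific machine pool, a call of the following shape can be used; the placeholders and the 30-minute value are illustrative, and the full procedure follows below:

[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> --node-drain-grace-period="30 minutes" <machinepool_id>
----
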
@@ -13,7 +13,7 @@ You can configure the node drain grace period for machine pools in your cluster.
.Prerequisites

* You installed and configured the latest {product-title} (ROSA) CLI, `rosa`, on your workstation.
* You created a {product-title} cluster.
* You created a {hcp-title-first} cluster.
* You have an existing machine pool.

.Procedure
@@ -28,9 +28,9 @@ $ rosa list machinepools --cluster=<cluster_name>
.Example output
[source,terminal]
----
ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS  TAINTS  AVAILABILITY ZONE  SUBNET                    VERSION  AUTOREPAIR
db-nodes-mp  No           2/2       m5.xlarge                      us-east-2a         subnet-08d4d81def67847b6  4.14.34  Yes
workers      No           2/2       m5.xlarge                      us-east-2a         subnet-08d4d81def67847b6  4.14.34  Yes
ID           AUTOSCALING  REPLICAS  INSTANCE TYPE  [...]  AVAILABILITY ZONES  SUBNET  VERSION  AUTOREPAIR  TUNING CONFIGS
workers      No           2         m5.xlarge      [...]  us-east-1a          N/A     4.14.18  Yes
db-nodes-mp  No           2         m5.xlarge      [...]  us-east-1a          No      4.14.18  Yes
----

. Check the node drain grace period for a machine pool by running the following command:
@@ -45,9 +45,7 @@ $ rosa describe machinepool --cluster <cluster_name> --machinepool=<machinepool_
----
ID: workers
Cluster ID: 2a90jdl0i4p9r9k9956v5ocv40se1kqs
...
Node drain grace period: // <1>
...
Node drain grace period: <1>
----
+
<1> If this value is empty, the machine pool drains without any time limitation until complete.
@@ -61,7 +59,7 @@ $ rosa edit machinepool --node-drain-grace-period="<node_drain_grace_period_valu
+
[NOTE]
====
Changing the node drain grace period during a machine pool upgrade applies to future upgrades, not in-progress upgrades.
Changing the node drain grace period during a machine pool upgrade applies to future upgrades, not in progress upgrades.
====

.Verification
@@ -78,9 +76,7 @@ $ rosa describe machinepool --cluster <cluster_name> <machinepool_name>
----
ID: workers
Cluster ID: 2a90jdl0i4p9r9k9956v5ocv40se1kqs
...
Node drain grace period: 30 minutes
...
----

. Verify the correct `Node drain grace period` for your machine pool in the output.

@@ -14,19 +14,19 @@ You must scale each machine pool separately.

.Prerequisites

ifdef::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa[]
* You installed and configured the latest {product-title} (ROSA) CLI, `rosa`, on your workstation.
* You logged in to your Red{nbsp}Hat account using the ROSA CLI (`rosa`).
* You created a {product-title} (ROSA) cluster.
endif::openshift-rosa,openshift-rosa-hcp[]
ifndef::openshift-rosa,openshift-rosa-hcp[]
endif::openshift-rosa[]
ifndef::openshift-rosa[]
* You created an {product-title} cluster.
endif::[]
* You have an existing machine pool.

.Procedure

ifdef::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa[]

. List the machine pools in the cluster:
+
@@ -49,15 +49,10 @@ mp1 No 2 m5.xlarge us-east-1a
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> \
--replicas=<replica_count> \// <1>
<machine_pool_id> // <2>
--replicas=<replica_count> \ <1>
<machine_pool_id> <2>
----
ifdef::openshift-rosa[]
<1> If you deployed {rosa-classic-first} using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
endif::openshift-rosa[]
ifdef::openshift-rosa-hcp[]
<1> The replica count defines the number of compute nodes to provision to the machine pool for the zone.
endif::openshift-rosa-hcp[]
<1> If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
<2> Replace `<machine_pool_id>` with the ID of your machine pool, as listed in the output of the preceding command.

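As a concrete illustration of the command above, scaling the `db-nodes-mp` pool of a single-AZ cluster named `mycluster` to three compute nodes could look like this; the values are placeholders, not an additional documented step:

[source,terminal]
----
$ rosa edit machinepool --cluster=mycluster --replicas=3 db-nodes-mp
----
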
.Verification

@@ -6,13 +6,20 @@ include::_attributes/common-attributes.adoc[]

toc::[]

As a cluster administrator, you can help your clusters operate efficiently through managing application memory by:

* Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements.

* Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters.
As a cluster administrator, you can help your clusters operate efficiently through
managing application memory by:

* Diagnosing and resolving memory-related error conditions associated with running in a container.
* Determining the memory and risk requirements of a containerized application
component and configuring the container memory parameters to suit those
requirements.

* Configuring containerized application runtimes (for example, OpenJDK) to adhere
optimally to the configured container memory parameters.

* Diagnosing and resolving memory-related error conditions associated with
running in a container.

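The list above mentions configuring container memory parameters; as a minimal, hedged illustration (the deployment name and values are placeholders, and this is not part of the assembly's modules), requests and limits can be set from the CLI like this:

[source,terminal]
----
$ oc set resources deployment/<deployment_name> --requests=memory=256Mi --limits=memory=512Mi
----
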
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference

@@ -17,13 +17,4 @@ Private cluster access can be implemented to suit the needs of your {product-tit

- xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-dc.adoc#dedicated-aws-dc[Configuring AWS Direct Connect]: Configure AWS Direct Connect to establish a dedicated network connection between your private network and an AWS Direct Connect location.

+
// Link to ROSA Classic procedure.
ifdef::openshift-rosa[]
. xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-private-cluster.adoc#rosa-private-cluster[Configure a private cluster on ROSA].
endif::openshift-rosa[]

// Link to ROSA HCP procedure. This can be included once the xref target is included in the ROSA HCP topic map.
// ifdef::openshift-rosa-hcp[]
// . xref:../../rosa_hcp/rosa-hcp-aws-private-creating-cluster.adoc#rosa-hcp-aws-private-creating-cluster[Configure a private cluster on ROSA].
// endif::openshift-rosa-hcp[]
. xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-private-cluster.adoc#rosa-private-cluster[Configure a private cluster on ROSA].
@@ -10,10 +10,7 @@ include::snippets/managed-openshift-about-cluster-notifications.adoc[leveloffset

[role="_additional-resources"]
== Additional resources
// TODO: Add this xref to ARO HCP.
ifdef::openshift-rosa[]
* xref:../rosa_architecture/rosa_policy_service_definition/rosa-policy-responsibility-matrix.adoc#notifications_rosa-policy-responsibility-matrix[Customer responsibilities: Review and action cluster notifications]
endif::openshift-rosa[]
* xref:#cluster-notification-emails_rosa-cluster-notifications[Cluster notification emails]
* xref:#troubleshoot_rosa-cluster-notifications[Troubleshooting: Cluster notifications]

@@ -52,7 +49,4 @@ include::modules/managed-cluster-remove-notification-contacts.adoc[leveloffset=+

.If your cluster does not receive notifications
* Ensure that your cluster can access resources at `api.openshift.com`.
// Include this xref once all of the files have been added to the ROSA HCP distro.
ifndef::openshift-rosa-hcp[]
* Ensure that your firewall is configured according to the documented prerequisites: xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites]
endif::openshift-rosa-hcp[]
* Ensure that your firewall is configured according to the documented prerequisites: xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#osd-aws-privatelink-firewall-prerequisites_rosa-sts-aws-prereqs[AWS firewall prerequisites]
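
For the first check in the list above, a quick, informal way to confirm that `api.openshift.com` is reachable is an HTTPS request such as the following, run from the same network path that the cluster uses; this is illustrative only:

[source,terminal]
----
$ curl -I https://api.openshift.com
----
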
@@ -32,8 +32,7 @@ include::modules/sd-understanding-process-id-limits.adoc[leveloffset=+1]
// Risks of setting higher process ID limits
include::modules/risks-setting-higher-process-id-limits.adoc[leveloffset=+1]

//TODO Add these links when HCP docs are published separately.
ifndef::openshift-rosa-hcp[]
//TODO OSDOCS-10439: Confirm these links work when HCP docs are published separately.
[role="_additional-resources"]
.Additional resources

@@ -42,14 +41,25 @@ ifndef::openshift-rosa-hcp[]
* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Planning your environment]

* xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[Limits and scalability]
endif::openshift-rosa-hcp[]

ifdef::openshift-rosa[]
include::modules/setting-higher-pid-limit-on-existing-cluster.adoc[leveloffset=+1]
include::modules/removing-custom-config-from-cluster.adoc[leveloffset=+1]
endif::openshift-rosa[]
//TODO OSDOCS-10439: Add conditions back in and remove variant based headings when HCP docs are published separately
// Setting or removing a higher pid limit on existing clusters
//ifdef::openshift-rosa-classic[]
[id="rosa-classic-configuring-pid-limits"]
== Configuring PID limits on ROSA Classic clusters

ifdef::openshift-rosa-hcp[]
include::modules/setting-higher-pid-limit-on-machinepool.adoc[leveloffset=+1]
include::modules/removing-custom-config-from-machinepool.adoc[leveloffset=+1]
endif::openshift-rosa-hcp[]
include::modules/setting-higher-pid-limit-on-existing-cluster.adoc[leveloffset=+2]

include::modules/removing-custom-config-from-cluster.adoc[leveloffset=+2]
//endif::openshift-rosa-classic[]


//TODO OSDOCS-10439: Add conditions back in and remove variant based headings when HCP docs are published separately
//ifdef::openshift-rosa-hcp[]
[id="rosa-hcp-configuring-pid-limits"]
== Configuring PID limits on ROSA with HCP clusters

include::modules/setting-higher-pid-limit-on-machinepool.adoc[leveloffset=+2]

include::modules/removing-custom-config-from-machinepool.adoc[leveloffset=+2]
//endif::openshift-rosa-hcp[]

@@ -14,23 +14,16 @@ You can edit machine pool configuration options such as scaling, adding node lab

include::modules/creating-a-machine-pool.adoc[leveloffset=+1]
include::modules/creating-a-machine-pool-ocm.adoc[leveloffset=+2]

// This additional resource can be added back once all of the files are added to the ROSA HCP distro.
ifndef::openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Additional custom security groups]
endif::openshift-rosa-hcp[]

include::modules/creating-a-machine-pool-cli.adoc[leveloffset=+2]

// This additional resource can be added back once all of the files are added to the ROSA HCP distro.
ifndef::openshift-rosa-hcp[]
[role="_additional-resources"]
.Additional resources
* xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Additional custom security groups]
endif::openshift-rosa-hcp[]

ifndef::openshift-rosa-hcp[]
include::modules/configuring-machine-pool-disk-volume.adoc[leveloffset=+1]
include::modules/configuring-machine-pool-disk-volume-ocm.adoc[leveloffset=+2]
include::modules/configuring-machine-pool-disk-volume-cli.adoc[leveloffset=+2]
@@ -38,7 +31,6 @@ include::modules/configuring-machine-pool-disk-volume-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* For a detailed list of the arguments that are available for the `rosa create machinepool` subcommand, see xref:../../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-managing-objects-cli[Managing objects with the ROSA CLI].
endif::openshift-rosa-hcp[]

include::modules/deleting-machine-pools.adoc[leveloffset=+1]
include::modules/deleting-machine-pools-ocm.adoc[leveloffset=+2]
@@ -66,20 +58,12 @@ include::modules/rosa-adding-tags-cli.adoc[leveloffset=+2]
include::modules/rosa-adding-taints.adoc[leveloffset=+1]
include::modules/rosa-adding-taints-ocm.adoc[leveloffset=+2]
include::modules/rosa-adding-taints-cli.adoc[leveloffset=+2]
ifdef::openshift-rosa-hcp[]
include::modules/rosa-adding-tuning.adoc[leveloffset=+1]
include::modules/rosa-node-drain-grace-period.adoc[leveloffset=+1]
endif::openshift-rosa-hcp[]

== Additional resources
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-about.adoc#rosa-nodes-machinepools-about[About machine pools]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[Enabling autoscaling]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#nodes-disabling-autoscaling-nodes[Disabling autoscaling]
ifdef::openshift-rosa[]
* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[{rosa-classic} Service Definition]
endif::openshift-rosa[]
// This xref can be included once all of the ROSA HCP files have been added.
// ifdef::openshift-rosa-hcp[]
// * xref:../../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-hcp-service-definition[{hcp-title} Service Definition]
// endif::openshift-rosa-hcp[]
* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[ROSA Service Definition]

@@ -38,16 +38,7 @@ ifdef::openshift-rosa[]
====
Additionally, you can configure autoscaling on the default machine pool when you xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-creating-cluster.adoc#rosa-creating-cluster[create the cluster using interactive mode].
====
endif::[]
// This can be included once the ROSA HCP files are added.
// ifdef::openshift-rosa-hcp[]
// [NOTE]
// ====
// Additionally, you can configure autoscaling on the default machine pool when you xref:../../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[create the cluster].
// ====
// endif::[]

ifdef::openshift-rosa,openshift-rosa-hcp[]
[discrete]
include::modules/rosa-enabling-autoscaling-nodes.adoc[leveloffset=+2]
endif::[]
@@ -58,31 +49,23 @@ endif::[]
You can disable autoscaling on worker nodes to increase or decrease the number of nodes available by editing the machine pool definition for an existing cluster.

ifdef::openshift-dedicated[]
You can disable autoscaling on a cluster using {cluster-manager-first}.
endif::[]

ifdef::openshift-rosa,openshift-rosa-hcp[]
You can disable autoscaling on a cluster using {cluster-manager-first} or the {product-title} CLI.
You can disable autoscaling on a cluster using {cluster-manager} console.
endif::[]

ifdef::openshift-rosa[]
You can disable autoscaling on a cluster using {cluster-manager} console or the {product-title} CLI.

[NOTE]
====
Additionally, you can configure autoscaling on the default machine pool when you xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-creating-cluster.adoc#rosa-creating-cluster[create the cluster using interactive mode].
====
endif::[]
// This can be included once the ROSA HCP files are added.
// ifdef::openshift-rosa-hcp[]
// [NOTE]
// ====
// Additionally, you can configure autoscaling on the default machine pool when you xref:../../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[create the cluster].
// ====
// endif::[]

[discrete]
include::modules/ocm-disabling-autoscaling-nodes.adoc[leveloffset=+2]

ifdef::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa[]

[discrete]
include::modules/rosa-disabling-autoscaling-nodes.adoc[leveloffset=+2]
endif::[]
@@ -91,10 +74,7 @@ endif::[]
== Additional resources
* link:https://access.redhat.com/solutions/6821651[Troubleshooting: Autoscaling is not scaling down nodes]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-about.adoc#rosa-nodes-machinepools-about[About machinepools]
ifdef::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa[]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes]
// This xref can be included in ROSA HCP when all of the files are added.
ifndef::openshift-rosa-hcp[]
* xref:../../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-managing-objects-cli[Managing objects with the ROSA CLI]
endif::openshift-rosa-hcp[]
endif::[]

@@ -26,29 +26,14 @@ Machine pools are a higher level construct to compute machine sets.

A machine pool creates compute machine sets that are all clones of the same configuration across availability zones. Machine pools perform all of the host node provisioning management actions on a worker node. If you need more machines or must scale them down, change the number of replicas in the machine pool to meet your compute needs. You can manually configure scaling or set autoscaling.

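To make the scaling options in the previous paragraph concrete, both approaches map to the `rosa edit machinepool` flags used elsewhere in this documentation; the following is an illustrative sketch with placeholder values:

[source,terminal]
----
# Manual scaling: set an explicit replica count
$ rosa edit machinepool --cluster=<cluster_name> --replicas=<replica_count> <machine_pool_id>

# Autoscaling: set lower and upper bounds instead of a fixed count
$ rosa edit machinepool --cluster=<cluster_name> --enable-autoscaling --min-replicas=<minimum_replica_count> --max-replicas=<maximum_replica_count> <machine_pool_id>
----
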
ifdef::openshift-rosa-hcp[]
In {hcp-title} clusters, the hosted control plane spans three availability zones (AZ) in the installed cloud region. Each machine pool in a {hcp-title} cluster deploys in a single subnet within a single AZ. Each of these AZs can have only one machine pool.
endif::openshift-rosa-hcp[]

Multiple machine pools can exist on a single cluster, and each machine pool can contain a unique node type and node size configuration.

=== Machine pools during cluster installation

By default, a cluster has one machine pool. During cluster installation, you can define instance type or size and add labels to this machine pool.

=== Configuring machine pools after cluster installation

After a cluster's installation:

* You can remove or add labels to any machine pool.
* You can add additional machine pools to an existing cluster.
* You can add taints to any machine pool if there is one machine pool without any taints.
ifndef::openshift-rosa-hcp[]
* You can create or delete a machine pool if there is one machine pool without any taints and at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
* You can create or delete a machine pool if there is one machine pool without any taints and at least two replicas.
endif::openshift-rosa-hcp[]
* You can add taints to any machine pool as long as there is one machine pool without any taints.
* You can create or delete a machine pool as long as there is one machine pool without any taints and at least two replicas for a Single-AZ cluster or three replicas for a Multi-AZ cluster.
+
[NOTE]
====
@@ -56,22 +41,8 @@ You cannot change the machine pool node type or size. The machine pool node type
====
* You can add a label to each added machine pool.

ifdef::openshift-rosa-hcp[]
=== Machine pool upgrade requirements
Multiple machine pools can exist on a single cluster, and each machine pool can contain a unique node type and node size configuration.

Each machine pool in an {hcp-title} cluster upgrades independently. Because the machine pools upgrade independently, they must remain within 2 minor (Y-stream) versions of the hosted control plane. For example, if your hosted control plane is 4.16.z, your machine pools must be at least 4.14.z.

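One informal way to eyeball this version skew, using only commands already shown in this documentation, is to compare the cluster version with the `VERSION` column of the machine pool list; this is not a documented procedure:

[source,terminal]
----
$ rosa describe cluster --cluster=<cluster_name> | grep -i version
$ rosa list machinepools --cluster=<cluster_name>
----
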
The following image depicts how machine pools work within ROSA and {hcp-title} clusters:

image::hcp-rosa-machine-pools.png[Machine pools on ROSA classic and ROSA with HCP clusters]

[NOTE]
====
Machine pools in {hcp-title} clusters each upgrade independently and the machine pool versions must remain within two minor (Y-stream) versions of the control plane.
====
endif::openshift-rosa-hcp[]

ifndef::openshift-rosa-hcp[]
== Machine pools in multiple zone clusters
In a cluster created across multiple Availability Zones (AZ), the machine pools can be created across either all of the three AZs or any single AZ of your choice. The machine pool created by default at the time of cluster creation will be created with machines in all three AZs and scale in multiples of three.

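As a concrete reading of the multiples-of-three rule above, scaling a default Multi-AZ machine pool to six replicas places two compute nodes in each of the three zones; with the flags documented earlier, such a change might look like this (placeholder names):

[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> --replicas=6 <machine_pool_id>
----
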
@@ -85,10 +56,12 @@ You can override this default setting and create a machine pool in a Single-AZ o

Similarly, deleting a machine pool will delete it from all zones.
Due to this multiplicative effect, using machine pools in a Multi-AZ cluster can consume more of your project's quota for a specific region when creating machine pools.
endif::openshift-rosa-hcp[]

// ROSA HCP content applies to the following subsection
include::modules/machine-pools-hcp.adoc[leveloffset=+1]

== Additional resources
ifdef::openshift-rosa,openshift-rosa-hcp[]
ifdef::openshift-rosa[]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes]
endif::[]
* xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling]
