mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

adding new info for HCP

adding suggestions from gdoc for HCP

fixing a warning

fixing peer review comments
This commit is contained in:
Frances_McDonald
2025-07-29 16:40:36 +01:00
committed by openshift-cherrypick-robot
parent 6410db6e09
commit 13ccccbca6
24 changed files with 112 additions and 206 deletions

View File

@@ -51,7 +51,7 @@ endif::openshift-rosa,openshift-rosa-hcp[]
. Add a *Machine pool name*.
. Select a *Compute node instance type* from the drop-down menu. The instance type defines the vCPU and memory allocation for each compute node in the machine pool.
. Select a *Compute node instance type* from the list. The instance type defines the vCPU and memory allocation for each compute node in the machine pool.
+
[NOTE]
====

View File

@@ -18,19 +18,20 @@
. Log in to the {product-title} AWS Account Dashboard and select the correct region.
. From the {product-title} AWS Account region, select *VPC* from the *Services* menu.
. From *VPN Connections*, select *Virtual Private Gateways*.
. Select *Create Virtual Private Gateway*.
. Give the Virtual Private Gateway a suitable name.
. From *Virtual private network (VPN)*, select *Virtual private gateways*.
. Select *Create virtual private gateway*.
. Give the virtual private gateway a suitable name in the *Details* field.
. Click *Custom ASN* and enter the *Amazon side ASN* value gathered previously or use the Amazon Provided ASN.
. Create the Virtual Private Gateway.
. In the *Navigation* pane of the {product-title} AWS Account Dashboard, choose *Virtual private gateways* and select the virtual private gateway. Choose *View details*.
. Choose *Direct Connect gateway associations* and click *Associate Direct Connect gateway*.
. Under *Association account type*, for Account owner, choose *Another account*.
. For *Direct Connect gateway owner*, enter the ID of the AWS account that owns the Direct Connect gateway.
. Click *Create virtual private gateway*.
. From the {product-title} AWS Account region, select *Direct Connect* from the *Services* menu.
. Click *Virtual private gateways* and select the virtual private gateway.
. Click *View details*.
. Click the *Direct Connect gateway associations* tab.
. Click *Associate Direct Connect gateway*.
. Under *Association account type*, for Account owner, click *Another account*.
. Under *Association settings*, for Direct Connect gateway ID, enter the ID of the Direct Connect gateway.
. Under *Association settings*, for Virtual interface owner, enter the ID of the AWS account that owns the virtual interface for the association.
. Optional: Add prefixes to Allowed prefixes, separating them using commas.
. Choose *Associate Direct Connect gateway*.
. After the Association Proposal has been sent, it will be waiting for your
acceptance. The final steps you must perform are available in the
. For *Direct Connect gateway owner*, enter the ID of the AWS account that owns the Direct Connect gateway.
. Optional: Add prefixes to *Allowed prefixes*, separating them with commas or putting them on separate lines.
. Click *Associate Direct Connect gateway*.
. After the association proposal is sent, it waits for your acceptance. The final steps that you must perform are available in the
link:https://docs.aws.amazon.com/directconnect/latest/UserGuide/multi-account-associate-vgw.html[AWS Documentation].
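If you prefer to script this step, the AWS CLI has a rough equivalent for proposing the cross-account association. The following is only a sketch with placeholder IDs; verify the options against the AWS CLI reference for your version:

[source,terminal]
----
# Propose associating the virtual private gateway with the Direct Connect gateway owned by another account
$ aws directconnect create-direct-connect-gateway-association-proposal \
  --direct-connect-gateway-id <direct_connect_gateway_id> \
  --direct-connect-gateway-owner-account <gateway_owner_account_id> \
  --gateway-id <virtual_private_gateway_id>
----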

View File

@@ -31,25 +31,21 @@ Connect Gateway is created.
[id="dedicated-aws-dc-hvif-private"]
== Creating a Private Direct Connect
A Private Direct Connect is created if the Direct Connect Virtual Interface type
is Private.
A Private Direct Connect is created if the Direct Connect Virtual Interface type is Private.
.Procedure
. Log in to the {product-title} AWS Account Dashboard and select the correct region.
. From the AWS region, select *VPC* from the *Services* menu.
. Select *Virtual Private Gateways* from *VPN Connections*.
. Click *Create Virtual Private Gateway*.
. From *Virtual private network (VPN)*, select *Virtual private gateways*.
. Click *Create virtual private gateway*.
. Give the Virtual Private Gateway a suitable name.
. Select *Custom ASN* and enter the *Amazon side ASN* value gathered previously.
. Create the Virtual Private Gateway.
. Select *Custom ASN* and, in the *Enter custom ASN* field, enter the *Amazon side ASN* value gathered previously.
. Click *Create virtual private gateway*.
. Click the newly created Virtual Private Gateway and choose *Attach to VPC* from the *Actions* tab.
. Select the *{product-title} Cluster VPC* from the list, and attach the Virtual Private Gateway to the VPC.
. From the *Services* menu, click *Direct Connect*. Choose one of the Direct Connect Virtual Interfaces from the list.
. Acknowledge the *I understand that Direct Connect port charges apply once I click Accept Connection* message, then choose *Accept Connection*.
. Choose to *Accept* the Virtual Private Gateway Connection and select the Virtual Private Gateway that was created in the previous steps.
. Select *Accept* to accept the connection.
. Repeat the previous steps if there is more than one Virtual Interface.
. Select the *{product-title} Cluster VPC* from the list, and click *Attach VPC*.
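If you want to perform the same steps with the AWS CLI instead of the console, a minimal sketch looks like the following; the ASN and IDs are placeholders, and the options should be checked against the AWS CLI reference:

[source,terminal]
----
# Create the virtual private gateway with the custom Amazon side ASN
$ aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn <amazon_side_asn>

# Attach the new gateway to the cluster VPC
$ aws ec2 attach-vpn-gateway --vpn-gateway-id <vpn_gateway_id> --vpc-id <cluster_vpc_id>
----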
[id="dedicated-aws-dc-hvif-public"]
== Creating a Public Direct Connect
@@ -61,16 +57,10 @@ is Public.
. Log in to the {product-title} AWS Account Dashboard and select the correct region.
. From the {product-title} AWS Account region, select *Direct Connect* from the *Services* menu.
. Select *Direct Connect Gateways* and *Create Direct Connect Gateway*.
. Give the Direct Connect Gateway a suitable name.
. Select *Direct Connect gateways* and *Create Direct Connect gateway*.
. Give the Direct Connect gateway a suitable name.
. In the *Amazon side ASN*, enter the Amazon side ASN value gathered previously.
. Create the Direct Connect Gateway.
. Select *Direct Connect* from the *Services* menu.
. Select one of the Direct Connect Virtual Interfaces from the list.
. Acknowledge the *I understand that Direct Connect port charges apply once I click Accept Connection* message, then choose *Accept Connection*.
. Choose to *Accept* the Direct Connect Gateway Connection and select the Direct Connect Gateway that was created in the previous steps.
. Click *Accept* to accept the connection.
. Repeat the previous steps if there is more than one Virtual Interface.
. Click *Create Direct Connect gateway*.
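As a rough AWS CLI equivalent of this procedure (a sketch only; the gateway name and ASN are placeholders), you can create the Direct Connect gateway with:

[source,terminal]
----
$ aws directconnect create-direct-connect-gateway \
  --direct-connect-gateway-name <gateway_name> \
  --amazon-side-asn <amazon_side_asn>
----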
[id="dedicated-aws-dc-hvif-verifying"]
== Verifying the Virtual Interfaces

View File

@@ -16,7 +16,7 @@ to communicate across the peering connection.
.Procedure
. Log in to the AWS Web Console for the {product-title} AWS Account.
. Navigate to the *VPC Service*, then *Route Tables*.
. Navigate to the *VPC Service*, then *Route tables*.
. Select the Route Table for the {product-title} Cluster VPC.
+
[NOTE]
@@ -36,7 +36,7 @@ Select the private one that has a number of explicitly associated subnets.
.. Select the *Routes* tab, then *Edit*.
.. Enter the {product-title} Cluster VPC CIDR block in the *Destination* text box.
.. Enter the Peering Connection ID in the *Target* text box.
.. Click *Save*.
.. Click *Save changes*.
The VPC peering connection is now complete. Follow the verification procedure to
ensure connectivity across the peering connection is working.
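If you script the route change instead of using the console, the AWS CLI equivalent of adding the peering route is roughly the following; the IDs and CIDR block are placeholders:

[source,terminal]
----
# Add a route that sends traffic for the peer VPC CIDR block through the peering connection
$ aws ec2 create-route --route-table-id <route_table_id> \
  --destination-cidr-block <destination_vpc_cidr> \
  --vpc-peering-connection-id <peering_connection_id>
----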

View File

@@ -33,12 +33,11 @@ button.
. Verify the details of the account you are logged in to and the details of the
account and VPC you are connecting to:
.. *Peering connection name tag*: Set a descriptive name for the VPC Peering Connection.
.. *VPC (Requester)*: Select the {product-title} Cluster VPC ID from the dropdown
*list.
.. *VPC (Requester)*: Select the {product-title} Cluster VPC ID from the list.
.. *Account*: Select *Another account* and provide the Customer AWS Account number
(without dashes).
.. *Region*: If the Customer VPC Region differs from the current region, select
*Another Region* and select the customer VPC Region from the dropdown list.
*Another Region* and select the customer VPC Region from the list.
.. *VPC (Accepter)*: Set the Customer VPC ID.
. Click *Create Peering Connection*.
. Confirm that the request enters a *Pending* state. If it enters a *Failed*

View File

@@ -6,8 +6,7 @@
[id="dedicated-aws-vpn-creating"]
= Creating a VPN connection
You can configure an Amazon Web Services (AWS) {product-title} cluster to use a
customer's on-site hardware VPN device using the following procedures.
You can configure an Amazon Web Services (AWS) {product-title} cluster to use a customer's on-site hardware VPN device using the following procedures.
.Prerequisites
@@ -18,7 +17,7 @@ to confirm whether your gateway device is supported by AWS.
* Public, static IP address for the VPN gateway device.
* BGP or static routing: if BGP, the ASN is required. If static routing, you must
configure at least one static route.
* Optional: IP and Port/Protocol of a reachable service to test the VPN connection.
* *Optional*: IP and Port/Protocol of a reachable service to test the VPN connection.
[id="dedicated-aws-vpn-creating-configuring"]
== Configuring the VPN connection
@@ -26,36 +25,37 @@ configure at least one static route.
.Procedure
. Log in to the {product-title} AWS Account Dashboard, and navigate to the VPC Dashboard.
. Click on *Your VPCs* and identify the name and VPC ID for the VPC containing the {product-title} cluster.
. From the VPC Dashboard, click *Customer Gateway*.
. Click *Create Customer Gateway* and give it a meaningful name.
. Select the routing method: *Dynamic* or *Static*.
. If Dynamic, enter the BGP ASN in the field that appears.
. Paste in the VPN gateway endpoint IP address.
. Click *Create*.
. Under *Virtual private cloud*, click *Your VPCs* and identify the name and VPC ID for the VPC containing the {product-title} cluster.
. Under *Virtual private network (VPN)*, click *Customer gateways*.
. Click *Create customer gateway* and give it a meaningful name.
. Enter the ASN of your customer gateway device in the *BGP ASN* field.
. Enter the IP address for your customer gateway device's external interface in the *IP address* field.
. Click *Create customer gateway*.
. If you do not already have a Virtual Private Gateway attached to the intended VPC:
.. From the VPC Dashboard, click on *Virtual Private Gateway*.
.. Click *Create Virtual Private Gateway*, give it a meaningful name, and click *Create*.
.. Leave the default Amazon default ASN.
.. Select the newly created gateway, click *Attach to VPC*, and attach it to the cluster VPC you identified earlier.
.. From the VPC Dashboard, click on *Virtual Private Gateways*.
.. Click *Create virtual private gateway* and give it a meaningful name.
.. Click *Create virtual private gateway*, leaving the *Amazon default ASN* selected.
.. Select the newly created gateway.
.. Select *Actions* from the list and click *Attach to VPC*.
.. Under *Available VPCs*, select the cluster VPC that you identified earlier, and click *Attach to VPC*.
[id="dedicated-aws-vpn-creating-establishing"]
== Establishing the VPN Connection
.Procedure
. From the VPC dashboard, click on *Site-to-Site VPN Connections*.
. Click *Create VPN Connection*.
. From the VPC dashboard, under *Virtual private network (VPN)*, click *Site-to-Site VPN connections*.
. Click *Create VPN connection*.
.. Give it a meaningful name tag.
.. Select the virtual private gateway created previously.
.. For Customer Gateway, select *Existing*.
.. Select the customer gateway device by name.
.. If the VPN will use BGP, select *Dynamic*, otherwise select *Static*. Enter
.. Select the Virtual private gateway created previously.
.. For Customer gateway, select *Existing*.
.. Select the customer gateway ID by name.
.. If the VPN will use BGP, select *Dynamic*; otherwise, select *Static* and enter the
Static IP CIDRs. If there are multiple CIDRs, add each CIDR as *Another Rule*.
.. Click *Create*.
.. Wait for VPN status to change to *Available*, approximately 5 to 10 minutes.
. Select the VPN you just created and click *Download Configuration*.
.. From the dropdown list, select the vendor, platform, and version of the customer
.. Click *Create VPN connection*.
.. Under *State*, wait for the VPN status to change from *Pending* to *Available*. This takes approximately 5 to 10 minutes.
. Select the VPN you just created and click *Download configuration*.
.. From the list, select the vendor, platform, and version of the customer
gateway device, then click *Download*.
.. The *Generic* vendor configuration is also available for retrieving information
in a plain text format.
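For reference, a rough AWS CLI sketch of the customer gateway and VPN connection creation follows; the values are placeholders, and for static routing you would also need the appropriate static route options:

[source,terminal]
----
# Register the on-site device as a customer gateway
$ aws ec2 create-customer-gateway --type ipsec.1 --public-ip <device_public_ip> --bgp-asn <asn>

# Create the site-to-site VPN connection against the virtual private gateway
$ aws ec2 create-vpn-connection --type ipsec.1 \
  --customer-gateway-id <customer_gateway_id> --vpn-gateway-id <vpn_gateway_id>
----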
@@ -80,7 +80,7 @@ is enabled so that the necessary routes are added to the VPC's route table.
.Procedure
. From the VPC Dashboard, click on *Route Tables*.
. From the VPC Dashboard, under *Virtual private cloud*, click *Route tables*.
. Select the private Route table associated with the VPC that contains your
{product-title} cluster.
+
@@ -90,10 +90,9 @@ On some clusters, there may be more than one route table for a particular VPC.
Select the private one that has a number of explicitly associated subnets.
====
. Click on the *Route Propagation* tab.
. In the table that appears, you should see the virtual private gateway you
created previously. Check the value in the *Propagate column*.
.. If Propagate is set to *No*, click *Edit route propagation*, check the Propagate
checkbox next to the virtual private gateway's name and click *Save*.
. In the table that appears, you should see the Virtual Private Gateway you
created previously. Check the value in the *Propagate* column.
.. If *Propagation* is set to *No*, click *Edit route propagation*, select the *Enable* checkbox under *Propagation*, and click *Save*.
After you configure your VPN tunnel and AWS detects it as *Up*, your static or
BGP routes are automatically added to the route table.
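The console steps above can also be approximated with the AWS CLI; the following is a sketch with placeholder IDs:

[source,terminal]
----
# Enable route propagation from the virtual private gateway into the route table
$ aws ec2 enable-vgw-route-propagation --route-table-id <route_table_id> --gateway-id <vpn_gateway_id>
----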

View File

@@ -16,13 +16,13 @@ working.
.Procedure
. *Verify the tunnel is up in AWS.*
. *Verify the tunnel is up in AWS*.
.. From the VPC Dashboard, click on *VPN Connections*.
.. Select the VPN connection you created previously and click the *Tunnel Details* tab.
.. You should be able to see that at least one of the VPN tunnels is *Up*.
.. From the VPC Dashboard, under *Virtual private network (VPN)*, click on *Site-to-Site VPN connections*.
.. Select the VPN connection you created previously and click the *Tunnel details* tab.
.. You should see that at least one of the VPN tunnels is in an *Up* status.
. *Verify the connection.*
. *Verify the connection*.
+
To test network connectivity to an endpoint device, `nc` (or `netcat`) is a
helpful troubleshooting tool. It is included in the default image and provides

View File

@@ -10,9 +10,9 @@ In {hcp-title} clusters, the hosted control plane spans three availability zones
Each machine pool in an {hcp-title} cluster upgrades independently. Because the machine pools upgrade independently, they must remain within 2 minor (Y-stream) versions of the hosted control plane. For example, if your hosted control plane is 4.16.z, your machine pools must be at least 4.14.z.
The following image depicts how machine pools work within ROSA and {hcp-title} clusters:
The following image depicts how machine pools work within ROSA and {product-title} clusters:
image::hcp-rosa-machine-pools.png[Machine pools on ROSA classic and {hcp-title} clusters]
image::hcp-rosa-machine-pools.png[Machine pools on ROSA classic and {product-title} clusters]
[NOTE]
====

View File

@@ -24,7 +24,7 @@ follows:
+
[source,terminal]
----
# oc rsh test
# oc rsh <pod_name>
----
. Run the following command to see the current OOM kill count in `/sys/fs/cgroup/memory/memory.oom_control`:
@@ -53,21 +53,6 @@ $ sed -e '' </dev/zero
Killed
----
. Run the following command to view the exit status of the `sed` command:
+
[source,terminal]
----
$ echo $?
----
+
.Example output
[source,terminal]
----
137
----
+
The `137` code indicates the container process exited with code 137, indicating it received a SIGKILL signal.
. Run the following command to see that the OOM kill counter in `/sys/fs/cgroup/memory/memory.oom_control` incremented:
+
[source,terminal]
@@ -86,7 +71,7 @@ exits, whether immediately or not, it will have phase *Failed* and reason
*OOMKilled*. An OOM-killed pod might be restarted depending on the value of
`restartPolicy`. If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one.
+
Use the follwing command to get the pod status:
Use the following command to get the pod status:
+
[source,terminal]
----

View File

@@ -6,8 +6,7 @@
[id="nodes-cluster-resource-configure-request-limit_{context}"]
= Finding the memory request and limit from within a pod
An application wishing to dynamically discover its memory request and limit from
within a pod should use the Downward API.
An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API.
.Procedure
@@ -23,7 +22,7 @@ metadata:
name: test
spec:
securityContext:
runAsNonRoot: true
runAsNonRoot: false
seccompProfile:
type: RuntimeDefault
containers:

View File

@@ -21,7 +21,7 @@ To remove all `KubeletConfig` objects from the machine pool, set an empty value
+
[source,terminal]
----
$ rosa edit machinepool -c <cluster_name> --kubeletconfigs="" <machinepool_name>
$ rosa edit machinepool -c <cluster_name> --kubelet-configs="" <machinepool_name>
----
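+
For example, assuming a cluster named `mycluster` and a machine pool named `db-nodes-mp` (illustrative names only), the command looks like this:
+
[source,terminal]
----
$ rosa edit machinepool -c mycluster --kubelet-configs="" db-nodes-mp
----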
.Verification steps

View File

@@ -61,18 +61,10 @@ endif::openshift-rosa-hcp[]
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> \
--replicas=<replica_count> \// <1>
--labels=<key>=<value>,<key>=<value> \// <2>
--labels=<key>=<value>,<key>=<value> \// <1>
<machine_pool_id>
----
<1> For machine pools that do not use autoscaling, you must provide a replica count when adding node labels. If you do not specify the `--replicas` argument, you are prompted for a replica count before the command completes.
ifdef::openshift-rosa[]
If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
endif::openshift-rosa[]
ifdef::openshift-rosa-hcp[]
The replica count defines the number of compute nodes to provision to the machine pool for the availability zone.
endif::openshift-rosa-hcp[]
<2> Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`. This list overwrites any modifications made to node labels on an ongoing basis.
<1> Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`. This list overwrites any modifications made to node labels on an ongoing basis.
+
The following example adds labels to the `db-nodes-mp` machine pool:
+
@@ -87,38 +79,6 @@ $ rosa edit machinepool --cluster=mycluster --replicas=2 --labels=app=db,tier=ba
I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
----
* To add or update node labels for a machine pool that uses autoscaling, run the following command:
+
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> \
--min-replicas=<minimum_replica_count> \// <1>
--max-replicas=<maximum_replica_count> \// <1>
--labels=<key>=<value>,<key>=<value> \// <2>
<machine_pool_id>
----
<1> For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.
ifdef::openshift-rosa[]
If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
endif::openshift-rosa[]
ifdef::openshift-rosa-hcp[]
The `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the availability zone.
endif::openshift-rosa-hcp[]
<2> Replace `<key>=<value>,<key>=<value>` with a comma-delimited list of key-value pairs, for example `--labels=key1=value1,key2=value2`. This list overwrites any modifications made to node labels on an ongoing basis.
+
The following example adds labels to the `db-nodes-mp` machine pool:
+
[source,terminal]
----
$ rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=3 --labels=app=db,tier=backend db-nodes-mp
----
+
.Example output
[source,terminal]
----
I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
----
.Verification
. Describe the details of the machine pool with the new labels:

View File

@@ -6,7 +6,7 @@
[id="rosa-adding-tags-cli_{context}"]
= Adding tags to a machine pool using the ROSA CLI
You can add tags to a machine pool for your {product-title} cluster by using the ROSA command-line interface (CLI).
You can add tags to a machine pool for your {product-title} cluster by using the ROSA command-line interface (CLI). You cannot edit the tags after you create the machine pool.
[IMPORTANT]
====

View File

@@ -6,4 +6,4 @@
[id="rosa-adding-tags_{context}"]
= Adding tags to a machine pool
You can add tags for compute nodes, also known as worker nodes, in a machine pool to introduce custom user tags for AWS resources that are generated when you provision your machine pool.
You can add tags for compute nodes, also known as worker nodes, in a machine pool to introduce custom user tags for AWS resources that are generated when you provision your machine pool. You cannot edit the tags after you create the machine pool.

View File

@@ -68,18 +68,10 @@ endif::openshift-rosa-hcp[]
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> \
--replicas=<replica_count> \// <1>
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <2>
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <1>
<machine_pool_id>
----
<1> For machine pools that do not use autoscaling, you must provide a replica count when adding taints. If you do not specify the `--replicas` argument, you are prompted for a replica count before the command completes.
ifndef::openshift-rosa-hcp[]
If you deployed {product-title} (ROSA) using a single availability zone, the replica count defines the number of compute nodes to provision to the machine pool for the zone. If you deployed your cluster using multiple availability zones, the count defines the total number of compute nodes in the machine pool across all zones and must be a multiple of 3.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
The replica count defines the number of compute nodes to provision to the machine pool for the availability zone.
endif::openshift-rosa-hcp[]
<2> Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.This list overwrites any modifications made to node taints on an ongoing basis.
<1> Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`. This list overwrites any modifications made to node taints on an ongoing basis.
+
The following example adds taints to the `db-nodes-mp` machine pool:
+
@@ -94,38 +86,6 @@ $ rosa edit machinepool --cluster=mycluster --replicas 2 --taints=key1=value1:No
I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
----
* To add or update taints for a machine pool that uses autoscaling, run the following command:
+
[source,terminal]
----
$ rosa edit machinepool --cluster=<cluster_name> \
--min-replicas=<minimum_replica_count> \// <1>
--max-replicas=<maximum_replica_count> \// <1>
--taints=<key>=<value>:<effect>,<key>=<value>:<effect> \// <2>
<machine_pool_id>
----
<1> For machine pools that use autoscaling, you must provide minimum and maximum compute node replica limits. If you do not specify the arguments, you are prompted for the values before the command completes. The cluster autoscaler does not reduce or increase the machine pool node count beyond the limits that you specify.
ifndef::openshift-rosa-hcp[]
If you deployed ROSA using a single availability zone, the `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the zone. If you deployed your cluster using multiple availability zones, the arguments define the autoscaling limits in total across all zones and the counts must be multiples of 3.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
The `--min-replicas` and `--max-replicas` arguments define the autoscaling limits in the machine pool for the availability zone.
endif::openshift-rosa-hcp[]
<2> Replace `<key>=<value>:<effect>,<key>=<value>:<effect>` with a key, value, and effect for each taint, for example `--taints=key1=value1:NoSchedule,key2=value2:NoExecute`. Available effects include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`. This list overwrites any modifications made to node taints on an ongoing basis.
+
The following example adds taints to the `db-nodes-mp` machine pool:
+
[source,terminal]
----
$ rosa edit machinepool --cluster=mycluster --min-replicas=2 --max-replicas=3 --taints=key1=value1:NoSchedule,key2=value2:NoExecute db-nodes-mp
----
+
.Example output
[source,terminal]
----
I: Updated machine pool 'db-nodes-mp' on cluster 'mycluster'
----
.Verification
. Describe the details of the machine pool with the new taints:

View File

@@ -27,7 +27,7 @@ endif::[]
. Under the *Machine pools* tab, click the Options menu {kebab} for the machine pool that you want to add a taint to.
. Select *Edit taints*.
. Add *Key* and *Value* entries for your taint.
. Select an *Effect* for your taint from the drop-down menu. Available options include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.
. Select an *Effect* for your taint from the list. Available options include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.
. Optional: Select *Add taint* if you want to add more taints to the machine pool.
. Click *Save* to apply the taints to the machine pool.

View File

@@ -32,7 +32,7 @@ ifdef::openshift-dedicated[]
. Under the *Machine pools* tab, click the Options menu {kebab} for the machine pool that you want to add a taint to.
. Select *Edit taints*.
. Add *Key* and *Value* entries for your taint.
. Select an *Effect* for your taint from the drop-down menu. Available options include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.
. Select an *Effect* for your taint from the list. Available options include `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.
. Select *Add taint* if you want to add more taints to the machine pool.
. Click *Save* to apply the taints to the machine pool.

View File

@@ -30,8 +30,8 @@ $ rosa list machinepools --cluster=<cluster_name>
[source,terminal]
----
ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONE SUBNET VERSION AUTOREPAIR
db-nodes-mp No 0/2 m5.xlarge us-east-2a subnet-08d4d81def67847b6 4.14.34 Yes
workers No 2/2 m5.xlarge us-east-2a subnet-08d4d81def67847b6 4.14.34 Yes
db-nodes-mp No 0/2 m5.xlarge us-east-2a subnet-08d4d81def67847b6 4.14.34 Yes
workers No 2/2 m5.xlarge us-east-2a subnet-08d4d81def67847b6 4.14.34 Yes
----
. You can add tuning configurations to an existing or new machine pool.
@@ -55,7 +55,7 @@ I: To view all machine pools, run 'rosa list machinepools -c sample-cluster'
+
[source,terminal]
----
$ rosa edit machinepool -c <cluster-name> --name <machinepoolname> --tuning-configs <tuning_config_name>
$ rosa edit machinepool -c <cluster-name> --machinepool <machinepoolname> --tuning-configs <tuning_config_name>
----
+
.Example output
@@ -82,9 +82,9 @@ Autoscaling: No
Desired replicas: 2
Current replicas: 2
Instance type: m5.xlarge
Labels:
Tags:
Taints:
Labels:
Tags:
Taints:
Availability zone: us-east-2a
Subnet: subnet-08d4d81def67847b6
Version: 4.14.34

View File

@@ -38,11 +38,18 @@ db-nodes-mp No 2/2 m5.xlarge us-east-2
. Enable or disable AutoRepair on a machine pool:
* To enable or disable AutoRepair for a machine pool, run the following command:
* To disable AutoRepair for a machine pool, run the following command:
+
[source,terminal]
----
$ rosa edit machinepool --cluster=mycluster --machinepool=<machinepool_name> --autorepair false
$ rosa edit machinepool --cluster=mycluster --machinepool=<machinepool_name> --autorepair=false
----
* To enable AutoRepair for a machine pool, run the following command:
+
[source,terminal]
----
$ rosa edit machinepool --cluster=mycluster --machinepool=<machinepool_name> --autorepair=true
----
+
.Example output

View File

@@ -17,7 +17,7 @@ ifdef::openshift-rosa-hcp[]
.Example
[source,terminal]
----
$ rosa edit autoscaler -h --cluster=<mycluster>
$ rosa edit autoscaler --cluster=<mycluster>
----
+
** To edit a specific parameter, run the following command:
@@ -25,7 +25,7 @@ $ rosa edit autoscaler -h --cluster=<mycluster>
.Example
[source,terminal]
----
$ rosa edit autoscaler -h --cluster=<mycluster> <parameter>
$ rosa edit autoscaler --cluster=<mycluster> <parameter>=<value>
----
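+
For example, to raise the cluster-wide node limit, assuming the `--max-nodes-total` cluster autoscaler parameter (an illustrative choice), you might run:
+
[source,terminal]
----
$ rosa edit autoscaler --cluster=<mycluster> --max-nodes-total=100
----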
endif::openshift-rosa-hcp[]

View File

@@ -50,6 +50,12 @@ $ rosa create machinepool -c <cluster_name> --name <machinepool_name> --kubelet-
----
$ rosa edit machinepool -c <cluster_name> --kubelet-configs=<kubeletconfig_name> <machinepool_name>
----
+
.Example output
[source,terminal]
----
Editing the kubelet config will cause the Nodes for your Machine Pool to be recreated. This may cause outages to your applications. Do you wish to continue? (y/N)
----
--
+
For example, the following command associates the `set-high-pids` `KubeletConfig` object with the `high-pid-pool` machine pool in the `my-cluster` cluster:
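+
A sketch of that command, substituting those names into the syntax shown above:
+
[source,terminal]
----
$ rosa edit machinepool -c my-cluster --kubelet-configs=set-high-pids high-pid-pool
----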

View File

@@ -89,7 +89,7 @@ You can edit any specific parameters of the cluster autoscaler after creating th
.Example
[source,terminal]
----
$ rosa edit autoscaler -h --cluster=<mycluster>
$ rosa edit autoscaler --cluster=<mycluster>
----
+
.. To edit a specific parameter, run the following command:
@@ -97,7 +97,7 @@ $ rosa edit autoscaler -h --cluster=<mycluster>
.Example
[source,terminal]
----
$ rosa edit autoscaler -h --cluster=<mycluster> <parameter>
$ rosa edit autoscaler --cluster=<mycluster> <parameter>=<value>
----
//::modules/rosa-cluster-autoscaler-cli-describe.adoc[leveloffset=+1]

View File

@@ -12,11 +12,11 @@ Autoscaling is available only on clusters that were purchased through the Red{nb
====
endif::[]
The autoscaler option can be configured to automatically scale the number of machines in a cluster.
The autoscaler option can be configured to automatically scale the number of machines in a machine pool.
The cluster autoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.
The cluster autoscaler increases the size of the machine pool when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.
Additionally, the cluster autoscaler decreases the size of the cluster when some nodes are consistently not needed for a significant period, such as when it has low resource use and all of its important pods can fit on other nodes.
Additionally, the cluster autoscaler decreases the size of the machine pool when some nodes are consistently not needed for a significant period, such as when it has low resource use and all of its important pods can fit on other nodes.
When you enable autoscaling, you must also set a minimum and maximum number of worker nodes.
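For example, enabling autoscaling on an existing machine pool with the `rosa` CLI might look like the following sketch, where the cluster name, machine pool name, and replica limits are illustrative:

[source,terminal]
----
$ rosa edit machinepool --cluster=mycluster --enable-autoscaling --min-replicas=2 --max-replicas=5 workers
----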

View File

@@ -27,17 +27,17 @@ Machine pools are a higher level construct to compute machine sets.
A machine pool creates compute machine sets that are all clones of the same configuration across availability zones. Machine pools perform all of the host node provisioning management actions on a worker node. If you need more machines or must scale them down, change the number of replicas in the machine pool to meet your compute needs. You can manually configure scaling or set autoscaling.
ifdef::openshift-rosa-hcp[]
In {product-title} clusters, the hosted control plane spans three availability zones (AZ) in the installed cloud region. Each machine pool in a {product-title} cluster deploys in a single subnet within a single AZ. Each of these AZs can have only one machine pool.
In {product-title} clusters, the hosted control plane spans multiple availability zones (AZ) in the installed cloud region. Each machine pool in a {product-title} cluster deploys in a single subnet within a single AZ.
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa,openshift-rosa-hcp[]
include::snippets/rosa-node-lifecycle.adoc[]
endif::openshift-rosa,openshift-rosa-hcp[]
Multiple machine pools can exist on a single cluster, and each machine pool can contain a unique node type and node size configuration.
Multiple machine pools can exist on a single cluster, and each machine pool can contain a unique node type and node size (AWS EC2 instance type and size) configuration.
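For example, creating an additional machine pool with a different instance type might look like the following sketch, where the cluster name, pool name, and instance type are illustrative:

[source,terminal]
----
$ rosa create machinepool --cluster=mycluster --name=xlarge-pool --instance-type=m5.2xlarge --replicas=3
----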
=== Machine pools during cluster installation
By default, a cluster has one machine pool. During cluster installation, you can define instance type or size and add labels to this machine pool.
By default, a cluster has one machine pool. During cluster installation, you can define the instance type or size, add labels to this machine pool, and define the size of the root disk.
=== Configuring machine pools after cluster installation
@@ -68,7 +68,7 @@ endif::openshift-rosa,openshift-rosa-hcp[]
* *Optional:* Add a label to the default machine pool after configuration by using the default machine pool labels and running the following command:
+
[source,terminal]
----
----
$ rosa edit machinepool -c <cluster_name> <machinepool_name> -i
----
+
@@ -88,9 +88,9 @@ ifdef::openshift-rosa-hcp[]
Each machine pool in a {product-title} cluster upgrades independently. Because the machine pools upgrade independently, they must remain within 2 minor (Y-stream) versions of the hosted control plane. For example, if your hosted control plane is 4.16.z, your machine pools must be at least 4.14.z.
The following image depicts how machine pools work within ROSA and {rosa-classic} clusters:
The following image depicts how machine pools work within ROSA and {product-title} clusters:
image::hcp-rosa-machine-pools.png[Machine pools on ROSA classic and {hcp-tilte} clusters]
image::hcp-rosa-machine-pools.png[Machine pools on ROSA classic and {product-title} clusters]
[NOTE]
====