API terminology and formatting updates for machine mgmt book
@@ -5,13 +5,11 @@ include::modules/common-attributes.adoc[]

toc::[]

Applying autoscaling to an {product-title} cluster involves deploying a
ClusterAutoscaler and then deploying MachineAutoscalers for each Machine type
in your cluster.
Applying autoscaling to an {product-title} cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each Machine type in your cluster.

[IMPORTANT]
====
You can configure the ClusterAutoscaler only in clusters where the machine API is operational.
You can configure the cluster autoscaler only in clusters where the machine API is operational.
====

include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]
@@ -19,15 +17,13 @@ include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]

include::modules/machine-autoscaler-about.adoc[leveloffset=+1]

[id="configuring-clusterautoscaler"]
== Configuring the ClusterAutoscaler
== Configuring the cluster autoscaler

First, deploy the ClusterAutoscaler to manage automatic resource scaling in
your {product-title} cluster.
First, deploy the cluster autoscaler to manage automatic resource scaling in your {product-title} cluster.

[NOTE]
====
Because the ClusterAutoscaler is scoped to the entire cluster, you can make only
one ClusterAutoscaler for the cluster.
Because the cluster autoscaler is scoped to the entire cluster, you can make only one cluster autoscaler for the cluster.
====

include::modules/cluster-autoscaler-cr.adoc[leveloffset=+2]
@@ -37,27 +33,21 @@ include::modules/deploying-resource.adoc[leveloffset=+2]

== Next steps

* After you configure the ClusterAutoscaler, you must configure at least one
MachineAutoscaler.
* After you configure the cluster autoscaler, you must configure at least one machine autoscaler.

[id="configuring-machineautoscaler"]
== Configuring the MachineAutoscalers
== Configuring the machine autoscalers

After you deploy the ClusterAutoscaler,
deploy MachineAutoscaler resources that reference the MachineSets that are used
to scale the cluster.
After you deploy the cluster autoscaler, deploy `MachineAutoscaler` resources that reference the machine sets that are used to scale the cluster.

[IMPORTANT]
====
You must deploy at least one MachineAutoscaler resource after you deploy the
ClusterAutoscaler resource.
You must deploy at least one `MachineAutoscaler` resource after you deploy the `ClusterAutoscaler` resource.
====

[NOTE]
====
You must configure separate resources for each MachineSet. Remember that
MachineSets are different in each region, so consider whether you want to
enable machine scaling in multiple regions. The MachineSet that you scale must have at least one machine in it.
You must configure separate resources for each machine set. Remember that machine sets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The machine set that you scale must have at least one machine in it.
====

include::modules/machine-autoscaler-cr.adoc[leveloffset=+2]

@@ -67,5 +57,4 @@ include::modules/deploying-resource.adoc[leveloffset=+2]

== Additional resources

* For more information about pod priority, see
xref:../nodes/pods/nodes-pods-priority.adoc#nodes-pods-priority[Including pod priority in pod scheduling decisions in {product-title}].
* For more information about pod priority, see xref:../nodes/pods/nodes-pods-priority.adoc#nodes-pods-priority[Including pod priority in pod scheduling decisions in {product-title}].
@@ -1,60 +1,40 @@
[id="creating-infrastructure-machinesets"]
= Creating infrastructure MachineSets
= Creating infrastructure machine sets
include::modules/common-attributes.adoc[]
:context: creating-infrastructure-machinesets

toc::[]

You can create a MachineSet to host only infrastructure components.
You apply specific Kubernetes labels to these Machines and then
update the infrastructure components to run on only those Machines. These
infrastructure nodes are not counted toward the total number of subscriptions
that are required to run the environment.
You can create a machine set to host only infrastructure components. You apply specific Kubernetes labels to these machines and then update the infrastructure components to run on only those machines. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment.
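A minimal sketch of the pattern this chapter builds toward, assuming the conventional `node-role.kubernetes.io/infra` label on the nodes that the infrastructure machine set creates; the component-specific stanzas appear in the modules that follow:

[source,yaml]
----
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""  # schedule the component only on nodes that carry the infra label
----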
[IMPORTANT]
====
Unlike earlier versions of {product-title}, you cannot move the infrastructure
components to the master Machines. To move the components, you must create a
new MachineSet.
Unlike earlier versions of {product-title}, you cannot move the infrastructure components to the master machines. To move the components, you must create a new machine set.
====

include::modules/infrastructure-components.adoc[leveloffset=+1]

[id="creating-infrastructure-machinesets-production"]
== Creating infrastructure MachineSets for production environments
== Creating infrastructure machine sets for production environments

In a production deployment, deploy at least three MachineSets to hold
infrastructure components. Both the logging aggregation solution and
the service mesh deploy Elasticsearch, and Elasticsearch requires three
instances that are installed on different nodes. For high availability, install
deploy these nodes to different availability zones. Since you need different
MachineSets for each availability zone, create at least three MachineSets.
In a production deployment, deploy at least three machine sets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, and Elasticsearch requires three instances that are installed on different nodes. For high availability, deploy these nodes to different availability zones. Since you need different machine sets for each availability zone, create at least three machine sets.

[id="creating-infrastructure-machinesets-clouds"]
=== Creating MachineSets for different clouds
=== Creating machine sets for different clouds

Use the sample MachineSet for your cloud.
Use the sample machine set for your cloud.

include::modules/machineset-yaml-aws.adoc[leveloffset=+3]

MachineSets running on AWS support non-guaranteed xref:../machine_management/creating_machinesets/creating-machineset-aws.adoc#machineset-non-guaranteed-instance_creating-machineset-aws[Spot Instances].
You can save on costs by using Spot Instances at a lower price compared to
On-Demand Instances on AWS. xref:../machine_management/creating_machinesets/creating-machineset-aws.adoc#machineset-creating-non-guaranteed-instance_creating-machineset-aws[Configure Spot Instances]
by adding `spotMarketOptions` to the MachineSet YAML file.
Machine sets running on AWS support non-guaranteed xref:../machine_management/creating_machinesets/creating-machineset-aws.adoc#machineset-non-guaranteed-instance_creating-machineset-aws[Spot Instances]. You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. xref:../machine_management/creating_machinesets/creating-machineset-aws.adoc#machineset-creating-non-guaranteed-instance_creating-machineset-aws[Configure Spot Instances] by adding `spotMarketOptions` to the machine set YAML file.
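For example, the AWS provider specification stanza might look like the following sketch; the surrounding machine set fields are omitted, and the empty object is an assumption that accepts the default maximum price:

[source,yaml]
----
providerSpec:
  value:
    spotMarketOptions: {}  # request Spot Instances instead of On-Demand Instances
----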
include::modules/machineset-yaml-azure.adoc[leveloffset=+3]

MachineSets running on Azure support non-guaranteed xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-non-guaranteed-instance_creating-machineset-azure[Spot VMs].
You can save on costs by using Spot VMs at a lower price compared to
standard VMs on Azure. You can xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-creating-non-guaranteed-instance_creating-machineset-azure[configure Spot VMs]
by adding `spotVMOptions` to the MachineSet YAML file.
Machine sets running on Azure support non-guaranteed xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-non-guaranteed-instance_creating-machineset-azure[Spot VMs]. You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-creating-non-guaranteed-instance_creating-machineset-azure[configure Spot VMs] by adding `spotVMOptions` to the machine set YAML file.
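Similarly, a sketch of the Azure stanza, with the surrounding machine set fields omitted:

[source,yaml]
----
providerSpec:
  value:
    spotVMOptions: {}  # request Spot VMs instead of standard VMs
----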
include::modules/machineset-yaml-gcp.adoc[leveloffset=+3]

MachineSets running on GCP support non-guaranteed xref:../machine_management/creating_machinesets/creating-machineset-gcp.adoc#machineset-non-guaranteed-instance_creating-machineset-gcp[preemptible VM instances].
You can save on costs by using preemptible VM instances at a lower price
compared to normal instances on GCP. You can xref:../machine_management/creating_machinesets/creating-machineset-gcp.adoc#machineset-creating-non-guaranteed-instance_creating-machineset-gcp[configure preemptible VM instances]
by adding `preemptible` to the MachineSet YAML file.
Machine sets running on GCP support non-guaranteed xref:../machine_management/creating_machinesets/creating-machineset-gcp.adoc#machineset-non-guaranteed-instance_creating-machineset-gcp[preemptible VM instances]. You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can xref:../machine_management/creating_machinesets/creating-machineset-gcp.adoc#machineset-creating-non-guaranteed-instance_creating-machineset-gcp[configure preemptible VM instances] by adding `preemptible` to the machine set YAML file.
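And a sketch of the GCP stanza, again with the surrounding machine set fields omitted:

[source,yaml]
----
providerSpec:
  value:
    preemptible: true  # request preemptible VM instances instead of normal instances
----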
include::modules/machineset-yaml-osp.adoc[leveloffset=+3]

@@ -74,14 +54,12 @@ include::modules/binding-infra-node-workloads-using-taints-tolerations.adoc[leve

.Additional resources

* See xref:../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[Controlling pod placement using the scheduler] for general information on scheduling a pod to a node.
* See xref:moving-resources-to-infrastructure-machinesets[Moving resources to infrastructure machine sets]
for instructions on scheduling pods to infra nodes.
* See xref:moving-resources-to-infrastructure-machinesets[Moving resources to infrastructure machine sets] for instructions on scheduling pods to infra nodes.

[id="moving-resources-to-infrastructure-machinesets"]
== Moving resources to infrastructure MachineSets
== Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default.
You can move them to the infrastructure MachineSets that you created.
Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.

include::modules/infrastructure-moving-router.adoc[leveloffset=+2]
@@ -1,12 +1,9 @@
[id="creating-machineset-aws"]
= Creating a MachineSet in AWS
= Creating a machine set in AWS
include::modules/common-attributes.adoc[]
:context: creating-machineset-aws

You can create a different MachineSet to serve a specific purpose in your
{product-title} cluster on Amazon Web Services (AWS). For example, you might
create infrastructure MachineSets and related Machines so that you can move
supporting workloads to the new Machines.
You can create a different machine set to serve a specific purpose in your {product-title} cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

toc::[]
@@ -1,12 +1,9 @@
[id="creating-machineset-azure"]
= Creating a MachineSet in Azure
= Creating a machine set in Azure
include::modules/common-attributes.adoc[]
:context: creating-machineset-azure

You can create a different MachineSet to serve a specific purpose in your
{product-title} cluster on Microsoft Azure. For example, you might
create infrastructure MachineSets and related Machines so that you can move
supporting workloads to the new Machines.
You can create a different machine set to serve a specific purpose in your {product-title} cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related Machines so that you can move supporting workloads to the new Machines.

toc::[]
@@ -1,11 +1,11 @@
[id="creating-machineset-gcp"]
= Creating a MachineSet in GCP
= Creating a machine set in GCP
include::modules/common-attributes.adoc[]
:context: creating-machineset-gcp

You can create a different MachineSet to serve a specific purpose in your
You can create a different machine set to serve a specific purpose in your
{product-title} cluster on Google Cloud Platform (GCP). For example, you might
create infrastructure MachineSets and related Machines so that you can move
create infrastructure machine sets and related Machines so that you can move
supporting workloads to the new Machines.

toc::[]
@@ -1,11 +1,11 @@
[id="creating-machineset-osp"]
= Creating a MachineSet on OpenStack
= Creating a machine set on OpenStack
include::modules/common-attributes.adoc[]
:context: creating-machineset-osp

You can create a different MachineSet to serve a specific purpose in your
You can create a different machine set to serve a specific purpose in your
{product-title} cluster on {rh-openstack-first}. For example, you might
create infrastructure MachineSets and related Machines so that you can move
create infrastructure machine sets and related Machines so that you can move
supporting workloads to the new Machines.

toc::[]
@@ -1,11 +1,11 @@
[id="creating-machineset-vsphere"]
= Creating a MachineSet on vSphere
= Creating a machine set on vSphere
include::modules/common-attributes.adoc[]
:context: creating-machineset-vsphere

toc::[]

You can create a different MachineSet to serve a specific purpose in your {product-title} cluster on VMware vSphere. For example, you might create infrastructure MachineSets and related Machines so that you can move supporting workloads to the new Machines.
You can create a different machine set to serve a specific purpose in your {product-title} cluster on VMware vSphere. For example, you might create infrastructure machine sets and related Machines so that you can move supporting workloads to the new Machines.

include::modules/machine-api-overview.adoc[leveloffset=+1]
@@ -4,8 +4,7 @@ include::modules/common-attributes.adoc[]
:context: deploying-machine-health-checks
toc::[]

You can configure and deploy a machine health check to automatically repair
damaged machines in a machine pool.
You can configure and deploy a machine health check to automatically repair damaged machines in a machine pool.
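A minimal sketch of what such a check can look like; the resource name, machine set label value, and thresholds are illustrative rather than required values:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-healthcheck                  # illustrative name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <machineset_name>  # machines covered by this check
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s                            # how long the node can stay not Ready before it counts as unhealthy
  maxUnhealthy: 40%                          # stop remediating if too many machines are unhealthy at once
----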
include::modules/machine-user-provisioned-limitations.adoc[leveloffset=+1]

@@ -1,16 +1,15 @@
[id="manually-scaling-machineset"]
= Manually scaling a MachineSet
= Manually scaling a machine set
include::modules/common-attributes.adoc[]
:context: manually-scaling-machineset

toc::[]

You can add or remove an instance of a machine in a MachineSet.
You can add or remove an instance of a machine in a machine set.
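For example, assuming a machine set named `<machineset>` in the `openshift-machine-api` namespace, scaling it to two replicas might look like this sketch:

[source,terminal]
----
$ oc scale machineset <machineset> --replicas=2 -n openshift-machine-api
----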
[NOTE]
====
If you need to modify aspects of a MachineSet outside of scaling,
see xref:../machine_management/modifying-machineset.adoc#modifying-machineset[Modifying a MachineSet].
If you need to modify aspects of a machine set outside of scaling, see xref:../machine_management/modifying-machineset.adoc#modifying-machineset[Modifying a machine set].
====

== Prerequisites

@@ -21,4 +20,4 @@ include::modules/machine-user-provisioned-limitations.adoc[leveloffset=+1]

include::modules/machineset-manually-scaling.adoc[leveloffset=+1]

include::modules/machineset-delete-policy.adoc[leveloffset=+1]
include::modules/machineset-delete-policy.adoc[leveloffset=+1]

@@ -1,18 +1,16 @@
[id="modifying-machineset"]
= Modifying a MachineSet
= Modifying a machine set
include::modules/common-attributes.adoc[]
:context: modifying-machineset

toc::[]

You can make changes to a MachineSet, such as adding labels, changing the instance type,
or changing block storage.
You can make changes to a machine set, such as adding labels, changing the instance type, or changing block storage.
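For example, assuming you edit the machine set directly in the cluster, the change might start from a sketch like this:

[source,terminal]
----
$ oc edit machineset <machineset> -n openshift-machine-api
----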
[NOTE]
====
If you need to scale a MachineSet without making other changes,
see xref:../machine_management/manually-scaling-machineset.adoc#manually-scaling-machineset[Manually scaling a MachineSet].
If you need to scale a machine set without making other changes, see xref:../machine_management/manually-scaling-machineset.adoc#manually-scaling-machineset[Manually scaling a machine set].
====

@@ -10,17 +10,12 @@ You can add more compute machines to your {product-title} cluster on bare metal.
== Prerequisites

* You xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[installed a cluster on bare metal].
* You have installation media and {op-system-first} images that you used to
create your cluster. If you do not have these files, you must obtain them by
following the instructions in the
xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[installation procedure].
* You have installation media and {op-system-first} images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[installation procedure].

[id="creating-machines-bare-metal"]
== Creating {op-system-first} machines

Before you add more compute machines to a cluster that you installed on bare
metal infrastructure, you must create {op-system} machines for it to use.
Follow either the steps to use an ISO image or network PXE booting to create the machines.
Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create {op-system} machines for it to use. Follow either the steps to use an ISO image or network PXE booting to create the machines.

include::modules/machine-user-infra-machines-iso.adoc[leveloffset=+2]

@@ -5,9 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

In {product-title}, you can add Red Hat Enterprise Linux (RHEL) compute, or
worker, machines to a user-provisioned infrastructure cluster. You can use RHEL
as the operating system on only compute machines.
In {product-title}, you can add Red Hat Enterprise Linux (RHEL) compute, or worker, machines to a user-provisioned infrastructure cluster. You can use RHEL as the operating system on only compute machines.

include::modules/rhel-compute-overview.adoc[leveloffset=+1]

@@ -23,4 +21,4 @@ include::modules/installation-approve-csrs.adoc[leveloffset=+1]

include::modules/rhel-ansible-parameters.adoc[leveloffset=+1]

include::modules/rhel-removing-rhcos.adoc[leveloffset=+2]
include::modules/rhel-removing-rhcos.adoc[leveloffset=+2]

@@ -5,8 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

You can add more compute machines to your {product-title} cluster on VMware
vSphere.
You can add more compute machines to your {product-title} cluster on VMware vSphere.

== Prerequisites

@@ -5,8 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

If your {product-title} cluster already includes Red Hat Enterprise Linux (RHEL)
compute machines, which are also known as worker machines, you can add more RHEL compute machines to it.
If your {product-title} cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it.

include::modules/rhel-compute-overview.adoc[leveloffset=+1]
@@ -4,85 +4,45 @@
// * post_installation_configuration/cluster-tasks.adoc

[id="cluster-autoscaler-about_{context}"]
= About the ClusterAutoscaler
= About the cluster autoscaler

The ClusterAutoscaler adjusts the size of an {product-title} cluster to meet
its current deployment needs. It uses declarative, Kubernetes-style arguments to
provide infrastructure management that does not rely on objects of a specific
cloud provider. The ClusterAutoscaler has a cluster scope, and is not associated
with a particular namespace.
The cluster autoscaler adjusts the size of an {product-title} cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace.

The ClusterAutoscaler increases the size of the cluster when there are pods
that failed to schedule on any of the current nodes due to insufficient
resources or when another node is necessary to meet deployment needs. The
ClusterAutoscaler does not increase the cluster resources beyond the limits
that you specify.
The cluster autoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.

[IMPORTANT]
====
Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to.
Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to.
====

The ClusterAutoscaler decreases the size of the cluster when some nodes are
consistently not needed for a significant period, such as when it has low
resource use and all of its important pods can fit on other nodes.
The cluster autoscaler decreases the size of the cluster when some nodes are consistently not needed for a significant period, such as when it has low resource use and all of its important pods can fit on other nodes.

If the following types of pods are present on a node, the ClusterAutoscaler
will not remove the node:
If the following types of pods are present on a node, the cluster autoscaler will not remove the node:

* Pods with restrictive PodDisruptionBudgets (PDBs).
* Pods with restrictive pod disruption budgets (PDBs).
* Kube-system pods that do not run on the node by default.
* Kube-system pods that do not have a PDB or have a PDB that is too restrictive.
* Pods that are not backed by a controller object such as a deployment,
replica set, or stateful set.
* Pods that are not backed by a controller object such as a deployment, replica set, or stateful set.
* Pods with local storage.
* Pods that cannot be moved elsewhere because of a lack of resources,
incompatible node selectors or affinity, matching anti-affinity, and so on.
* Unless they also have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"`
annotation, pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"`
annotation.
* Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
* Unless they also have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"` annotation, pods that have a `"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"` annotation.
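The `safe-to-evict` annotation from the preceding list is set in the pod metadata; a minimal sketch, with the pod name and image as placeholders:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example                # illustrative name
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"  # keeps the cluster autoscaler from removing the node that runs this pod
spec:
  containers:
  - name: example
    image: example.com/example:latest  # illustrative image
----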
If you configure the ClusterAutoscaler, additional usage restrictions apply:
If you configure the cluster autoscaler, additional usage restrictions apply:

* Do not modify the nodes that are in autoscaled node groups directly. All nodes
within the same node group have the same capacity and labels and run the same
system pods.
* Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods.
* Specify requests for your pods.
* If you have to prevent pods from being deleted too quickly, configure
appropriate PDBs.
* Confirm that your cloud provider quota is large enough to support the
maximum node pools that you configure.
* Do not run additional node group autoscalers, especially the ones offered by
your cloud provider.
* If you have to prevent pods from being deleted too quickly, configure appropriate PDBs.
* Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure.
* Do not run additional node group autoscalers, especially the ones offered by your cloud provider.

The horizontal pod autoscaler (HPA) and the ClusterAutoscaler modify cluster
resources in different ways. The HPA changes the deployment's or ReplicaSet's
number of replicas based on the current CPU load.
If the load increases, the HPA creates new replicas, regardless of the amount
of resources available to the cluster.
If there are not enough resources, the ClusterAutoscaler adds resources so that
the HPA-created pods can run.
If the load decreases, the HPA stops some replicas. If this action causes some
nodes to be underutilized or completely empty, the ClusterAutoscaler deletes
the unnecessary nodes.
The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes.

The ClusterAutoscaler takes pod priorities into account. The Pod Priority and
Preemption feature enables scheduling pods based on priorities if the cluster
does not have enough resources, but the ClusterAutoscaler ensures that the
cluster has resources to run all pods. To honor the intention of both features,
the ClusterAutoscaler inclues a priority cutoff function. You can use this cutoff to
schedule "best-effort" pods, which do not cause the ClusterAutoscaler to
increase resources but instead run only when spare resources are available.
The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available.

Pods with priority lower than the cutoff value do not cause the cluster to scale
up or prevent the cluster from scaling down. No new nodes are added to run the
pods, and nodes running these pods might be deleted to free resources.
Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources.

////
Default priority cutoff is 0. It can be changed using `--expendable-pods-priority-cutoff` flag,
but we discourage it.
ClusterAutoscaler also doesn't trigger scale-up if an unschedulable Pod is already waiting for a lower
priority Pod preemption.
Default priority cutoff is 0. It can be changed using `--expendable-pods-priority-cutoff` flag, but we discourage it. cluster autoscaler also doesn't trigger scale-up if an unschedulable Pod is already waiting for a lower priority Pod preemption.
////
@@ -4,10 +4,9 @@
// * post_installation_configuration/cluster-tasks.adoc

[id="cluster-autoscaler-cr_{context}"]
= ClusterAutoscaler resource definition
= `ClusterAutoscaler` resource definition

This `ClusterAutoscaler` resource definition shows the parameters and sample
values for the ClusterAutoscaler.
This `ClusterAutoscaler` resource definition shows the parameters and sample values for the cluster autoscaler.

[source,yaml]
@@ -40,32 +39,18 @@ spec:
delayAfterFailure: 30s <14>
unneededTime: 60s <15>
----
<1> Specify the priority that a pod must exceed to cause the ClusterAutoscaler
to deploy additional nodes. Enter a 32-bit integer value. The
`podPriorityThreshold` value is compared to the value of the `PriorityClass` that
you assign to each pod.
<1> Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The `podPriorityThreshold` value is compared to the value of the `PriorityClass` that you assign to each pod.
<2> Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your `MachineAutoscaler` resources.
<3> Specify the minimum number of cores to deploy.
<4> Specify the maximum number of cores to deploy.
<5> Specify the minimum amount of memory, in GiB, per node.
<6> Specify the maximum amount of memory, in GiB, per node.
<7> Optionally, specify the type of GPU node to deploy. Only
`nvidia.com/gpu` and `amd.com/gpu` are valid types.
<7> Optionally, specify the type of GPU node to deploy. Only `nvidia.com/gpu` and `amd.com/gpu` are valid types.
<8> Specify the minimum number of GPUs to deploy.
<9> Specify the maximum number of GPUs to deploy.
<10> In this section, you can specify the period to wait for each action by
using any valid
link:https://golang.org/pkg/time/#ParseDuration[ParseDuration] interval, including
`ns`, `us`, `ms`, `s`, `m`, and `h`.
<11> Specify whether the ClusterAutoscaler can remove unnecessary nodes.
<12> Optionally, specify the period to wait before deleting a node after
a node has recently been _added_. If you do not specify a value, the default
value of `10m` is used.
<13> Specify the period to wait before deleting a node after
a node has recently been _deleted_. If you do not specify a value, the default
value of `10s` is used.
<14> Specify the period to wait before deleting a node after
a scale down failure occurred. If you do not specify a value, the default
value of `3m` is used.
<15> Specify the period before an unnecessary node is eligible
for deletion. If you do not specify a value, the default value of `10m` is used.
<10> In this section, you can specify the period to wait for each action by using any valid link:https://golang.org/pkg/time/#ParseDuration[ParseDuration] interval, including `ns`, `us`, `ms`, `s`, `m`, and `h`.
<11> Specify whether the cluster autoscaler can remove unnecessary nodes.
<12> Optionally, specify the period to wait before deleting a node after a node has recently been _added_. If you do not specify a value, the default value of `10m` is used.
<13> Specify the period to wait before deleting a node after a node has recently been _deleted_. If you do not specify a value, the default value of `10s` is used.
<14> Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of `3m` is used.
<15> Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of `10m` is used.
@@ -12,13 +12,11 @@
[id="{FeatureName}-deploying_{context}"]
= Deploying the {FeatureName}

To deploy the {FeatureName}, you create an instance of the `{FeatureName}`
resource.
To deploy the {FeatureName}, you create an instance of the `{FeatureName}` resource.

.Procedure

. Create a YAML file for the `{FeatureName}` resource that contains the
customized resource definition.
. Create a YAML file for the `{FeatureName}` resource that contains the customized resource definition.

. Create the resource in the cluster:
+

@@ -15,5 +15,4 @@ The following {product-title} components are infrastructure components:
* Cluster aggregated logging
* Service brokers

Any node that runs any other container, pod, or component is a worker node that
your subscription must cover.
Any node that runs any other container, pod, or component is a worker node that your subscription must cover.
@@ -6,16 +6,13 @@
[id="infrastructure-moving-logging_{context}"]
= Moving the cluster logging resources

You can configure the Cluster Logging Operator to deploy the pods
for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator to different nodes.
You cannot move the Cluster Logging Operator pod from its installed location.
You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of
high CPU, memory, and disk requirements.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.
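Assuming the target infra nodes carry the `node-role.kubernetes.io/infra` label, the Elasticsearch node selector in the `ClusterLogging` CR might look like this sketch, with all other fields omitted:

[source,yaml]
----
spec:
  logStore:
    elasticsearch:
      nodeSelector:
        node-role.kubernetes.io/infra: ""  # schedule the Elasticsearch pods onto the labeled infra nodes
----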
[NOTE]
====
You should set your MachineSet to use at least 6 replicas.
You should set your machine set to use at least 6 replicas.
====

.Prerequisites

@@ -24,7 +21,7 @@ You should set your MachineSet to use at least 6 replicas.

.Procedure

. Edit the Cluster Logging Custom Resource in the `openshift-logging` project:
. Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
+
[source,terminal]
----

@@ -146,7 +143,7 @@ metadata:
....
----

* To move the Kibana pod, edit the Cluster Logging CR to add a node selector:
* To move the Kibana pod, edit the `ClusterLogging` CR to add a node selector:
+
[source,yaml]
----
@@ -21,11 +21,9 @@ ifeval::["{context}" == "installing-ibm-z"]
endif::[]

[id="installation-approve-csrs_{context}"]
= Approving the CSRs for your machines
= Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests
(CSRs) are generated for each machine that you added. You must confirm that
these CSRs are approved or, if necessary, approve them yourself.
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.

.Prerequisites

@@ -53,9 +51,7 @@ worker-1 NotReady worker 70s v1.19.0
+
The output lists all of the machines that you created.

. Review the pending CSRs and ensure that
you see a client and server request with the `Pending` or `Approved` status for
each machine that you added to the cluster:
. Review the pending CSRs and ensure that you see a client and server request with the `Pending` or `Approved` status for each machine that you added to the cluster:
+
ifndef::ibm-z[]
[source,terminal]

@@ -76,8 +72,7 @@ csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal
<1> A client request CSR.
<2> A server request CSR.
+
In this example, two machines are joining the cluster. You might see more
approved CSRs in the list.
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
endif::ibm-z[]
ifdef::ibm-z[]
[source,terminal]

@@ -94,22 +89,14 @@ csr-z5rln 16m system:node:worker-21.example.com Approved,Issued
----
endif::ibm-z[]

. If the CSRs were not approved, after all of the pending CSRs for the machines
you added are in `Pending` status, approve the CSRs for your cluster machines:
. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in `Pending` status, approve the CSRs for your cluster machines:
+
[NOTE]
====
Because the CSRs rotate automatically, approve your CSRs within an hour
of adding the machines to the cluster. If you do not approve them within an
hour, the certificates will rotate, and more than two certificates will be
present for each node. You must approve all of these certificates. After you
approve the initial CSRs, the subsequent node client CSRs are automatically
approved by the cluster `kube-controller-manager`. You must implement a method
of automatically approving the kubelet serving certificate requests.
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster `kube-controller-manager`. You must implement a method of automatically approving the kubelet serving certificate requests.
====

** To approve them individually, run the following command for each valid
CSR:
** To approve them individually, run the following command for each valid CSR:
+
[source,terminal]
----
@@ -11,55 +11,28 @@
[id="machine-api-overview_{context}"]
= Machine API overview

The Machine API is a combination of primary resources that are based on the
upstream Cluster API project and custom {product-title} resources.
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom {product-title} resources.

For {product-title} {product-version} clusters, the Machine API performs all node
host provisioning management actions after the cluster installation finishes.
Because of this system, {product-title} {product-version} offers an elastic,
dynamic provisioning method on top of public or private cloud infrastructure.
For {product-title} {product-version} clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, {product-title} {product-version} offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.

The two primary resources are:

Machines:: A fundamental unit that describes the host for a Node. A machine has a
providerSpec, which describes the types of compute nodes that are offered for different
cloud platforms. For example, a machine type for a worker node on Amazon Web
Services (AWS) might define a specific machine type and required metadata.
MachineSets:: Groups of machines. MachineSets are to machines as
ReplicaSets are to pods. If you need more machines or must scale them down,
you change the *replicas* field on the MachineSet to meet your compute need.
Machines:: A fundamental unit that describes the host for a Node. A machine has a `providerSpec` specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.

Machine sets:: `MachineSet` resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the *replicas* field on the machine set to meet your compute need.
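A highly abridged sketch of the shape of a `MachineSet` resource; a real machine set carries additional cluster and role labels, and the cloud-specific fields live under `providerSpec.value`:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <clusterid>-worker-us-east-1a          # illustrative name
  namespace: openshift-machine-api
spec:
  replicas: 2                                  # change this field to scale the machine set
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <clusterid>-worker-us-east-1a
  template:
    spec:
      providerSpec:
        value: {}                              # cloud-specific machine configuration (instance type, image, and so on)
----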
The following custom resources add more capabilities to your cluster:

MachineAutoscaler:: This resource automatically scales machines in
a cloud. You can set the minimum and maximum scaling boundaries for nodes in a
specified MachineSet, and the MachineAutoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object
exists. Both ClusterAutoscaler and MachineAutoscaler resources are made
available by the ClusterAutoscalerOperator.
Machine autoscaler:: The `MachineAutoscaler` resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified machine set, and the machine autoscaler maintains that range of nodes. The `MachineAutoscaler` object takes effect after a `ClusterAutoscaler` object exists. Both `ClusterAutoscaler` and `MachineAutoscaler` resources are made available by the `ClusterAutoscalerOperator` object.

ClusterAutoscaler:: This resource is based on the upstream ClusterAutoscaler
project. In the {product-title} implementation, it is integrated with the
Machine API by extending the MachineSet API. You can set cluster-wide
scaling limits for resources such as cores, nodes, memory, GPU,
and so on. You can set the priority so that the cluster prioritizes pods so that
new nodes are not brought online for less important pods. You can also set the
ScalingPolicy so you can scale up nodes but not scale them down.
Cluster autoscaler:: This resource is based on the upstream cluster autoscaler project. In the {product-title} implementation, it is integrated with the Machine API by extending the machine set API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the scaling policy so that you can scale up nodes but not scale them down.

MachineHealthCheck:: This resource detects when a machine is unhealthy,
deletes it, and, on supported platforms, makes a new machine.
Machine health check:: The `MachineHealthCheck` resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
+
[NOTE]
====
In version {product-version}, MachineHealthChecks is a Technology Preview
feature
In version {product-version}, machine health check is a Technology Preview
feature.
====

In {product-title} version 3.11, you could not roll out a multi-zone
architecture easily because the cluster did not manage machine provisioning.
Beginning with {product-title} version 4.1, this process is easier. Each MachineSet is scoped to a
single zone, so the installation program sends out MachineSets across
availability zones on your behalf. And then because your compute is dynamic, and
in the face of a zone failure, you always have a zone for when you must
rebalance your machines. The autoscaler provides best-effort balancing over the
life of a cluster.
In {product-title} version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with {product-title} version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program sends out machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. The autoscaler provides best-effort balancing over the life of a cluster.
@@ -4,21 +4,11 @@
// * post_installation_configuration/cluster-tasks.adoc

[id="machine-autoscaler-about_{context}"]
= About the MachineAutoscaler
= About the machine autoscaler

The MachineAutoscaler adjusts the number of Machines in the MachineSets that you
deploy in an {product-title} cluster. You can scale both the default `worker`
MachineSet and any other MachineSets that you create. The MachineAutoscaler
makes more Machines when the cluster runs out of resources to support more
deployments. Any changes to the values in MachineAutoscaler resources, such as
the minimum or maximum number of instances, are immediately applied
to the MachineSet they target.
The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an {product-title} cluster. You can scale both the default `worker` machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in `MachineAutoscaler` resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target.

[IMPORTANT]
====
You must deploy a MachineAutoscaler for the ClusterAutoscaler to scale your
machines. The ClusterAutoscaler uses the annotations on MachineSets that the
MachineAutoscaler sets to determine the resources that it can scale. If you
define a ClusterAutoscaler without also defining MachineAutoscalers, the
ClusterAutoscaler will never scale your cluster.
You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster.
====
@@ -4,10 +4,9 @@
// * post_installation_configuration/cluster-tasks.adoc

[id="machine-autoscaler-cr_{context}"]
= MachineAutoscaler resource definition
= `MachineAutoscaler` resource definition

This MachineAutoscaler resource definition shows the parameters and sample
values for the MachineAutoscaler.
This `MachineAutoscaler` resource definition shows the parameters and sample values for the machine autoscaler.

[source,yaml]
@@ -25,16 +24,9 @@ spec:
kind: MachineSet <5>
name: worker-us-east-1a <6>
----
<1> Specify the `MachineAutoscaler` name. To make it easier to identify
which MachineSet this MachineAutoscaler scales, specify or include the name of
the MachineSet to scale. The MachineSet name takes the following form:
`<clusterid>-<machineset>-<aws-region-az>`
<2> Specify the minimum number Machines of the specified type that must remain in the
specified zone after the ClusterAutoscaler initiates cluster scaling. If running in AWS, GCP, or Azure, this value can be set to `0`. For other providers, do not set this value to `0`.
<3> Specify the maximum number Machines of the specified type that the ClusterAutoscaler can deploy in the
specified AWS zone after it initiates cluster scaling. Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` definition is large enough to allow the MachineAutoScaler to deploy this number of machines.
<4> In this section, provide values that describe the existing MachineSet to
scale.
<1> Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: `<clusterid>-<machineset>-<aws-region-az>`
<2> Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, or Azure, this value can be set to `0`. For other providers, do not set this value to `0`.
<3> Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified AWS zone after it initiates cluster scaling. Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` resource definition is large enough to allow the machine autoscaler to deploy this number of machines.
<4> In this section, provide values that describe the existing machine set to scale.
<5> The `kind` parameter value is always `MachineSet`.
<6> The `name` value must match the name of an existing MachineSet, as shown
in the `metadata.name` parameter value.
<6> The `name` value must match the name of an existing machine set, as shown in the `metadata.name` parameter value.
@@ -16,16 +16,16 @@ You can delete a specific machine.

.Procedure

. View the Machines that are in the cluster and identify the one to delete:
. View the machines that are in the cluster and identify the one to delete:
+
[source,terminal]
----
$ oc get machine -n openshift-machine-api
----
+
The command output contains a list of Machines in the `<clusterid>-worker-<cloud_region>` format.
The command output contains a list of machines in the `<clusterid>-worker-<cloud_region>` format.

. Delete the Machine:
. Delete the machine:
+
[source,terminal]
----

@@ -35,6 +35,5 @@ $ oc delete machine <machine> -n openshift-machine-api
+
[IMPORTANT]
====
By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured Pod disruption budget, the drain operation might not be able to succeed in preventing the machine from being deleted. You can skip draining the node by annotating "machine.openshift.io/exclude-node-draining" in a specific machine.
If the machine being deleted belongs to a MachineSet, a new machine is immediately created to satisfy the specified number of replicas.
By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed in preventing the machine from being deleted. You can skip draining the node by annotating "machine.openshift.io/exclude-node-draining" in a specific machine. If the machine being deleted belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas.
====
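For example, assuming you want to skip draining for one machine before you delete it, applying the annotation might look like the following sketch; the empty value is illustrative:

[source,terminal]
----
$ oc annotate machine <machine> machine.openshift.io/exclude-node-draining="" -n openshift-machine-api
----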
@@ -5,20 +5,16 @@
[id="machine-user-infra-machines-iso_{context}"]
= Creating more {op-system-first} machines using an ISO image

You can create more compute machines for your bare metal cluster by using an
ISO image to create the machines.
You can create more compute machines for your bare metal cluster by using an ISO image to create the machines.

.Prerequisites

* Obtain the URL of the Ignition config file for the compute machines for your
cluster. You uploaded this file to your HTTP server during installation.
* Obtain the URL of the BIOS or UEFI {op-system} image file that you uploaded
to your HTTP server during cluster installation.
* Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
* Obtain the URL of the BIOS or UEFI {op-system} image file that you uploaded to your HTTP server during cluster installation.

.Procedure

. Use the ISO file to install {op-system} on more compute machines. Use the same
method that you used when you created machines before you installed the cluster:
. Use the ISO file to install {op-system} on more compute machines. Use the same method that you used when you created machines before you installed the cluster:
** Burn the ISO image to a disk and boot it directly.
** Use ISO redirection with a LOM interface.

@@ -36,8 +32,6 @@ coreos.inst.ignition_url=http://example.com/worker.ign <3>
<2> Specify the URL of the UEFI or BIOS image that you uploaded to your server.
<3> Specify the URL of the compute Ignition config file.

. Press `Enter` to complete the installation. After {op-system} installs, the system
reboots. After the system reboots, it applies the Ignition config file that you
specified.
. Press `Enter` to complete the installation. After {op-system} installs, the system reboots. After the system reboots, it applies the Ignition config file that you specified.

. Continue to create more compute machines for your cluster.
@@ -5,26 +5,18 @@
|
||||
[id="machine-user-infra-machines-pxe_{context}"]
|
||||
= Creating more {op-system-first} machines by PXE or iPXE booting
|
||||
|
||||
You can create more compute machines for your bare metal cluster by using
|
||||
PXE or iPXE booting.
|
||||
You can create more compute machines for your bare metal cluster by using PXE or iPXE booting.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* Obtain the URL of the Ignition config file for the compute machines for your
|
||||
cluster. You uploaded this file to your HTTP server during installation.
|
||||
* Obtain the URLs of the {op-system} ISO image, compressed metal BIOS, `kernel`,
|
||||
and `initramfs` files that you uploaded to your HTTP server during cluster
|
||||
installation.
|
||||
* You have access to the PXE booting infrastructure that you used to create the machines
|
||||
for your {product-title} cluster during installation. The machines must boot
|
||||
from their local disks after {op-system} is installed on them.
|
||||
* If you use UEFI, you have access to the `grub.conf` file that you modified
|
||||
during {product-title} installation.
|
||||
* Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation.
|
||||
* Obtain the URLs of the {op-system} ISO image, compressed metal BIOS, `kernel`, and `initramfs` files that you uploaded to your HTTP server during cluster installation.
|
||||
* You have access to the PXE booting infrastructure that you used to create the machines for your {product-title} cluster during installation. The machines must boot from their local disks after {op-system} is installed on them.
|
||||
* If you use UEFI, you have access to the `grub.conf` file that you modified during {product-title} installation.
|
||||
|
||||
.Procedure
|
||||
|
||||
. Confirm that your PXE or iPXE installation for the {op-system} images is
|
||||
correct.
|
||||
. Confirm that your PXE or iPXE installation for the {op-system} images is correct.
|
||||
|
||||
** For PXE:
|
||||
+
|
||||
@@ -36,13 +28,8 @@ LABEL pxeboot
|
||||
KERNEL http://<HTTP_server>/rhcos-<version>-installer-live-kernel-<architecture> <1>
|
||||
APPEND initrd=http://<HTTP_server>/rhcos-<version>-installer-live-initramfs.<architecture>.img console=ttyS0 console=tty0 coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-installer-live-rootfs.<architecture>.img <2>
|
||||
----
|
||||
<1> Specify the location of the live `kernel` file that you uploaded to your HTTP
|
||||
server.
|
||||
<2> Specify locations of the {op-system} files that you uploaded to your HTTP
|
||||
server. The `initrd` parameter value is the location of the live `initramfs`
|
||||
file, the `coreos.inst.ignition_url` parameter value is the location of the
|
||||
worker Ignition config file, and the `coreos.live.rootfs_url` parameter value is
|
||||
the location of the live `rootfs` file.
|
||||
<1> Specify the location of the live `kernel` file that you uploaded to your HTTP server.
|
||||
<2> Specify locations of the {op-system} files that you uploaded to your HTTP server. The `initrd` parameter value is the location of the live `initramfs` file, the `coreos.inst.ignition_url` parameter value is the location of the worker Ignition config file, and the `coreos.live.rootfs_url` parameter value is the location of the live `rootfs` file. The `coreos.inst.ignition_url` and `coreos.live.rootfs_url` parameters only support HTTP and HTTPS.
|
||||
|
||||
** For iPXE:
|
||||
+
|
||||
@@ -50,13 +37,7 @@ the location of the live `rootfs` file.
|
||||
kernel http://<HTTP_server>/rhcos-<version>-installer-kernel-<architecture> console=ttyS0 console=tty0 coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-installer-live-rootfs.<architecture>.img <1>
|
||||
initrd=http://<HTTP_server>/rhcos-<version>-installer-live-initramfs.<architecture>.img <2>
|
||||
----
|
||||
<1> Specify locations of the {op-system} files that you uploaded to your
|
||||
HTTP server. The `kernel` parameter value is the location of the `kernel` file,
|
||||
the `coreos.inst.ignition_url` parameter value is the location of the worker
|
||||
Ignition config file, and the `coreos.live.rootfs_url` parameter value is
|
||||
the location of the live `rootfs` file.
|
||||
<2> Specify the location of the `initramfs` file that you uploaded to your HTTP
|
||||
server.
|
||||
<1> Specify locations of the {op-system} files that you uploaded to your HTTP server. The `kernel` parameter value is the location of the `kernel` file, the `coreos.inst.ignition_url` parameter value is the location of the worker Ignition config file, and the `coreos.live.rootfs_url` parameter value is the location of the live `rootfs` file. The `coreos.inst.ignition_url` and `coreos.live.rootfs_url` parameters only support HTTP and HTTPS.
|
||||
<2> Specify the location of the `initramfs` file that you uploaded to your HTTP server.
|
||||
|
||||
. Use the PXE or iPXE infrastructure to create the required compute machines for your
|
||||
cluster.
|
||||
. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster.
|
||||
|
||||
@@ -9,6 +9,5 @@
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
This process is not applicable to clusters where you manually provisioned the machines yourself. You
|
||||
can use the advanced machine management and scaling capabilities only in clusters where the machine API is operational.
|
||||
This process is not applicable to clusters where you manually provisioned the machines yourself. You can use the advanced machine management and scaling capabilities only in clusters where the machine API is operational.
|
||||
====
|
||||
|
||||
@@ -5,8 +5,7 @@
|
||||
[id="machine-vsphere-machines_{context}"]
|
||||
= Creating more {op-system-first} machines in vSphere
|
||||
|
||||
You can create more compute machines for your cluster that uses user-provisioned
|
||||
infrastructure on VMware vSphere.
|
||||
You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
@@ -17,28 +16,18 @@ infrastructure on VMware vSphere.
|
||||
|
||||
. After the template deploys, deploy a VM for a machine in the cluster.
|
||||
.. Right-click the template's name and click *Clone* -> *Clone to Virtual Machine*.
|
||||
.. On the *Select a name and folder* tab, specify a name for the VM. You might
|
||||
include the machine type in the name, such as `compute-1`.
|
||||
.. On the *Select a name and folder* tab, select the name of the folder that
|
||||
you created for the cluster.
|
||||
.. On the *Select a compute resource* tab, select the name of a host in your
|
||||
datacenter.
|
||||
.. On the *Select a name and folder* tab, specify a name for the VM. You might include the machine type in the name, such as `compute-1`.
|
||||
.. On the *Select a name and folder* tab, select the name of the folder that you created for the cluster.
|
||||
.. On the *Select a compute resource* tab, select the name of a host in your datacenter.
|
||||
.. Optional: On the *Select storage* tab, customize the storage options.
|
||||
.. On the *Select clone options*, select
|
||||
*Customize this virtual machine's hardware*.
|
||||
.. On the *Select clone options*, select *Customize this virtual machine's hardware*.
|
||||
.. On the *Customize hardware* tab, click *VM Options* -> *Advanced*.
|
||||
*** From the *Latency Sensitivity* list, select *High*.
|
||||
*** Click *Edit Configuration*, and on the *Configuration Parameters* window,
|
||||
click *Add Configuration Params*. Define the following parameter names and values:
|
||||
**** `guestinfo.ignition.config.data`: Paste the contents of the base64-encoded
|
||||
compute Ignition config file for this machine type.
|
||||
*** Click *Edit Configuration*, and on the *Configuration Parameters* window, click *Add Configuration Params*. Define the following parameter names and values:
|
||||
**** `guestinfo.ignition.config.data`: Paste the contents of the base64-encoded compute Ignition config file for this machine type. One way to produce this value is shown in the sketch after this procedure.
**** `guestinfo.ignition.config.data.encoding`: Specify `base64`.
|
||||
**** `disk.EnableUUID`: Specify `TRUE`.
|
||||
.. In the *Virtual Hardware* panel of the
|
||||
*Customize hardware* tab, modify the specified values as required. Ensure that
|
||||
the amount of RAM, CPU, and disk storage meets the minimum requirements for the
|
||||
machine type. Also, make sure to select the correct network under *Add network adapter*
|
||||
if there are multiple networks available.
|
||||
.. In the *Virtual Hardware* panel of the *Customize hardware* tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under *Add network adapter* if there are multiple networks available.
|
||||
.. Complete the configuration and power on the VM.
|
||||
|
||||
. Continue to create more compute machines for your cluster.
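As one way to produce the base64-encoded value for the `guestinfo.ignition.config.data` parameter, assuming a GNU `base64` utility and a compute Ignition config named `worker.ign` in your installation directory:

[source,terminal]
----
$ base64 -w0 <installation_directory>/worker.ign
----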
|
||||
@@ -15,13 +15,13 @@ ifeval::["{context}" == "creating-machineset-gcp"]
|
||||
endif::[]
|
||||
|
||||
[id="machineset-creating-non-guaranteed-instance_{context}"]
|
||||
ifdef::aws[= Creating Spot Instances by using MachineSets]
|
||||
ifdef::azure[= Creating Spot VMs by using MachineSets]
|
||||
ifdef::gcp[= Creating preemptible VM instances by using MachineSets]
|
||||
ifdef::aws[= Creating Spot Instances by using machine sets]
|
||||
ifdef::azure[= Creating Spot VMs by using machine sets]
|
||||
ifdef::gcp[= Creating preemptible VM instances by using machine sets]
|
||||
|
||||
ifdef::aws[You can launch a Spot Instance on AWS by adding `spotMarketOptions` to your MachineSet YAML file.]
|
||||
ifdef::azure[You can launch a Spot VM on Azure by adding `spotVMOptions` to your MachineSet YAML file.]
|
||||
ifdef::gcp[You can launch a preemptible VM instance on GCP by adding `preemptible` to your MachineSet YAML file.]
|
||||
ifdef::aws[You can launch a Spot Instance on AWS by adding `spotMarketOptions` to your machine set YAML file.]
|
||||
ifdef::azure[You can launch a Spot VM on Azure by adding `spotVMOptions` to your machine set YAML file.]
|
||||
ifdef::gcp[You can launch a preemptible VM instance on GCP by adding `preemptible` to your machine set YAML file.]
|
||||
|
||||
.Procedure
|
||||
* Add the following line under the `providerSpec` field:
|
||||
@@ -51,11 +51,9 @@ providerSpec:
|
||||
spotVMOptions: {}
|
||||
----
|
||||
+
|
||||
You can optionally set the `spotVMOptions.maxPrice` field to limit the cost of the Spot VM. For example you can set `maxPrice: '0.98765'`.
|
||||
If the `maxPrice` is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to `-1` and charges up to the standard VM price.
|
||||
You can optionally set the `spotVMOptions.maxPrice` field to limit the cost of the Spot VM. For example you can set `maxPrice: '0.98765'`. If the `maxPrice` is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to `-1` and charges up to the standard VM price.
|
||||
+
|
||||
Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default `maxPrice`.
|
||||
However, an instance can still be evicted due to capacity restrictions.
|
||||
Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default `maxPrice`. However, an instance can still be evicted due to capacity restrictions.
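+
Put together, a minimal sketch of the relevant fragment with an hourly price cap, assuming the provider configuration sits under `providerSpec.value` in your machine set and the remaining fields are unchanged:
+
[source,yaml]
----
providerSpec:
  value:
    spotVMOptions:
      maxPrice: '0.98765'
----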
|
||||
[NOTE]
|
||||
====
|
||||
|
||||
@@ -22,11 +22,9 @@ ifeval::["{context}" == "creating-machineset-vsphere"]
|
||||
endif::[]
|
||||
|
||||
[id="machineset-creating_{context}"]
|
||||
= Creating a MachineSet
|
||||
= Creating a machine set
|
||||
|
||||
In addition to the ones created by the installation program, you can create
|
||||
your own MachineSets to dynamically manage the machine compute resources for
|
||||
specific workloads of your choice.
|
||||
In addition to the ones created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
@@ -34,19 +32,17 @@ specific workloads of your choice.
|
||||
* Install the OpenShift CLI (`oc`).
|
||||
* Log in to `oc` as a user with `cluster-admin` permission.
|
||||
ifdef::vsphere[]
|
||||
* Create a tag inside your vCenter instance based on the cluster API name. This tag is utilized by the MachineSet to associate the {product-title} nodes to the provisioned virtual machines (VM). For directions on creating tags in vCenter, see the VMware documentation for link:https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-E8E854DD-AA97-4E0C-8419-CE84F93C4058.html[vSphere Tags and Attributes].
|
||||
* Create a tag inside your vCenter instance based on the cluster API name. This tag is utilized by the machine set to associate the {product-title} nodes to the provisioned virtual machines (VM). For directions on creating tags in vCenter, see the VMware documentation for link:https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcenterhost.doc/GUID-E8E854DD-AA97-4E0C-8419-CE84F93C4058.html[vSphere Tags and Attributes].
|
||||
* Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified.
|
||||
endif::vsphere[]
|
||||
|
||||
.Procedure
|
||||
|
||||
. Create a new YAML file that contains the MachineSet Custom Resource sample,
|
||||
as shown, and is named `<file_name>.yaml`.
|
||||
. Create a new YAML file that contains the machine set custom resource (CR) sample, as shown, and is named `<file_name>.yaml`.
|
||||
+
|
||||
Ensure that you set the `<clusterID>` and `<role>` parameter values.
|
||||
|
||||
.. If you are not sure about which value to set for a specific field, you can
|
||||
check an existing MachineSet from your cluster.
|
||||
.. If you are not sure about which value to set for a specific field, you can check an existing machine set from your cluster.
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -65,7 +61,7 @@ agl030519-vplxk-worker-us-east-1e 0 0 55m
|
||||
agl030519-vplxk-worker-us-east-1f 0 0 55m
|
||||
----
|
||||
|
||||
.. Check values of a specific MachineSet:
|
||||
.. Check values of a specific machine set:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -88,14 +84,14 @@ template:
|
||||
<1> The cluster ID.
|
||||
<2> A default node label.
|
||||
|
||||
. Create the new `MachineSet`:
|
||||
. Create the new `MachineSet` CR:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc create -f <file_name>.yaml
|
||||
----
|
||||
|
||||
. View the list of MachineSets:
|
||||
. View the list of machine sets:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -127,11 +123,9 @@ agl030519-vplxk-worker-us-east-1f 0 0 55m
|
||||
endif::win[]
|
||||
----
|
||||
+
|
||||
When the new MachineSet is available, the `DESIRED` and `CURRENT` values match.
|
||||
If the MachineSet is not available, wait a few minutes and run the command again.
|
||||
When the new machine set is available, the `DESIRED` and `CURRENT` values match. If the machine set is not available, wait a few minutes and run the command again.
|
||||
|
||||
. After the new MachineSet is available, check status of the machine and the
|
||||
node that it references:
|
||||
. After the new machine set is available, check the status of the machine and the node that it references:
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -180,27 +174,22 @@ status:
|
||||
kind: AWSMachineProviderStatus
|
||||
----
|
||||
|
||||
. View the new node and confirm that the new node has the label that you
|
||||
specified:
|
||||
. View the new node and confirm that the new node has the label that you specified:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc get node <node_name> --show-labels
|
||||
----
|
||||
+
|
||||
Review the command output and confirm that `node-role.kubernetes.io/<your_label>`
|
||||
is in the `LABELS` list.
|
||||
Review the command output and confirm that `node-role.kubernetes.io/<your_label>` is in the `LABELS` list.
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
Any change to a MachineSet is not applied to existing machines owned by the MachineSet.
|
||||
For example, labels edited or added to an existing MachineSet are not propagated to existing machines and Nodes
|
||||
associated with the MachineSet.
|
||||
Any change to a machine set is not applied to existing machines owned by the machine set. For example, labels edited or added to an existing machine set are not propagated to existing machines and nodes associated with the machine set.
|
||||
====
|
||||
|
||||
.Next steps
|
||||
If you need MachineSets in other availability zones, repeat this
|
||||
process to create more MachineSets.
|
||||
If you need machine sets in other availability zones, repeat this process to create more machine sets.
|
||||
|
||||
ifeval::["{context}" == "creating-machineset-vsphere"]
|
||||
:!vsphere:
|
||||
|
||||
@@ -4,9 +4,9 @@
|
||||
// * post_installation_configuration/cluster-tasks.adoc
|
||||
|
||||
[id="machineset-delete-policy_{context}"]
|
||||
= The MachineSet deletion policy
|
||||
= The machine set deletion policy
|
||||
|
||||
`Random`, `Newest`, and `Oldest` are the three supported deletion options. The default is `Random`, meaning that random machines are chosen and deleted when scaling MachineSets down. The deletion policy can be set according to the use case by modifying the particular MachineSet:
|
||||
`Random`, `Newest`, and `Oldest` are the three supported deletion options. The default is `Random`, meaning that random machines are chosen and deleted when scaling machine sets down. The deletion policy can be set according to the use case by modifying the particular machine set:
|
||||
|
||||
[source,yaml]
|
||||
----
|
||||
@@ -19,10 +19,10 @@ Specific machines can also be prioritized for deletion by adding the annotation
|
||||
|
||||
[IMPORTANT]
|
||||
====
|
||||
By default, the {product-title} router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker MachineSet to `0` unless you first relocate the router pods.
|
||||
By default, the {product-title} router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to `0` unless you first relocate the router pods.
|
||||
====
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
Custom MachineSets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker MachineSets are scaling down. This prevents service disruption.
|
||||
Custom machine sets can be used for use cases requiring that services run on specific nodes and that those services are ignored by the controller when the worker machine sets are scaling down. This prevents service disruption.
|
||||
====
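Returning to the deletion policy itself, a minimal sketch of where it is set, assuming the field is `deletePolicy` in the machine set spec and that the rest of your machine set definition stays as it is:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset_name>
  namespace: openshift-machine-api
spec:
  deletePolicy: Oldest
----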
|
||||
@@ -5,14 +5,11 @@
|
||||
// * windows_containers/scheduling-windows-workloads.adoc
|
||||
|
||||
[id="machineset-manually-scaling_{context}"]
|
||||
= Scaling a MachineSet manually
|
||||
= Scaling a machine set manually
|
||||
|
||||
If you must add or remove an instance of a machine in a MachineSet, you can
|
||||
manually scale the MachineSet.
|
||||
If you must add or remove an instance of a machine in a machine set, you can manually scale the machine set.
|
||||
|
||||
This guidance is relevant to fully automated, installer provisioned
|
||||
infrastructure installations. Customized, user provisioned infrastructure
|
||||
installations does not have MachineSets.
|
||||
This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets.
|
||||
.Prerequisites
|
||||
|
||||
@@ -21,16 +18,16 @@ installations does not have MachineSets.
|
||||
|
||||
.Procedure
|
||||
|
||||
. View the MachineSets that are in the cluster:
|
||||
. View the machine sets that are in the cluster:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
$ oc get machinesets -n openshift-machine-api
|
||||
----
|
||||
+
|
||||
The MachineSets are listed in the form of `<clusterid>-worker-<aws-region-az>`.
|
||||
The machine sets are listed in the form of `<clusterid>-worker-<aws-region-az>`.
|
||||
|
||||
. Scale the MachineSet:
|
||||
. Scale the machine set:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -44,5 +41,4 @@ Or:
|
||||
$ oc edit machineset <machineset> -n openshift-machine-api
|
||||
----
|
||||
+
|
||||
You can scale the MachineSet up or down. It takes several minutes for the new
|
||||
machines to be available.
|
||||
You can scale the machine set up or down. It takes several minutes for the new machines to be available.
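+
For example, a typical invocation that scales to two replicas, with `<machineset>` as a placeholder for the machine set name:
+
[source,terminal]
----
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
----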
|
||||
@@ -15,26 +15,17 @@ ifeval::["{context}" == "creating-machineset-gcp"]
|
||||
endif::[]
|
||||
|
||||
[id="machineset-non-guaranteed-instance_{context}"]
|
||||
ifdef::aws[= MachineSets that deploy machines as Spot Instances]
|
||||
ifdef::azure[= MachineSets that deploy machines as Spot VMs]
|
||||
ifdef::gcp[= MachineSets that deploy machines as preemptible VM instances]
|
||||
ifdef::aws[= Machine sets that deploy machines as Spot Instances]
|
||||
ifdef::azure[= Machine sets that deploy machines as Spot VMs]
|
||||
ifdef::gcp[= Machine sets that deploy machines as preemptible VM instances]
|
||||
ifdef::aws[]
|
||||
You can save on costs by creating a MachineSet running on AWS that deploys machines as non-guaranteed Spot Instances.
|
||||
Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances.
|
||||
You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless,
|
||||
horizontally scalable workloads.
|
||||
You can save on costs by creating a machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
|
||||
endif::aws[]
|
||||
ifdef::azure[]
|
||||
You can save on costs by creating a MachineSet running on Azure that deploys machines as non-guaranteed Spot VMs.
|
||||
Spot VMs utilize unused Azure capacity and are less expensive than standard VMs.
|
||||
You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless,
|
||||
horizontally scalable workloads.
|
||||
You can save on costs by creating a machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
|
||||
endif::azure[]
|
||||
ifdef::gcp[]
|
||||
You can save on costs by creating a MachineSet running on GCP that deploys machines as non-guaranteed preemptible VM instances.
|
||||
Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances.
|
||||
You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless,
|
||||
horizontally scalable workloads.
|
||||
You can save on costs by creating a machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
|
||||
endif::gcp[]
|
||||
|
||||
[IMPORTANT]
|
||||
@@ -43,14 +34,11 @@ It is strongly recommended that control plane machines are not created on
|
||||
ifdef::aws[Spot Instances]
|
||||
ifdef::azure[Spot VMs]
|
||||
ifdef::gcp[preemptible VM instances]
|
||||
due to the increased likelihood of the instance being terminated. Manual intervention is
|
||||
required to replace a terminated control plane node.
|
||||
due to the increased likelihood of the instance being terminated. Manual intervention is required to replace a terminated control plane node.
|
||||
====
|
||||
|
||||
ifdef::aws[]
|
||||
AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to
|
||||
the user when an interruption occurs. {product-title} begins to remove the workloads
|
||||
from the affected instances when AWS issues the termination warning.
|
||||
AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. {product-title} begins to remove the workloads from the affected instances when AWS issues the termination warning.
|
||||
|
||||
Interruptions can occur when using Spot Instances for the following reasons:
|
||||
|
||||
@@ -58,14 +46,10 @@ Interruptions can occur when using Spot Instances for the following reasons:
|
||||
* The demand for Spot Instances increases
|
||||
* The supply of Spot Instances decreases
|
||||
|
||||
When AWS terminates an instance, a termination handler running on the Spot Instance
|
||||
node deletes the machine resource. To satisfy the MachineSet `replicas` quantity, the
|
||||
MachineSet creates a machine that requests a Spot Instance.
|
||||
When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the machine set `replicas` quantity, the machine set creates a machine that requests a Spot Instance.
|
||||
endif::aws[]
|
||||
ifdef::azure[]
|
||||
Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to
|
||||
the user when an interruption occurs. {product-title} begins to remove the workloads
|
||||
from the affected instances when Azure issues the termination warning.
|
||||
Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. {product-title} begins to remove the workloads from the affected instances when Azure issues the termination warning.
|
||||
|
||||
Interruptions can occur when using Spot VMs for the following reasons:
|
||||
|
||||
@@ -74,14 +58,10 @@ Interruptions can occur when using Spot VMs for the following reasons:
|
||||
* Azure needs capacity back
|
||||
|
||||
|
||||
When Azure terminates an instance, a termination handler running on the Spot VM
|
||||
node deletes the machine resource. To satisfy the MachineSet `replicas` quantity, the
|
||||
MachineSet creates a machine that requests a Spot VM.
|
||||
When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the machine set `replicas` quantity, the machine set creates a machine that requests a Spot VM.
|
||||
endif::azure[]
|
||||
ifdef::gcp[]
|
||||
GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds.
|
||||
{product-title} begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating
|
||||
system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a `TERMINATED` state by Compute Engine.
|
||||
GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. {product-title} begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a `TERMINATED` state by Compute Engine.
|
||||
|
||||
Interruptions can occur when using preemptible VM instances for the following reasons:
|
||||
|
||||
@@ -89,7 +69,5 @@ Interruptions can occur when using preemptible VM instances for the following re
|
||||
* The supply of preemptible VM instances decreases
|
||||
* The instance reaches the end of the allotted 24-hour period for preemptible VM instances
|
||||
|
||||
When GCP terminates an instance, a termination handler running on the preemptible VM instance
|
||||
node deletes the machine resource. To satisfy the MachineSet `replicas` quantity, the
|
||||
MachineSet creates a machine that requests a preemptible VM instance.
|
||||
When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the machine set `replicas` quantity, the machine set creates a machine that requests a preemptible VM instance.
|
||||
endif::gcp[]
|
||||
|
||||
@@ -5,15 +5,11 @@
|
||||
// * post_installation_configuration/cluster-tasks.adoc
|
||||
|
||||
[id="machineset-yaml-aws_{context}"]
|
||||
= Sample YAML for a MachineSet Custom Resource on AWS
|
||||
= Sample YAML for a machine set custom resource on AWS
|
||||
|
||||
This sample YAML defines a MachineSet that runs in the `us-east-1a`
|
||||
Amazon Web Services (AWS) zone and creates nodes that are labeled with
|
||||
`node-role.kubernetes.io/<role>: ""`
|
||||
This sample YAML defines a machine set that runs in the `us-east-1a` Amazon Web Services (AWS) zone and creates nodes that are labeled with `node-role.kubernetes.io/<role>: ""`
|
||||
|
||||
In this sample, `<infrastructureID>` is the infrastructure ID label that is
|
||||
based on the cluster ID that you set when you provisioned
|
||||
the cluster, and `<role>` is the node label to add.
|
||||
In this sample, `<infrastructureID>` is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and `<role>` is the node label to add.
|
||||
|
||||
[source,yaml]
|
||||
----
|
||||
@@ -77,10 +73,7 @@ spec:
|
||||
userDataSecret:
|
||||
name: worker-user-data
|
||||
----
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that
|
||||
you set when you provisioned the cluster. If you have the OpenShift CLI and `jq`
|
||||
package installed, you can obtain the infrastructure ID by running the following
|
||||
command:
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and `jq` package installed, you can obtain the infrastructure ID by running the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -5,15 +5,11 @@
|
||||
// * post_installation_configuration/cluster-tasks.adoc
|
||||
|
||||
[id="machineset-yaml-azure_{context}"]
|
||||
= Sample YAML for a MachineSet Custom Resource on Azure
|
||||
= Sample YAML for a machine set custom resource on Azure
|
||||
|
||||
This sample YAML defines a MachineSet that runs in the `1` Microsoft Azure zone
|
||||
in the `centralus` region and creates nodes that are labeled with
|
||||
`node-role.kubernetes.io/<role>: ""`
|
||||
This sample YAML defines a machine set that runs in the `1` Microsoft Azure zone in the `centralus` region and creates nodes that are labeled with `node-role.kubernetes.io/<role>: ""`
|
||||
|
||||
In this sample, `<infrastructureID>` is the infrastructure ID label that is
|
||||
based on the cluster ID that you set when you provisioned
|
||||
the cluster, and `<role>` is the node label to add.
|
||||
In this sample, `<infrastructureID>` is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and `<role>` is the node label to add.
|
||||
|
||||
[source,yaml]
|
||||
----
|
||||
@@ -82,10 +78,7 @@ spec:
|
||||
vnet: <infrastructureID>-vnet <1>
|
||||
zone: "1" <4>
|
||||
----
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that
|
||||
you set when you provisioned the cluster. If you have the OpenShift CLI and `jq`
|
||||
package installed, you can obtain the infrastructure ID by running the following
|
||||
command:
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and `jq` package installed, you can obtain the infrastructure ID by running the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -93,5 +86,4 @@ $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
|
||||
----
|
||||
<2> Specify the node label to add.
|
||||
<3> Specify the infrastructure ID, node label, and region.
|
||||
<4> Specify the zone within your region to place Machines on. Be sure that your
|
||||
region supports the zone that you specify.
|
||||
<4> Specify the zone within your region to place machines in. Be sure that your region supports the zone that you specify.
|
||||
@@ -5,14 +5,11 @@
|
||||
// * post_installation_configuration/cluster-tasks.adoc
|
||||
|
||||
[id="machineset-yaml-gcp_{context}"]
|
||||
= Sample YAML for a MachineSet Custom Resource on GCP
|
||||
= Sample YAML for a machine set custom resource on GCP
|
||||
|
||||
This sample YAML defines a MachineSet that runs in Google Cloud Platform (GCP)
|
||||
and creates nodes that are labeled with `node-role.kubernetes.io/<role>: ""`
|
||||
This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with `node-role.kubernetes.io/<role>: ""`
|
||||
|
||||
In this sample, `<infrastructureID>` is the infrastructure ID label that is
|
||||
based on the cluster ID that you set when you provisioned
|
||||
the cluster, and `<role>` is the node label to add.
|
||||
In this sample, `<infrastructureID>` is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and `<role>` is the node label to add.
|
||||
|
||||
[source,yaml]
|
||||
----
|
||||
@@ -74,10 +71,7 @@ spec:
|
||||
name: worker-user-data
|
||||
zone: us-central1-a
|
||||
----
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that
|
||||
you set when you provisioned the cluster. If you have the OpenShift CLI and `jq`
|
||||
package installed, you can obtain the infrastructure ID by running the following
|
||||
command:
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and `jq` package installed, you can obtain the infrastructure ID by running the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
|
||||
@@ -4,15 +4,11 @@
|
||||
// * machine_management/creating_machinesets/creating-machineset-osp.adoc
|
||||
|
||||
[id="machineset-yaml-osp_{context}"]
|
||||
= Sample YAML for a MachineSet Custom Resource on {rh-openstack}
|
||||
= Sample YAML for a machine set custom resource on {rh-openstack}
|
||||
|
||||
This sample YAML defines a MachineSet that runs on
|
||||
{rh-openstack-first} and creates nodes that are labeled with
|
||||
`node-role.openshift.io/<node_role>: ""`
|
||||
This sample YAML defines a machine set that runs on {rh-openstack-first} and creates nodes that are labeled with `node-role.openshift.io/<node_role>: ""`
|
||||
|
||||
In this sample, `infrastructure_ID` is the infrastructure ID label that is
|
||||
based on the cluster ID that you set when you provisioned
|
||||
the cluster, and `node_role` is the node label to add.
|
||||
In this sample, `infrastructure_ID` is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and `node_role` is the node label to add.
|
||||
|
||||
[source,yaml]
|
||||
----
|
||||
@@ -69,10 +65,7 @@ spec:
|
||||
name: <node_role>-user-data <2>
|
||||
availabilityZone: <optional_openstack_availability_zone>
|
||||
----
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that
|
||||
you set when you provisioned the cluster. If you have the OpenShift CLI and `jq`
|
||||
package installed, you can obtain the infrastructure ID by running the following
|
||||
command:
|
||||
<1> Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI and `jq` package installed, you can obtain the infrastructure ID by running the following command:
|
||||
+
|
||||
[source,terminal]
|
||||
----
|
||||
@@ -80,5 +73,4 @@ $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
|
||||
----
|
||||
<2> Specify the node label to add.
|
||||
<3> Specify the infrastructure ID and node label.
|
||||
<4> To set a server group policy for the MachineSet, enter the value that is returned from
|
||||
link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/command_line_interface_reference/server#server_group_create[creating a server group]. For most deployments, `anti-affinity` or `soft-anti-affinity` policies are recommended.
|
||||
<4> To set a server group policy for the machine set, enter the value that is returned from link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/command_line_interface_reference/server#server_group_create[creating a server group]. For most deployments, `anti-affinity` or `soft-anti-affinity` policies are recommended.
|
||||
|
||||
@@ -5,9 +5,9 @@
|
||||
// * post_installation_configuration/cluster-tasks.adoc
|
||||
|
||||
[id="machineset-yaml-vsphere_{context}"]
|
||||
= Sample YAML for a MachineSet Custom Resource on vSphere
|
||||
= Sample YAML for a machine set custom resource on vSphere
|
||||
|
||||
This sample YAML defines a MachineSet that runs on VMware vSphere and creates nodes that are labeled with `node-role.kubernetes.io/<role>: ""`.
|
||||
This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with `node-role.kubernetes.io/<role>: ""`.
|
||||
|
||||
In this sample, `<infrastructure_id>` is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and `<role>` is the node label to add.
|
||||
|
||||
@@ -74,10 +74,10 @@ $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
|
||||
----
|
||||
<2> Specify the infrastructure ID and node label.
|
||||
<3> Specify the node label to add.
|
||||
<4> Specify the vSphere VM network to deploy the MachineSet to.
|
||||
<4> Specify the vSphere VM network to deploy the machine set to.
|
||||
<5> Specify the vSphere VM template to use, such as `user-5ddjd-rhcos`.
|
||||
<6> Specify the vCenter Datacenter to deploy the MachineSet on.
|
||||
<7> Specify the vCenter Datastore to deploy the MachineSet on.
|
||||
<6> Specify the vCenter Datacenter to deploy the machine set on.
|
||||
<7> Specify the vCenter Datastore to deploy the machine set on.
|
||||
<8> Specify the path to the vSphere VM folder in vCenter, such as `/dc1/vm/user-inst-5ddjd`.
|
||||
<9> Specify the vSphere resource pool for your VMs.
|
||||
<10> Specify the vCenter server IP or fully qualified domain name.
|
||||
|
||||
@@ -5,38 +5,27 @@
|
||||
[id="rhel-adding-more-nodes_{context}"]
|
||||
= Adding more RHEL compute machines to your cluster
|
||||
|
||||
You can add more compute machines that use Red Hat Enterprise Linux as the operating
|
||||
system to an {product-title} {product-version} cluster.
|
||||
You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an {product-title} {product-version} cluster.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* Your {product-title} cluster already contains RHEL compute nodes.
|
||||
* The `hosts` file
|
||||
that you used to add the first RHEL compute machines to your cluster is on the
|
||||
machine that you use the run the playbook.
|
||||
* The machine that you run the playbook on must be able to access all of the
|
||||
RHEL hosts. You can use any method that your company allows, including a
|
||||
bastion with an SSH proxy or a VPN.
|
||||
* The `kubeconfig` file for the cluster and the installation program that you
|
||||
used to install the cluster are on the machine that you use the run the playbook.
|
||||
* The `hosts` file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook.
* The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
|
||||
* The `kubeconfig` file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook.
* You must prepare the RHEL hosts for installation.
|
||||
* Configure a user on the machine that you run the playbook on that has SSH
|
||||
access to all of the RHEL hosts.
|
||||
* If you use SSH key-based authentication, you must manage the key with an
|
||||
SSH agent.
|
||||
* Install the OpenShift CLI (`oc`)
|
||||
on the machine that you run the playbook on.
|
||||
* Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
|
||||
* If you use SSH key-based authentication, you must manage the key with an SSH agent.
|
||||
* Install the OpenShift CLI (`oc`) on the machine that you run the playbook on.
|
||||
|
||||
|
||||
.Procedure
|
||||
|
||||
. Open the Ansible inventory file at `/<path>/inventory/hosts` that defines your
|
||||
compute machine hosts and required variables.
|
||||
. Open the Ansible inventory file at `/<path>/inventory/hosts` that defines your compute machine hosts and required variables.
|
||||
|
||||
. Rename the `[new_workers]` section of the file to `[workers]`.
|
||||
|
||||
. Add a `[new_workers]` section to the file and define the fully-qualified
|
||||
domain names for each new host. The file resembles the following example:
|
||||
. Add a `[new_workers]` section to the file and define the fully-qualified domain names for each new host. The file resembles the following example:
|
||||
+
|
||||
----
|
||||
[all:vars]
|
||||
@@ -54,9 +43,7 @@ mycluster-rhel7-2.example.com
|
||||
mycluster-rhel7-3.example.com
|
||||
----
|
||||
+
|
||||
In this example, the `mycluster-rhel7-0.example.com` and
|
||||
`mycluster-rhel7-1.example.com` machines are in the cluster and you add the
|
||||
`mycluster-rhel7-2.example.com` and `mycluster-rhel7-3.example.com` machines.
|
||||
In this example, the `mycluster-rhel7-0.example.com` and `mycluster-rhel7-1.example.com` machines are in the cluster and you add the `mycluster-rhel7-2.example.com` and `mycluster-rhel7-3.example.com` machines.
|
||||
|
||||
. Navigate to the Ansible playbook directory:
|
||||
+
|
||||
@@ -71,5 +58,4 @@ $ cd /usr/share/ansible/openshift-ansible
|
||||
----
|
||||
$ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml <1>
|
||||
----
|
||||
<1> For `<path>`, specify the path to the Ansible inventory file
|
||||
that you created.
|
||||
<1> For `<path>`, specify the path to the Ansible inventory file that you created.
|
||||
|
||||
@@ -6,22 +6,18 @@
|
||||
[id="rhel-adding-node_{context}"]
|
||||
= Adding a RHEL compute machine to your cluster
|
||||
|
||||
You can add compute machines that use Red Hat Enterprise Linux as the operating
|
||||
system to an {product-title} {product-version} cluster.
|
||||
You can add compute machines that use Red Hat Enterprise Linux as the operating system to an {product-title} {product-version} cluster.
|
||||
|
||||
.Prerequisites
|
||||
|
||||
* You installed the required packages and performed the necessary configuration
|
||||
on the machine that you run the playbook on.
|
||||
* You installed the required packages and performed the necessary configuration on the machine that you run the playbook on.
|
||||
* You prepared the RHEL hosts for installation.
|
||||
|
||||
.Procedure
|
||||
|
||||
Perform the following steps on the machine that you prepared to run the
|
||||
playbook:
|
||||
Perform the following steps on the machine that you prepared to run the playbook:
|
||||
|
||||
. Create an Ansible inventory file that is named `/<path>/inventory/hosts` that
|
||||
defines your compute machine hosts and required variables:
|
||||
. Create an Ansible inventory file that is named `/<path>/inventory/hosts` that defines your compute machine hosts and required variables:
|
||||
+
|
||||
----
|
||||
[all:vars]
|
||||
@@ -34,15 +30,10 @@ openshift_kubeconfig_path="~/.kube/config" <3>
|
||||
mycluster-rhel7-0.example.com
|
||||
mycluster-rhel7-1.example.com
|
||||
----
|
||||
<1> Specify the user name that runs the Ansible tasks on the remote compute
|
||||
machines.
|
||||
<2> If you do not specify `root` for the `ansible_user`, you must set `ansible_become`
|
||||
to `True` and assign the user sudo permissions.
|
||||
<1> Specify the user name that runs the Ansible tasks on the remote compute machines.
|
||||
<2> If you do not specify `root` for the `ansible_user`, you must set `ansible_become` to `True` and assign the user sudo permissions.
|
||||
<3> Specify the path and file name of the `kubeconfig` file for your cluster.
|
||||
<4> List each RHEL machine to add to your cluster. You must provide the
|
||||
fully-qualified domain name for each host. This name is the host name that the
|
||||
cluster uses to access the machine, so set the correct public or private name
|
||||
to access the machine.
|
||||
<4> List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the host name that the cluster uses to access the machine, so set the correct public or private name to access the machine.
|
||||
|
||||
. Navigate to the Ansible playbook directory:
|
||||
+
|
||||
@@ -57,5 +48,4 @@ $ cd /usr/share/ansible/openshift-ansible
|
||||
----
|
||||
$ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml <1>
|
||||
----
|
||||
<1> For `<path>`, specify the path to the Ansible inventory file
|
||||
that you created.
|
||||
<1> For `<path>`, specify the path to the Ansible inventory file that you created.
|
||||
|
||||
@@ -7,28 +7,22 @@
|
||||
[id="rhel-ansible-parameters_{context}"]
|
||||
= Required parameters for the Ansible hosts file
|
||||
|
||||
You must define the following parameters in the Ansible hosts file before you
|
||||
add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
|
||||
You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.
|
||||
|
||||
[cols="1,2,2",options="header"]
|
||||
|===
|
||||
|Parameter |Description |Values
|
||||
|`ansible_user`
|
||||
|The SSH user that allows SSH-based authentication without requiring a password.
|
||||
If you use SSH key-based authentication, then you must manage the key with an
|
||||
SSH agent.
|
||||
|The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent.
|
||||
|A user name on the system. The default value is `root`.
|
||||
|
||||
|`ansible_become`
|
||||
|If the values of `ansible_user` is not root, you must set `ansible_become`
|
||||
to `True`, and the user that you specify as the `ansible_user` must be
|
||||
configured for passwordless sudo access.
|
||||
|If the value of `ansible_user` is not root, you must set `ansible_become` to `True`, and the user that you specify as the `ansible_user` must be configured for passwordless sudo access.
|`True`. If the value is not `True`, do not specify and define this parameter.
|
||||
|
||||
|`openshift_kubeconfig_path`
|
||||
|Specifies a path and file name to a local directory that contains the `kubeconfig` file for
|
||||
your cluster.
|
||||
|Specifies a path and file name to a local directory that contains the `kubeconfig` file for your cluster.
|
||||
|The path and name of the configuration file.
|
||||
|
||||
|===
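Putting these parameters together, a minimal sketch of a hosts file, reusing the example host names used elsewhere in this document:

----
[all:vars]
ansible_user=root
#ansible_become=True
openshift_kubeconfig_path="~/.kube/config"

[new_workers]
mycluster-rhel7-0.example.com
mycluster-rhel7-1.example.com
----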
|
||||
@@ -8,34 +8,22 @@
|
||||
[id="rhel-compute-requirements_{context}"]
|
||||
= System requirements for RHEL compute nodes
|
||||
|
||||
The Red Hat Enterprise Linux (RHEL) compute machine hosts, which are also known as worker machine hosts, in your
|
||||
{product-title} environment must meet the following minimum hardware
|
||||
specifications and system-level requirements.
|
||||
The Red Hat Enterprise Linux (RHEL) compute machine hosts, which are also known as worker machine hosts, in your {product-title} environment must meet the following minimum hardware specifications and system-level requirements.
|
||||
|
||||
* You must have an active {product-title} subscription on your Red Hat
|
||||
account. If you do not, contact your sales representative for more information.
|
||||
* You must have an active {product-title} subscription on your Red Hat account. If you do not, contact your sales representative for more information.
|
||||
|
||||
* Production environments must provide compute machines to support your expected
|
||||
workloads. As a cluster administrator, you must calculate
|
||||
the expected workload and add about 10 percent for overhead. For production
|
||||
environments, allocate enough resources so that a node host failure does not
|
||||
affect your maximum capacity.
|
||||
* Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
|
||||
* Each system must meet the following hardware requirements:
|
||||
** Physical or virtual system, or an instance running on a public or private IaaS.
|
||||
ifdef::openshift-origin[]
|
||||
** Base OS: Fedora 21, CentOS 7.4, or
|
||||
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/installation_guide/index[RHEL 7.7-7.8]
|
||||
with "Minimal" installation option.
|
||||
** Base OS: Fedora 21, CentOS 7.4, or link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/installation_guide/index[RHEL 7.7-7.8] with "Minimal" installation option.
|
||||
endif::[]
|
||||
ifdef::openshift-enterprise,openshift-webscale[]
|
||||
** Base OS:
|
||||
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/installation_guide/index[RHEL 7.7-7.8]
|
||||
with "Minimal" installation option.
|
||||
** Base OS: link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/installation_guide/index[RHEL 7.7-7.8] with "Minimal" installation option.
|
||||
+
|
||||
[IMPORTANT]
|
||||
====
|
||||
Only RHEL 7.7-7.8 is supported in {product-title} {product-version}. You must not
|
||||
upgrade your compute machines to RHEL 8.
|
||||
Only RHEL 7.7-7.8 is supported in {product-title} {product-version}. You must not upgrade your compute machines to RHEL 8.
|
||||
====
|
||||
** If you deployed {product-title} in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/chap-federal_standards_and_regulations#sec-Enabling-FIPS-Mode[Enabling FIPS Mode] in the RHEL 7 documentation.
|
||||
endif::[]
|
||||
@@ -44,28 +32,12 @@ endif::[]
|
||||
** Minimum 8 GB RAM.
|
||||
** Minimum 15 GB hard disk space for the file system containing `/var/`.
|
||||
** Minimum 1 GB hard disk space for the file system containing `/usr/local/bin/`.
|
||||
** Minimum 1 GB hard disk space for the file system containing the system's
|
||||
temporary directory. The system’s temporary directory is determined according to
|
||||
the rules defined in the tempfile module in Python’s standard library.
|
||||
* Each system must meet any additional requirements for your system provider. For
|
||||
example, if you installed your cluster on VMware vSphere, your disks must
|
||||
be configured according to its
|
||||
link:https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/index.html[storage guidelines]
|
||||
and the `disk.enableUUID=true` attribute must be set.
|
||||
** Minimum 1 GB hard disk space for the file system containing the system's temporary directory. The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
|
||||
* Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its link:https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/index.html[storage guidelines] and the `disk.enableUUID=true` attribute must be set.
|
||||
|
||||
* Each system must be able to access the cluster's API
|
||||
endpoints by using DNS-resolvable host names. Any network security access control that is in place must allow the system access to the
|
||||
cluster's API service endpoints.
|
||||
* Each system must be able to access the cluster's API endpoints by using DNS-resolvable host names. Any network security access control that is in place must allow the system access to the cluster's API service endpoints.
|
||||
|
||||
[id="csr-management-rhel_{context}"]
|
||||
== Certificate signing requests management
|
||||
|
||||
Because your cluster has limited access to automatic machine management when you
|
||||
use infrastructure that you provision, you must provide a mechanism for approving
|
||||
cluster certificate signing requests (CSRs) after installation. The
|
||||
`kube-controller-manager` only approves the kubelet client CSRs. The
|
||||
`machine-approver` cannot guarantee the validity of a serving certificate
|
||||
that is requested by using kubelet credentials because it cannot confirm that
|
||||
the correct machine issued the request. You must determine and implement a
|
||||
method of verifying the validity of the kubelet serving certificate requests
|
||||
and approving them.
|
||||
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The `kube-controller-manager` only approves the kubelet client CSRs. The `machine-approver` cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
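For instance, one common way to review and approve pending requests from the CLI, assuming you are logged in with `cluster-admin` credentials:

[source,terminal]
----
$ oc get csr
$ oc adm certificate approve <csr_name>
----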
|
||||
@@ -7,10 +7,7 @@
|
||||
[id="rhel-preparing-node_{context}"]
|
||||
= Preparing a RHEL compute node
|
||||
|
||||
Before you add a Red Hat Enterprise Linux (RHEL) machine to your {product-title}
|
||||
cluster, you must register each host with Red Hat
|
||||
Subscription Manager (RHSM), attach an active {product-title} subscription, and
|
||||
enable the required repositories.
|
||||
Before you add a Red Hat Enterprise Linux (RHEL) machine to your {product-title} cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active {product-title} subscription, and enable the required repositories.
|
||||
|
||||
. On each host, register with RHSM:
|
||||
+

@@ -6,38 +6,27 @@
[id="rhel-preparing-playbook-machine_{context}"]
= Preparing the machine to run the playbook

Before you can add compute machines that use Red Hat Enterprise Linux as the
operating system to an {product-title} {product-version} cluster, you must
prepare a machine to run the playbook from. This machine is not part of the
cluster but must be able to access it.
Before you can add compute machines that use Red Hat Enterprise Linux as the operating system to an {product-title} {product-version} cluster, you must prepare a machine to run the playbook from. This machine is not part of the cluster but must be able to access it.

.Prerequisites

* Install the OpenShift CLI (`oc`)
on the machine that you run the playbook on.
* Install the OpenShift CLI (`oc`) on the machine that you run the playbook on.
* Log in as a user with `cluster-admin` permission.

.Procedure

. Ensure that the `kubeconfig` file for the cluster and the installation program
that you used to install the cluster are on the machine. One way to accomplish
this is to use the same machine that you used to install the cluster.
. Ensure that the `kubeconfig` file for the cluster and the installation program that you used to install the cluster are on the machine. One way to accomplish this is to use the same machine that you used to install the cluster.
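+
For example, if you copied the installation directory to `~/clusterconfigs`, which is a placeholder path, you can point `oc` at the `kubeconfig` file and confirm access to the cluster:
+
[source,terminal]
----
$ export KUBECONFIG=~/clusterconfigs/auth/kubeconfig
$ oc whoami
----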

. Configure the machine to access all of the RHEL hosts that you plan to use as
compute machines. You can use any method that your company allows, including a
bastion with an SSH proxy or a VPN.
. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
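+
If you use a bastion host, one option, shown here as a sketch with placeholder host names, is an SSH client configuration that proxies connections to the RHEL hosts through the bastion:
+
[source,text]
----
# ~/.ssh/config
Host rhel-compute-*
    ProxyJump bastion.example.com
----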

. Configure a user on the machine that you run the playbook on that has SSH
access to all of the RHEL hosts.
. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.
+
[IMPORTANT]
====
If you use SSH key-based authentication, you must manage the key with an
SSH agent.
If you use SSH key-based authentication, you must manage the key with an SSH agent.
====
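+
For example, you might start an agent in your shell session and add the key that has access to the RHEL hosts, where the key path is a placeholder:
+
[source,terminal]
----
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa
----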

. If you have not already done so, register the machine with RHSM and attach
a pool with an `OpenShift` subscription to it:
. If you have not already done so, register the machine with RHSM and attach a pool with an `OpenShift` subscription to it:
.. Register the machine with RHSM:
+
[source,terminal]
@@ -59,8 +48,7 @@ a pool with an `OpenShift` subscription to it:
# subscription-manager list --available --matches '*OpenShift*'
----

.. In the output for the previous command, find the pool ID for an
{product-title} subscription and attach it:
.. In the output for the previous command, find the pool ID for an {product-title} subscription and attach it:
+
[source,terminal]
----
@@ -85,9 +73,4 @@ a pool with an `OpenShift` subscription to it:
# yum install openshift-ansible openshift-clients jq
----
+
The `openshift-ansible` package provides installation program utilities and
pulls in other
packages that you require to add a RHEL compute node to your cluster, such as
Ansible, playbooks, and related configuration files. The `openshift-clients`
provides the `oc` CLI, and the `jq` package improves the display of JSON output
on your command line.
The `openshift-ansible` package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The `openshift-clients` package provides the `oc` CLI, and the `jq` package improves the display of JSON output on your command line.
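+
For example, you can pipe `oc` JSON output through `jq` to pull out individual fields; this sketch lists the node names in the cluster:
+
[source,terminal]
----
$ oc get nodes -o json | jq -r '.items[].metadata.name'
----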

@@ -6,8 +6,7 @@
[id="rhel-removing-rhcos_{context}"]
= Optional: Removing RHCOS compute machines from a cluster

After you add the Red Hat Enterprise Linux (RHEL) compute machines to your
cluster, you can optionally remove the {op-system-first} compute machines to free up resources.
After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the {op-system-first} compute machines to free up resources.

.Prerequisites

@@ -15,8 +14,7 @@ cluster, you can optionally remove the {op-system-first} compute machines to fre

.Procedure

. View the list of machines and record the node names of the {op-system} compute
machines:
. View the list of machines and record the node names of the {op-system} compute machines:
+
[source,terminal]
----
@@ -55,6 +53,4 @@ $ oc delete nodes <node_name> <1>
$ oc get nodes -o wide
----

. Remove the {op-system} machines from the load balancer for your cluster's compute
machines. You can delete the virtual machines or reimage the physical hardware
for the {op-system} compute machines.
. Remove the {op-system} machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the {op-system} compute machines.
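+
The exact steps depend on your load balancer. As a sketch, if the compute machines are fronted by an HAProxy instance, which is an assumption, you might delete the `server` lines that reference the {op-system} machines and reload the service; the node name and configuration path are placeholders:
+
[source,terminal]
----
# sed -i '/<rhcos_node_name>/d' /etc/haproxy/haproxy.cfg
# systemctl reload haproxy
----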