Mirror of https://github.com/openshift/openshift-docs.git (synced 2026-02-05 03:47:04 +01:00)

Commit af2107520f (parent eb39dc899a), committed by openshift-cherrypick-robot

OSDOCS-17046: MACH-6 CQA CPMSO config YAML ref
@@ -4,13 +4,18 @@

 :_mod-docs-content-type: REFERENCE
 [id="cpmso-yaml-failure-domain-aws_{context}"]
-= Sample AWS failure domain configuration
+= Sample {aws-short} failure domain configuration

-The control plane machine set concept of a failure domain is analogous to existing AWS concept of an link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones[_Availability Zone (AZ)_]. The `ControlPlaneMachineSet` CR spreads control plane machines across multiple failure domains when possible.
-When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use.
+[role="_abstract"]
+To prevent downtime for your application due to the failure of a single {aws-first} region, you can configure failure domains in the control plane machine set.
+To use failure domains, you configure appropriate values in the `failureDomains` section of the `ControlPlaneMachineSet` custom resource (CR).
+
+The control plane machine set concept of a failure domain is analogous to the {aws-short} concept of an link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones[_Availability Zone (AZ)_].
+The `ControlPlaneMachineSet` CR spreads control plane machines across more than one failure domain when possible.
+
+When configuring {aws-short} failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use.

-.Sample AWS failure domain values
+.Sample {aws-short} failure domain values
 [source,yaml]
 ----
 apiVersion: machine.openshift.io/v1
@@ -26,28 +31,41 @@ spec:
       failureDomains:
         aws:
         - placement:
-            availabilityZone: <aws_zone_a> <1>
-          subnet: <2>
+            availabilityZone: <aws_zone_a>
+          subnet:
             filters:
             - name: tag:Name
               values:
-              - <cluster_id>-private-<aws_zone_a> <3>
-            type: Filters <4>
+              - <cluster_id>-subnet-private-<aws_zone_a>
+            type: Filters
         - placement:
-            availabilityZone: <aws_zone_b> <5>
+            availabilityZone: <aws_zone_b>
           subnet:
             filters:
             - name: tag:Name
               values:
-              - <cluster_id>-private-<aws_zone_b> <6>
+              - <cluster_id>-subnet-private-<aws_zone_b>
             type: Filters
-        platform: AWS <7>
+        platform: AWS
 # ...
 ----
-<1> Specifies an AWS availability zone for the first failure domain.
-<2> Specifies a subnet configuration. In this example, the subnet type is `Filters`, so there is a `filters` stanza.
-<3> Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone.
-<4> Specifies the subnet type. The allowed values are: `ARN`, `Filters` and `ID`. The default value is `Filters`.
-<5> Specifies the subnet name for an additional failure domain, using the infrastructure ID and the AWS availability zone.
-<6> Specifies the cluster's infrastructure ID and the AWS availability zone for the additional failure domain.
-<7> Specifies the cloud provider platform name. Do not change this value.
+where:
+
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws.placement.availabilityZone: <aws_zone_a>`::
+Specifies an {aws-short} availability zone for the first failure domain.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws.subnet`::
+Specifies a subnet configuration.
+In this example, the subnet type is `Filters`, so there is a `filters` stanza.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws.subnet.filters.values: <cluster_id>-subnet-private-<aws_zone_a>`::
+Specifies the subnet name for the first failure domain, using the infrastructure ID and the {aws-short} availability zone.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws.subnet.type`::
+Specifies the subnet type.
+The following values are valid: `ARN`, `Filters`, and `ID`.
+The default value is `Filters`.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws.placement.availabilityZone: <aws_zone_b>`::
+Specifies an {aws-short} availability zone for an additional failure domain.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.aws.subnet.filters.values: <cluster_id>-subnet-private-<aws_zone_b>`::
+Specifies the subnet name for the additional failure domain, using the infrastructure ID and the {aws-short} availability zone.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.platform`::
+Specifies the cloud provider platform name.
+Do not change this value.
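The `Filters` subnet type shown above looks the subnet up by tag. The other allowed types reference the subnet directly. A minimal sketch of the `ID` variant follows; only `type: ID` is confirmed by the reference above, and the `id` key and `<subnet_id>` placeholder are assumptions for illustration:

```yaml
# Hypothetical sketch: referencing a subnet by ID instead of by tag filters.
# The `id` key name and placeholder value are assumptions, not taken from
# the sample above.
        aws:
        - placement:
            availabilityZone: <aws_zone_a>
          subnet:
            id: <subnet_id>
            type: ID
```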
@@ -7,9 +7,11 @@
 = Sample {azure-short} failure domain configuration

 [role="_abstract"]
-You can configure your {azure-first} control plane machine set to spread control plane machines across more than one failure domain when possible.
+To prevent downtime for your application due to the failure of a single {azure-first} region, you can configure failure domains in the control plane machine set.
+To use failure domains, you configure appropriate values in the `failureDomains` section of the `ControlPlaneMachineSet` custom resource (CR).

-The control plane machine set concept of a failure domain is analogous to existing {azure-short} concept of an link:https://learn.microsoft.com/en-us/azure/azure-web-pubsub/concept-availability-zones[_Azure availability zone_].
+The control plane machine set concept of a failure domain is analogous to the {azure-short} concept of an link:https://learn.microsoft.com/en-us/azure/azure-web-pubsub/concept-availability-zones[_Azure availability zone_].
+The `ControlPlaneMachineSet` CR spreads control plane machines across more than one failure domain when possible.

 When configuring {azure-short} failure domains in the control plane machine set, you must specify the availability zone name.
 An {azure-short} cluster can use the following configurations:
@@ -6,7 +6,12 @@
 [id="cpmso-yaml-failure-domain-gcp_{context}"]
 = Sample {gcp-short} failure domain configuration

-The control plane machine set concept of a failure domain is analogous to the existing {gcp-short} concept of a link:https://cloud.google.com/compute/docs/regions-zones[_zone_]. The `ControlPlaneMachineSet` CR spreads control plane machines across multiple failure domains when possible.
+[role="_abstract"]
+To prevent downtime for your application due to the failure of a single {gcp-first} region, you can configure failure domains in the control plane machine set.
+To use failure domains, you configure appropriate values in the `failureDomains` section of the `ControlPlaneMachineSet` custom resource (CR).
+
+The control plane machine set concept of a failure domain is analogous to the existing {gcp-short} concept of a link:https://cloud.google.com/compute/docs/regions-zones[_zone_].
+The `ControlPlaneMachineSet` CR spreads control plane machines across more than one failure domain when possible.

 When configuring {gcp-short} failure domains in the control plane machine set, you must specify the zone name to use.

@@ -25,13 +30,17 @@ spec:
     machines_v1beta1_machine_openshift_io:
       failureDomains:
         gcp:
-        - zone: <gcp_zone_a> <1>
-        - zone: <gcp_zone_b> <2>
+        - zone: <gcp_zone_a>
+        - zone: <gcp_zone_b>
         - zone: <gcp_zone_c>
         - zone: <gcp_zone_d>
-        platform: GCP <3>
+        platform: GCP
 # ...
 ----
-<1> Specifies a {gcp-short} zone for the first failure domain.
-<2> Specifies an additional failure domain. Further failure domains are added the same way.
-<3> Specifies the cloud provider platform name. Do not change this value.
+where:
+
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.gcp.zone`::
+Each instance of `zone` specifies a {gcp-short} zone for a failure domain.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.platform`::
+Specifies the cloud provider platform name.
+Do not change this value.
@@ -6,10 +6,13 @@
 [id="cpmso-yaml-failure-domain-openstack_{context}"]
 = Sample {rh-openstack} failure domain configuration

-// TODO: Replace that link.
-The control plane machine set concept of a failure domain is analogous to the existing {rh-openstack-first} concept of an link:https://docs.openstack.org/nova/latest/admin/availability-zones.html[availability zone]. The `ControlPlaneMachineSet` CR spreads control plane machines across multiple failure domains when possible.
+[role="_abstract"]
+To prevent downtime for your application due to the failure of a single {rh-openstack-first} region, you can configure failure domains in the control plane machine set.
+To use failure domains, you configure appropriate values in the `failureDomains` section of the `ControlPlaneMachineSet` custom resource (CR).

-The following example demonstrates the use of multiple Nova availability zones as well as Cinder availability zones.
+// TODO: Replace that link.
+The control plane machine set concept of a failure domain is analogous to the existing {rh-openstack} concept of an link:https://docs.openstack.org/nova/latest/admin/availability-zones.html[availability zone].
+The `ControlPlaneMachineSet` CR spreads control plane machines across more than one failure domain when possible.

 .Sample OpenStack failure domain values
 [source,yaml]
@@ -25,7 +28,6 @@ spec:
 # ...
     machines_v1beta1_machine_openshift_io:
       failureDomains:
-        platform: OpenStack
         openstack:
         - availabilityZone: nova-az0
          rootVolume:
@@ -36,5 +38,14 @@ spec:
         - availabilityZone: nova-az2
           rootVolume:
             availabilityZone: cinder-az2
+        platform: OpenStack
 # ...
 ----
+where:
+
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.openstack`::
+Specifies the availability zones for the failure domains.
+This example demonstrates the use of more than one Nova availability zone and corresponding Cinder availability zones.
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.platform`::
+Specifies the cloud provider platform name.
+Do not change this value.
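The {rh-openstack} sample is shown only in part by the hunks above (the first and last entries). A minimal self-contained sketch of the same pattern follows: each Nova availability zone pairs the machine with a root volume in a corresponding Cinder availability zone. The `cinder-az0` name is an assumption for illustration; the diff confirms only `nova-az0`, `nova-az2`, and `cinder-az2`:

```yaml
# Illustrative sketch, not the full sample from the diff: two failure domains,
# each pairing a Nova compute availability zone with a Cinder storage
# availability zone for the root volume.
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        openstack:
        - availabilityZone: nova-az0
          rootVolume:
            availabilityZone: cinder-az0   # assumed name, for illustration
        - availabilityZone: nova-az2
          rootVolume:
            availabilityZone: cinder-az2
        platform: OpenStack
```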
@@ -6,16 +6,20 @@
 [id="cpmso-yaml-failure-domain-vsphere_{context}"]
 = Sample {vmw-full} failure domain configuration

-On {vmw-full} infrastructure, the cluster-wide infrastructure Custom Resource Definition (CRD), `infrastructures.config.openshift.io`, defines failure domains for your cluster.
-The `providerSpec` in the `ControlPlaneMachineSet` custom resource (CR) specifies names for failure domains that the control plane machine set uses to ensure control plane nodes are deployed to the appropriate failure domain.
-A failure domain is an infrastructure resource made up of a control plane machine set, a vCenter data center, vCenter datastore, and a network.
+[role="_abstract"]
+To prevent downtime for your application due to the failure of a single {vmw-first} region, you can configure failure domains in the control plane machine set.
+To use failure domains, you configure appropriate values in the `failureDomains` section of the `ControlPlaneMachineSet` custom resource (CR).

-By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on separate clusters or data centers.
-A control plane machine set also balances control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure.
+On {vmw-short} infrastructure, the cluster-wide infrastructure custom resource definition (CRD), `infrastructures.config.openshift.io`, defines failure domains for your cluster.
+A failure domain is an infrastructure resource made up of a control plane machine set, a vCenter data center, vCenter datastore, and a network.
+The `providerSpec` in the `ControlPlaneMachineSet` custom resource (CR) specifies names for failure domains that the control plane machine set uses to ensure control plane nodes deploy on the appropriate failure domain.
+
+By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on separate clusters or data centers.
+A control plane machine set also balances control plane machines across defined failure domains to improve fault tolerance capabilities for your infrastructure.

 [NOTE]
 ====
-If you modify the `ProviderSpec` configuration in the `ControlPlaneMachineSet` CR, the control plane machine set updates all control plane machines deployed on the primary infrastructure and each failure domain infrastructure.
+If you change the `ProviderSpec` configuration in the `ControlPlaneMachineSet` CR, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.
 ====

 .Sample {vmw-full} failure domain values
@@ -31,19 +35,21 @@ spec:
   template:
 # ...
     machines_v1beta1_machine_openshift_io:
-      failureDomains: # <1>
-        platform: VSphere
-        vsphere: # <2>
+      failureDomains:
+        vsphere:
         - name: <failure_domain_name_1>
         - name: <failure_domain_name_2>
+        platform: VSphere
 # ...
 ----
-<1> Specifies the vCenter location for {product-title} cluster nodes.
-<2> Specifies failure domains by name for the control plane machine set.
+where:
+
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.vsphere.name`::
+Each instance of `name` specifies a failure domain.
++
 [IMPORTANT]
 ====
-Each `name` field value in this section must match the corresponding value in the `failureDomains.name` field of the cluster-wide infrastructure CRD.
+Each `name` field value in the stanza must match the corresponding value in the `failureDomains.name` field of the cluster-wide infrastructure CRD.
 You can find the value of the `failureDomains.name` field by running the following command:

 [source,terminal]
@@ -53,5 +59,9 @@ $ oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureD

 The `name` field is the only supported failure domain field that you can specify in the `ControlPlaneMachineSet` CR.
 ====
-
-For an example of a cluster-wide infrastructure CRD that defines resources for each failure domain, see "Specifying multiple regions and zones for your cluster on {vmw-short}."
++
+For an example of a cluster-wide infrastructure CRD that defines resources for each failure domain, see "Specifying multiple regions and zones for your cluster on {vmw-short}."
+
+`spec.template.machines_v1beta1_machine_openshift_io.failureDomains.platform`::
+Specifies the cloud provider platform name.
+Do not change this value.
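To make the name-matching requirement concrete, the following sketch shows both sides of the match. The `jsonpath` in the command above confirms the `spec.platformSpec.vsphere.failureDomains` path on the infrastructure CRD side; the rest of that fragment's shape is an assumption, and the placeholder name mirrors the sample above:

```yaml
# Sketch only: the name must be identical in both resources.
# Side 1 - cluster-wide infrastructure CRD (infrastructures.config.openshift.io):
spec:
  platformSpec:
    vsphere:
      failureDomains:
      - name: <failure_domain_name_1>   # defined here, with its vCenter resources
---
# Side 2 - ControlPlaneMachineSet CR: references the failure domain by name only.
spec:
  template:
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        vsphere:
        - name: <failure_domain_name_1>   # must match failureDomains.name above
        platform: VSphere
```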
@@ -4,16 +4,18 @@

 :_mod-docs-content-type: REFERENCE
 [id="cpmso-yaml-provider-spec-aws_{context}"]
-= Sample AWS provider specification
+= Sample {aws-short} provider specification

-When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates. You can omit any field that is set in the failure domain section of the CR.
+[role="_abstract"]
+You can update your control plane machines to reflect changes in your underlying infrastructure by editing values in the control plane machine set provider specification.

-In the following example, `<cluster_id>` is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
+The following example YAML illustrates a valid configuration for an {aws-first} cluster.

-[source,terminal]
-----
-$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
-----
+include::snippets/cpmso-new-providerspec-match-install.adoc[]
+
+You can omit any field that has a value set in the failure domain section of the CR.
+
+include::snippets/cluster-id-explanation-oc-get.adoc[]

 .Sample AWS `providerSpec` values
 [source,yaml]
@@ -31,10 +33,10 @@ spec:
       providerSpec:
         value:
           ami:
-            id: ami-<ami_id_string> <1>
+            id: ami-<ami_id_string>
           apiVersion: machine.openshift.io/v1beta1
           blockDevices:
-          - ebs: <2>
+          - ebs:
              encrypted: true
              iops: 0
              kmsKey:
@@ -42,13 +44,13 @@ spec:
              volumeSize: 120
              volumeType: gp3
           credentialsSecret:
-            name: aws-cloud-credentials <3>
+            name: aws-cloud-credentials
           deviceIndex: 0
           iamInstanceProfile:
-            id: <cluster_id>-master-profile <4>
-          instanceType: m6i.xlarge <5>
-          kind: AWSMachineProviderConfig <6>
-          loadBalancers: <7>
+            id: <cluster_id>-master-profile
+          instanceType: m6i.xlarge
+          kind: AWSMachineProviderConfig
+          loadBalancers:
           - name: <cluster_id>-int
             type: network
           - name: <cluster_id>-ext
@@ -56,45 +58,100 @@ spec:
           metadata:
             creationTimestamp: null
           metadataServiceOptions: {}
-          placement: <8>
-            region: <region> <9>
-            availabilityZone: "" <10>
-            tenancy: <11>
+          placement:
+            region: <region>
+            availabilityZone: ""
+            tenancy:
           securityGroups:
           - filters:
             - name: tag:Name
               values:
-              - <cluster_id>-master-sg <12>
-          subnet: {} <13>
+              - <cluster_id>-node
+          - filters:
+            - name: tag:Name
+              values:
+              - <cluster_id>-lb
+          - filters:
+            - name: tag:Name
+              values:
+              - <cluster_id>-controlplane
+          subnet: {}
           userDataSecret:
-            name: master-user-data <14>
+            name: master-user-data
 ----
-<1> Specifies the {op-system-first} Amazon Machine Images (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the {product-title} subscription from the link:https://aws.amazon.com/marketplace/fulfillment?productId=59ead7de-2540-4653-a8b0-fa7926d5c845[AWS Marketplace] to obtain an AMI ID for your region.
-<2> Specifies the configuration of an encrypted EBS volume.
-<3> Specifies the secret name for the cluster. Do not change this value.
-<4> Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value.
-<5> Specifies the AWS instance type for the control plane.
-<6> Specifies the cloud provider platform type. Do not change this value.
-<7> Specifies the internal (`int`) and external (`ext`) load balancers for the cluster.
+where:
+
+`<ami_id_string>`::
+Specifies the {op-system-first} Amazon Machine Images (AMI) ID for the cluster.
+The AMI must belong to the same region as the cluster.
+If you want to use an AWS Marketplace image, you must complete the {product-title} subscription from the link:https://aws.amazon.com/marketplace/fulfillment?productId=59ead7de-2540-4653-a8b0-fa7926d5c845[AWS Marketplace] to obtain an AMI ID for your region.
+
+`spec.template.spec.providerSpec.value.blockDevices.ebs`::
+Specifies the configuration of an encrypted Amazon Elastic Block Store (Amazon EBS) volume.
+
+`spec.template.spec.providerSpec.value.credentialsSecret.name`::
+Specifies the secret name for the cluster.
+Do not change this value.
+
+`spec.template.spec.providerSpec.value.iamInstanceProfile`::
+Specifies the AWS Identity and Access Management (IAM) instance profile.
+Do not change this value.
+
+`spec.template.spec.providerSpec.value.instanceType`::
+Specifies the AWS instance type for the control plane.
+
+`spec.template.spec.providerSpec.value.kind`::
+Specifies the cloud provider platform type.
+Do not change this value.
+
+`spec.template.spec.providerSpec.value.loadBalancers`::
+Specifies the internal (`int`) and external (`ext`) load balancers for the cluster.
++
 [NOTE]
 ====
 You can omit the external (`ext`) load balancer parameters on private {product-title} clusters.
 ====
-<8> Specifies where to create the control plane instance in AWS.
-<9> Specifies the AWS region for the cluster.
-<10> This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.
-<11> Specifies the AWS Dedicated Instance configuration for the control plane. For more information, see AWS documentation about link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html[Dedicated Instances]. The following values are valid:
+
+`spec.template.spec.providerSpec.value.placement`::
+Specifies where to create the control plane instance in AWS.
+The following keys in this stanza specify additional details:
++
+--
+`region`::
+Specifies the AWS region for the cluster.
+`availabilityZone`::
+This parameter is in the failure domain configuration and has an empty value here.
+--
++
+--
+include::snippets/cpmso-failure-domain-param-precedence.adoc[]
+--
+
+`tenancy`::
+Specifies the AWS Dedicated Instance configuration for the control plane.
+For more information, see AWS documentation about link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html[Dedicated Instances].
+The following values are valid:
++
+--
 * `default`: The Dedicated Instance runs on shared hardware.
 * `dedicated`: The Dedicated Instance runs on single-tenant hardware.
 * `host`: The Dedicated Instance runs on a Dedicated Host, which is an isolated server with configurations that you can control.
-<12> Specifies the control plane machines security group.
-<13> This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain.
+--
+
+`spec.template.spec.providerSpec.value.securityGroups`::
+Specifies the control plane machines security group.
+
+`spec.template.spec.providerSpec.value.subnet`::
+This parameter is in the failure domain configuration and has an empty value here.
++
+--
+include::snippets/cpmso-failure-domain-param-precedence.adoc[]
+--
++
 [NOTE]
 ====
-If the failure domain configuration does not specify a value, the value in the provider specification is used.
 Configuring a subnet in the failure domain overwrites the subnet value in the provider specification.
+If the failure domain configuration does not specify a value, the control plane machines use the value in the provider specification.
 ====
-//TODO: clarify with dev about this one in 4.16+
-<14> Specifies the control plane user data secret. Do not change this value.
+
+`spec.template.spec.providerSpec.value.userDataSecret`::
+Specifies the control plane user data secret. Do not change this value.
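As a worked example of the `tenancy` values listed above, the following sketch shows a `placement` stanza that requests single-tenant hardware. It is illustrative only; the sample in the diff leaves `tenancy` empty, and the surrounding fields are copied unchanged from it:

```yaml
# Illustrative sketch: run control plane instances as Dedicated Instances
# on single-tenant hardware. Valid tenancy values per the reference above:
# default, dedicated, host.
          placement:
            region: <region>
            availabilityZone: ""   # set by the failure domain configuration
            tenancy: dedicated
```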
@@ -4,16 +4,18 @@

 :_mod-docs-content-type: REFERENCE
 [id="cpmso-yaml-provider-spec-azure_{context}"]
-= Sample Azure provider specification
+= Sample {azure-short} provider specification

-When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane `Machine` CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.
+[role="_abstract"]
+You can update your control plane machines to reflect changes in your underlying infrastructure by editing values in the control plane machine set provider specification.

-In the following example, `<cluster_id>` is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
+The following example YAML illustrates a valid configuration for a {azure-first} cluster.

-[source,terminal]
-----
-$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
-----
+include::snippets/cpmso-new-providerspec-match-install.adoc[]
+
+You can omit any field that has a value set in the failure domain section of the CR.
+
+include::snippets/cluster-id-explanation-oc-get.adoc[]

 .Sample Azure `providerSpec` values
 [source,yaml]
@@ -33,58 +35,78 @@ spec:
           acceleratedNetworking: true
           apiVersion: machine.openshift.io/v1beta1
           credentialsSecret:
-            name: azure-cloud-credentials <1>
+            name: azure-cloud-credentials
             namespace: openshift-machine-api
           diagnostics: {}
-          image: <2>
+          image:
             offer: ""
             publisher: ""
-            resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 <3>
+            resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930
             sku: ""
             version: ""
-          internalLoadBalancer: <cluster_id>-internal <4>
-          kind: AzureMachineProviderSpec <5>
-          location: <region> <6>
+          internalLoadBalancer: <cluster_id>-internal
+          kind: AzureMachineProviderSpec
+          location: <region>
           managedIdentity: <cluster_id>-identity
           metadata:
             creationTimestamp: null
           name: <cluster_id>
           networkResourceGroup: <cluster_id>-rg
-          osDisk: <7>
+          osDisk:
             diskSettings: {}
             diskSizeGB: 1024
             managedDisk:
              storageAccountType: Premium_LRS
             osType: Linux
           publicIP: false
-          publicLoadBalancer: <cluster_id> <8>
+          publicLoadBalancer: <cluster_id>
           resourceGroup: <cluster_id>-rg
-          subnet: <cluster_id>-master-subnet <9>
+          subnet: <cluster_id>-master-subnet
           userDataSecret:
-            name: master-user-data <10>
+            name: master-user-data
           vmSize: Standard_D8s_v3
           vnet: <cluster_id>-vnet
-          zone: "1" <11>
+          zone: "1"
 ----
<1> Specifies the secret name for the cluster. Do not change this value.
|
||||
<2> Specifies the image details for your control plane machine set.
|
||||
<3> Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a `-gen2` suffix, while V1 images have the same name without the suffix.
|
||||
<4> Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the `ControlPlaneMachineSet` and control plane `Machine` CRs.
|
||||
<5> Specifies the cloud provider platform type. Do not change this value.
|
||||
<6> Specifies the region to place control plane machines on.
|
||||
<7> Specifies the disk configuration for the control plane.
|
||||
<8> Specifies the public load balancer for the control plane.
|
||||
where:
|
||||
|
||||
`spec.template.spec.providerSpec.value.credentialsSecret.name`::
|
||||
Specifies the secret name for the cluster.
|
||||
Do not change this value.
|
||||
`spec.template.spec.providerSpec.value.image`::
|
||||
Specifies the image details for your control plane machine set.
|
||||
`spec.template.spec.providerSpec.value.image.resourceID`::
|
||||
Specifies an image that is compatible with your instance type.
|
||||
The Hyper-V generation V2 images created by the installation program have a `-gen2` suffix, while V1 images have the same name without the suffix.
|
||||
`spec.template.spec.providerSpec.value.internalLoadBalancer`::
|
||||
Specifies the internal load balancer for the control plane.
|
||||
The `ControlPlaneMachineSet` and control plane `Machine` CRs require this field.
|
||||
If the field is empty, specify a valid value.
|
||||
`spec.template.spec.providerSpec.value.kind`::
Specifies the cloud provider platform type.
Do not change this value.
`spec.template.spec.providerSpec.value.location`::
Specifies the region to place control plane machines on.
`spec.template.spec.providerSpec.value.osDisk`::
Specifies the disk configuration for the control plane.
`spec.template.spec.providerSpec.value.publicLoadBalancer`::
Specifies the public load balancer for the control plane.
+
[NOTE]
====
You can omit the `publicLoadBalancer` parameter on private {product-title} clusters that have user-defined outbound routing.
====
<9> Specifies the subnet for the control plane.
<10> Specifies the control plane user data secret. Do not change this value.
<11> Specifies the zone configuration for clusters that use a single zone for all failure domains.

`spec.template.spec.providerSpec.value.subnet`::
Specifies the subnet for the control plane.
`spec.template.spec.providerSpec.value.userDataSecret`::
Specifies the control plane user data secret.
Do not change this value.
`spec.template.spec.providerSpec.value.zone`::
Specifies the zone configuration for clusters that use a single zone for all failure domains.
+
[NOTE]
====
If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain.
If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it.
If the cluster uses a different zone for each failure domain, configure this parameter in the failure domain.
If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it and uses the value in the failure domain.
====
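For orientation, the per-zone values that the note above defers to the failure domain come from a `failureDomains` stanza. The following is a minimal sketch for Azure; the zone values are hypothetical examples, so substitute the zones available in your region:

```yaml
# Sketch of a ControlPlaneMachineSet failureDomains stanza for Azure.
# The zone values are hypothetical examples.
failureDomains:
  platform: Azure
  azure:
  - zone: "1"
  - zone: "2"
  - zone: "3"
```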
@@ -6,22 +6,28 @@
[id="cpmso-yaml-provider-spec-gcp_{context}"]
= Sample {gcp-short} provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates. You can omit any field that is set in the failure domain section of the CR.
[role="_abstract"]
You can update your control plane machines to reflect changes in your underlying infrastructure by editing values in the control plane machine set provider specification.

The following example YAML illustrates a valid configuration for an {gcp-first} cluster.

[id="cpmso-yaml-provider-spec-gcp-oc_{context}"]
== Values obtained by using the OpenShift CLI
include::snippets/cpmso-new-providerspec-match-install.adoc[]

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.
You can omit any field that has a value set in the failure domain section of the CR.

Infrastructure ID:: The `<cluster_id>` string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
In the following example, you can obtain some of the values for your cluster by using the {oc-first}.

Infrastructure ID:: The `<cluster_id>` string is the infrastructure ID.
The infrastructure ID matches the cluster ID that the installation program used during cluster provisioning.
If you have `oc` installed, you can obtain the infrastructure ID by running the following command:
+
[source,terminal]
----
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
----
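As a sketch of how this ID is consumed elsewhere in the provider specification, the following hypothetical example composes the resource names that appear in the sample YAML (`<cluster_id>-network`, `<cluster_id>-master-subnet`) from an assumed infrastructure ID; the `CLUSTER_ID` value is a placeholder, so substitute the output of the `oc get` command above:

```shell
# Hypothetical infrastructure ID; substitute the output of the
# `oc get ... infrastructure cluster` command above.
CLUSTER_ID="mycluster-abc12"

# Resource names in the sample providerSpec are derived from this ID.
NETWORK="${CLUSTER_ID}-network"
SUBNETWORK="${CLUSTER_ID}-master-subnet"

echo "${NETWORK} ${SUBNETWORK}"
```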

Image path:: The `<path_to_image>` string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
Image path:: The `<path_to_image>` string is the path to the source image for the disk.
If you have `oc` installed, you can obtain the path to the image by running the following command:
+
[source,terminal]
----
@@ -48,16 +54,16 @@ spec:
apiVersion: machine.openshift.io/v1beta1
canIPForward: false
credentialsSecret:
  name: gcp-cloud-credentials <1>
  name: gcp-cloud-credentials
deletionProtection: false
disks:
- autoDelete: true
  boot: true
  image: <path_to_image> <2>
  image: <path_to_image>
  labels: null
  sizeGb: 200
  type: pd-ssd
kind: GCPMachineProviderSpec <3>
kind: GCPMachineProviderSpec
machineType: e2-standard-4
metadata:
  creationTimestamp: null
@@ -65,9 +71,9 @@ spec:
networkInterfaces:
- network: <cluster_id>-network
  subnetwork: <cluster_id>-master-subnet
projectID: <project_name> <4>
region: <region> <5>
serviceAccounts: <6>
projectID: <project_name>
region: <region>
serviceAccounts:
- email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com
  scopes:
  - https://www.googleapis.com/auth/cloud-platform
@@ -77,11 +83,17 @@ spec:
targetPools:
- <cluster_id>-api
userDataSecret:
  name: master-user-data <7>
zone: "" <8>
  name: master-user-data
zone: ""
----
<1> Specifies the secret name for the cluster. Do not change this value.
<2> Specifies the path to the image that was used to create the disk.
where:

`spec.template.spec.providerSpec.value.credentialsSecret.name`::
Specifies the secret name for the cluster.
Do not change this value.

`spec.template.spec.providerSpec.value.disks.image`::
Specifies the path to the source image for the disk.
+
To use a {gcp-short} Marketplace image, specify the offer to use:
+
@@ -90,9 +102,28 @@ To use a {gcp-short} Marketplace image, specify the offer to use:
* {opp}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736`
* {oke}: `\https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736`
--
<3> Specifies the cloud provider platform type. Do not change this value.
<4> Specifies the name of the {gcp-short} project that you use for your cluster.
<5> Specifies the {gcp-short} region for the cluster.
<6> Specifies a single service account. Multiple service accounts are not supported.
<7> Specifies the control plane user data secret. Do not change this value.
<8> This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain.

`spec.template.spec.providerSpec.value.kind`::
Specifies the cloud provider platform type.
Do not change this value.

`spec.template.spec.providerSpec.value.projectID`::
Specifies the name of the {gcp-short} project that you use for your cluster.

`spec.template.spec.providerSpec.value.region`::
Specifies the {gcp-short} region for the cluster.

`spec.template.spec.providerSpec.value.serviceAccounts`::
Specifies a single service account.
Specifying more than one service account is not supported.

`spec.template.spec.providerSpec.value.userDataSecret`::
Specifies the control plane user data secret.
Do not change this value.

`spec.template.spec.providerSpec.value.zone`::
This parameter is in the failure domain configuration and has an empty value here.
+
--
include::snippets/cpmso-failure-domain-param-precedence.adoc[]
--
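To illustrate where the empty `zone` value is filled in, the following is a minimal sketch of a {gcp-short} `failureDomains` stanza; the zone names are hypothetical examples:

```yaml
# Sketch of a ControlPlaneMachineSet failureDomains stanza for GCP.
# Zone names are hypothetical; the Operator applies the zone from the
# failure domain to each machine's provider specification.
failureDomains:
  platform: GCP
  gcp:
  - zone: us-central1-a
  - zone: us-central1-b
  - zone: us-central1-c
```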
@@ -6,20 +6,14 @@
[id="cpmso-yaml-provider-spec-nutanix_{context}"]
= Sample Nutanix provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates.
[role="_abstract"]
You can update your control plane machines to reflect changes in your underlying infrastructure by editing values in the control plane machine set provider specification.

The following example YAML illustrates a valid configuration for a Nutanix cluster.

[id="cpmso-yaml-provider-spec-nutanix-oc_{context}"]
== Values obtained by using the OpenShift CLI
include::snippets/cpmso-new-providerspec-match-install.adoc[]

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID:: The `<cluster_id>` string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
+
[source,terminal]
----
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
----
include::snippets/cluster-id-explanation-oc-get.adoc[]

.Sample Nutanix `providerSpec` values
[source,yaml]
@@ -37,84 +31,107 @@ spec:
providerSpec:
  value:
    apiVersion: machine.openshift.io/v1
    bootType: "" <1>
    categories: <2>
    bootType: ""
    categories:
    - key: <category_name>
      value: <category_value>
    cluster: <3>
    cluster:
      type: uuid
      uuid: <cluster_uuid>
    credentialsSecret:
      name: nutanix-credentials <4>
    image: <5>
      name: nutanix-credentials
    image:
      name: <cluster_id>-rhcos
      type: name
    kind: NutanixMachineProviderConfig <6>
    memorySize: 16Gi <7>
    kind: NutanixMachineProviderConfig
    memorySize: 16Gi
    metadata:
      creationTimestamp: null
    project: <8>
    project:
      type: name
      name: <project_name>
    subnets: <9>
    subnets:
    - type: uuid
      uuid: <subnet_uuid>
    systemDiskSize: 120Gi <10>
    systemDiskSize: 120Gi
    userDataSecret:
      name: master-user-data <11>
    vcpuSockets: 8 <12>
    vcpusPerSocket: 1 <13>
      name: master-user-data
    vcpuSockets: 8
    vcpusPerSocket: 1
----
<1> Specifies the boot type that the control plane machines use. For more information about boot types, see link:https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000H3K9SAK[Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment]. Valid values are `Legacy`, `SecureBoot`, or `UEFI`. The default is `Legacy`.
where:

`spec.template.spec.providerSpec.value.bootType`::
Specifies the boot type that the control plane machines use.
For more information about boot types, see link:https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000H3K9SAK[Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment (Nutanix documentation)].
+
Valid values are `Legacy`, `SecureBoot`, or `UEFI`.
The default is `Legacy`.
+
[NOTE]
====
You must use the `Legacy` boot type in {product-title} {product-version}.
====
<2> Specifies one or more Nutanix Prism categories to apply to control plane machines. This stanza requires `key` and `value` parameters for a category key-value pair that exists in Prism Central. For more information about categories, see link:https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2022_6:ssp-ssp-categories-manage-pc-c.html[Category management].
<3> Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is `uuid`, so there is a `uuid` stanza.

`spec.template.spec.providerSpec.value.categories`::
Specifies one or more Nutanix Prism categories to apply to control plane machines.
This stanza requires `key` and `value` parameters for a category key-value pair that exists in Prism Central.
For more information about categories, see link:https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2022_6:ssp-ssp-categories-manage-pc-c.html[Category management].

`spec.template.spec.providerSpec.value.cluster`::
Specifies a Nutanix Prism Element cluster configuration.
In this example, the cluster type is `uuid`, so there is a `uuid` stanza.
+
[NOTE]
====
Clusters that use {product-title} version 4.15 or later can use failure domain configurations.

If the cluster uses a failure domain, configure this parameter in the failure domain.
If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.
include::snippets/cpmso-failure-domain-param-precedence.adoc[]
====
<4> Specifies the secret name for the cluster. Do not change this value.
<5> Specifies the image that was used to create the disk.
<6> Specifies the cloud provider platform type. Do not change this value.
<7> Specifies the memory allocated for the control plane machines.
<8> Specifies the Nutanix project that you use for your cluster. In this example, the project type is `name`, so there is a `name` stanza.
<9> Specify one or more Prism Element subnet objects.

`spec.template.spec.providerSpec.value.credentialsSecret`::
Specifies the secret name for the cluster.
Do not change this value.

`spec.template.spec.providerSpec.value.image`::
Specifies the path to the source image for the disk.

`spec.template.spec.providerSpec.value.kind`::
Specifies the cloud provider platform type.
Do not change this value.

`spec.template.spec.providerSpec.value.memorySize`::
Specifies the memory allocated for the control plane machines.

`spec.template.spec.providerSpec.value.project`::
Specifies the Nutanix project that you use for your cluster.
In this example, the project type is `name`, so there is a `name` stanza.

`spec.template.spec.providerSpec.value.subnets`::
Specify one or more Prism Element subnet objects.
In this example, the subnet type is `uuid`, so there is a `uuid` stanza.
A maximum of 32 subnets for each Prism Element failure domain in the cluster is supported.
+
[IMPORTANT]
====
The following known issues with configuring multiple subnets for an existing Nutanix cluster by using a control plane machine set exist in {product-title} version 4.18:

* Adding subnets above the existing subnet in the `subnets` stanza causes a control plane node to become stuck in the `Deleting` state.
As a workaround, only add subnets below the existing subnet in the `subnets` stanza.

* Sometimes, after adding a subnet, the updated control plane machines appear in the Nutanix console but the {product-title} cluster is unreachable.
There is no workaround for this issue.

These issues occur on clusters that use a control plane machine set to configure subnets regardless of whether subnets are specified in a failure domain or the provider specification.
For more information, see link:https://issues.redhat.com/browse/OCPBUGS-50904[*OCPBUGS-50904*].
Do not remove the original subnet, which hosts the API server and ingress server, from the cluster.
====
+
The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the {product-title} cluster uses.
The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the {product-title} cluster uses.
All subnet UUID values must be unique.
+
[NOTE]
====
Clusters that use {product-title} version 4.15 or later can use failure domain configurations.

If the cluster uses a failure domain, configure this parameter in the failure domain.
If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.
include::snippets/cpmso-failure-domain-param-precedence.adoc[]
====
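As a sketch of the list shape this parameter expects, the following hypothetical `subnets` stanza adds a second subnet below the existing one, in line with the workaround above; the UUID values are placeholders:

```yaml
# Hypothetical subnets stanza; each entry identifies a Prism Element
# subnet by UUID, and all UUID values must be unique.
subnets:
- type: uuid
  uuid: <existing_subnet_uuid>
- type: uuid
  uuid: <additional_subnet_uuid>
```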
<10> Specifies the VM disk size for the control plane machines.
<11> Specifies the control plane user data secret. Do not change this value.
<12> Specifies the number of vCPU sockets allocated for the control plane machines.
<13> Specifies the number of vCPUs for each control plane vCPU socket.

`spec.template.spec.providerSpec.value.systemDiskSize`::
Specifies the VM disk size for the control plane machines.

`spec.template.spec.providerSpec.value.userDataSecret`::
Specifies the control plane user data secret.
Do not change this value.

`spec.template.spec.providerSpec.value.vcpuSockets`::
Specifies the number of vCPU sockets allocated for the control plane machines.

`spec.template.spec.providerSpec.value.vcpusPerSocket`::
Specifies the number of vCPUs for each control plane vCPU socket.

@@ -6,7 +6,17 @@
[id="cpmso-yaml-provider-spec-openstack_{context}"]
= Sample {rh-openstack} provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates.
[role="_abstract"]
You can update your control plane machines to reflect changes in your underlying infrastructure by editing values in the control plane machine set provider specification.

The following example YAML illustrates a valid configuration for an {rh-openstack-first} cluster.

include::snippets/cpmso-new-providerspec-match-install.adoc[]

//True for OpenStack?
You can omit any field that has a value set in the failure domain section of the CR.

include::snippets/cluster-id-explanation-oc-get.adoc[]

.Sample OpenStack `providerSpec` values
[source,yaml]
@@ -26,33 +36,44 @@ spec:
apiVersion: machine.openshift.io/v1alpha1
cloudName: openstack
cloudsSecret:
  name: openstack-cloud-credentials <1>
  name: openstack-cloud-credentials
  namespace: openshift-machine-api
flavor: m1.xlarge <2>
image: ocp1-2g2xs-rhcos
kind: OpenstackProviderSpec <3>
flavor: m1.xlarge
image: <cluster_id>-rhcos
kind: OpenstackProviderSpec
metadata:
  creationTimestamp: null
networks:
- filter: {}
  subnets:
  - filter:
      name: ocp1-2g2xs-nodes
      tags: openshiftClusterID=ocp1-2g2xs
      name: <cluster_id>-nodes
      tags: openshiftClusterID=<cluster_id>
securityGroups:
- filter: {}
  name: ocp1-2g2xs-master <4>
serverGroupName: ocp1-2g2xs-master
  name: <cluster_id>-master
serverGroupName: <cluster_id>-master
serverMetadata:
  Name: ocp1-2g2xs-master
  openshiftClusterID: ocp1-2g2xs
  Name: <cluster_id>-master
  openshiftClusterID: <cluster_id>
tags:
- openshiftClusterID=ocp1-2g2xs
- openshiftClusterID=<cluster_id>
trunk: true
userDataSecret:
  name: master-user-data
----
<1> The secret name for the cluster. Do not change this value.
<2> The {rh-openstack} flavor type for the control plane.
<3> The {rh-openstack} cloud provider platform type. Do not change this value.
<4> The control plane machines security group.
where:

`spec.template.spec.providerSpec.value.cloudsSecret.name`::
Specifies the secret name for the cluster.
Do not change this value.

`spec.template.spec.providerSpec.value.flavor`::
Specifies the {rh-openstack} flavor type for the control plane.

`spec.template.spec.providerSpec.value.kind`::
Specifies the cloud provider platform type.
Do not change this value.

`spec.template.spec.providerSpec.value.securityGroups`::
Specifies the security group for the control plane machines.
@@ -4,9 +4,16 @@

:_mod-docs-content-type: REFERENCE
[id="cpmso-yaml-provider-spec-vsphere_{context}"]
= Sample VMware vSphere provider specification
= Sample {vmw-short} provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates.
[role="_abstract"]
You can update your control plane machines to reflect changes in your underlying infrastructure by editing values in the control plane machine set provider specification.

The following example YAML illustrates a valid configuration for a {vmw-first} cluster.

include::snippets/cpmso-new-providerspec-match-install.adoc[]

You can omit any field that has a value set in the failure domain section of the CR.

.Sample vSphere `providerSpec` values
[source,yaml]
@@ -25,64 +32,89 @@ spec:
value:
  apiVersion: machine.openshift.io/v1beta1
  credentialsSecret:
    name: vsphere-cloud-credentials <1>
  dataDisks: <2>
    name: vsphere-cloud-credentials
  dataDisks:
  - name: "<disk_name>"
    provisioningMode: "<mode>"
    sizeGiB: 20
  diskGiB: 120 <3>
  kind: VSphereMachineProviderSpec <4>
  memoryMiB: 16384 <5>
  diskGiB: 120
  kind: VSphereMachineProviderSpec
  memoryMiB: 16384
  metadata:
    creationTimestamp: null
  network: <6>
  network:
    devices:
    - networkName: <vm_network_name>
  numCPUs: 4 <7>
  numCoresPerSocket: 4 <8>
  numCPUs: 4
  numCoresPerSocket: 4
  snapshot: ""
  template: <vm_template_name> <9>
  template: <vm_template_name>
  userDataSecret:
    name: master-user-data <10>
  workspace: <11>
    datacenter: <vcenter_data_center_name> <12>
    datastore: <vcenter_datastore_name> <13>
    folder: <path_to_vcenter_vm_folder> <14>
    resourcePool: <vsphere_resource_pool> <15>
    server: <vcenter_server_ip> <16>
    name: master-user-data
  workspace:
    datacenter: <vcenter_data_center_name>
    datastore: <vcenter_datastore_name>
    folder: <path_to_vcenter_vm_folder>
    resourcePool: <vsphere_resource_pool>
    server: <vcenter_server_ip>
----
<1> Specifies the secret name for the cluster. Do not change this value.
<2> Specifies one or more data disk definitions.
where:

`spec.template.spec.providerSpec.value.credentialsSecret`::
Specifies the secret name for the cluster.
Do not change this value.
`spec.template.spec.providerSpec.value.dataDisks`::
Specifies one or more data disk definitions.
For more information, see "Configuring data disks by using machine sets".
<3> Specifies the VM disk size for the control plane machines.
<4> Specifies the cloud provider platform type. Do not change this value.
<5> Specifies the memory allocated for the control plane machines.
<6> Specifies the network on which the control plane is deployed.
`spec.template.spec.providerSpec.value.diskGiB`::
Specifies the VM disk size for the control plane machines.
`spec.template.spec.providerSpec.value.kind`::
Specifies the cloud provider platform type.
Do not change this value.
`spec.template.spec.providerSpec.value.memoryMiB`::
Specifies the memory allocated for the control plane machines.
`spec.template.spec.providerSpec.value.network`::
Specifies the network on which to deploy the control plane.
+
[NOTE]
====
If the cluster is configured to use a failure domain, this parameter is configured in the failure domain.
If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.
include::snippets/cpmso-failure-domain-param-precedence.adoc[]
====
<7> Specifies the number of CPUs allocated for the control plane machines.
<8> Specifies the number of cores for each control plane CPU.
<9> Specifies the vSphere VM template to use, such as `user-5ddjd-rhcos`.

`spec.template.spec.providerSpec.value.numCPUs`::
Specifies the number of CPUs allocated for the control plane machines.
`spec.template.spec.providerSpec.value.numCoresPerSocket`::
Specifies the number of cores for each control plane CPU.
`spec.template.spec.providerSpec.value.template`::
Specifies the vSphere VM template to use, such as `user-5ddjd-rhcos`.
+
[NOTE]
====
If the cluster is configured to use a failure domain, this parameter is configured in the failure domain.
If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it.
include::snippets/cpmso-failure-domain-param-precedence.adoc[]
====
<10> Specifies the control plane user data secret. Do not change this value.
<11> Specifies the workspace details for the control plane.

`spec.template.spec.providerSpec.value.userDataSecret`::
Specifies the control plane user data secret.
Do not change this value.
`spec.template.spec.providerSpec.value.workspace`::
Specifies the workspace details for the control plane.
+
[NOTE]
====
If the cluster is configured to use a failure domain, these parameters are configured in the failure domain.
If you specify these values in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores them.
include::snippets/cpmso-failure-domain-param-precedence.adoc[]
====
<12> Specifies the vCenter datacenter for the control plane.
<13> Specifies the vCenter datastore for the control plane.
<14> Specifies the path to the vSphere VM folder in vCenter, such as `/dc1/vm/user-inst-5ddjd`.
<15> Specifies the vSphere resource pool for your VMs.
<16> Specifies the vCenter server IP or fully qualified domain name.
+
The following keys in this stanza specify additional details:
+
--
`datacenter`::
Specifies the vCenter datacenter for the control plane.
`datastore`::
Specifies the vCenter datastore for the control plane.
`folder`::
Specifies the path to the vSphere VM folder in vCenter, such as `/dc1/vm/user-inst-5ddjd`.
`resourcePool`::
Specifies the vSphere resource pool for your VMs.
`server`::
Specifies the vCenter server IP or fully qualified domain name.
--

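The notes above defer several parameters to the failure domain. For orientation, a minimal sketch of a vSphere `failureDomains` stanza that references a failure domain by name might look like the following; the name is a placeholder for a failure domain defined for the cluster:

```yaml
# Sketch of a ControlPlaneMachineSet failureDomains stanza for vSphere.
# The name is a placeholder that refers to a failure domain defined
# for the cluster.
failureDomains:
  platform: VSphere
  vsphere:
  - name: <failure_domain_name>
```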
17
snippets/cluster-id-explanation-oc-get.adoc
Normal file
@@ -0,0 +1,17 @@
// Text snippet included in the following modules:
//
// * modules/cpmso-yaml-provider-spec-aws.adoc
// * modules/cpmso-yaml-provider-spec-azure.adoc
// * modules/cpmso-yaml-provider-spec-nutanix.adoc
// * modules/cpmso-yaml-provider-spec-openstack.adoc

:_mod-docs-content-type: SNIPPET

In the following example, the `<cluster_id>` string is the infrastructure ID.
The infrastructure ID matches the cluster ID that the installation program used during cluster provisioning.
If you have the {oc-first} installed, you can obtain the infrastructure ID by running the following command:

[source,terminal]
----
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
----
11
snippets/cpmso-failure-domain-param-precedence.adoc
Normal file
@@ -0,0 +1,11 @@
// Text snippet included in the following modules:
//
// * modules/cpmso-yaml-provider-spec-aws.adoc
// * modules/cpmso-yaml-provider-spec-gcp.adoc
// * modules/cpmso-yaml-provider-spec-nutanix.adoc
// * modules/cpmso-yaml-provider-spec-vsphere.adoc

:_mod-docs-content-type: SNIPPET

If the cluster uses a failure domain, configure this parameter in the failure domain.
If you specify this value in the provider specification when using a failure domain, the Control Plane Machine Set Operator ignores it and uses the value in the failure domain.
15
snippets/cpmso-new-providerspec-match-install.adoc
Normal file
@@ -0,0 +1,15 @@
// Text snippet included in the following modules:
//
// * modules/cpmso-yaml-provider-spec-aws.adoc
// * modules/cpmso-yaml-provider-spec-azure.adoc
// * modules/cpmso-yaml-provider-spec-gcp.adoc
// * modules/cpmso-yaml-provider-spec-nutanix.adoc
// * modules/cpmso-yaml-provider-spec-openstack.adoc
// * modules/cpmso-yaml-provider-spec-vsphere.adoc

:_mod-docs-content-type: SNIPPET

[NOTE]
====
When you create a control plane machine set for an existing cluster, the provider specification must match the `providerSpec` configuration in the control plane machine custom resource (CR) that the installation program creates.
====