Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00
Machines
This provides a more organic and meaningful ordering for new users: create a set, scale it, autoscale it. Create a variation, play with health checks.
committed by Kathryn Alexander
parent 665548bee4
commit a0f7783600
@@ -55,9 +55,6 @@ Topics:
  Distros: openshift-enterprise,openshift-origin
- Name: Operators in OpenShift Container Platform
  File: architecture-operators
- Name: Machine API Overview
  File: architecture-machine-api-overview
  Distros: openshift-enterprise,openshift-origin
- Name: Available cluster customizations
  File: customizations
  Distros: openshift-enterprise,openshift-origin
@@ -418,20 +415,20 @@ Name: Machine management
Dir: machine_management
Distros: openshift-origin,openshift-enterprise
Topics:
- Name: Deploying machine health checks
  File: deploying-machine-health-checks
- Name: Applying autoscaling to a cluster
  File: applying-autoscaling
- Name: Manually scaling a MachineSet
  File: manually-scaling-machineset
- Name: Creating a MachineSet
  File: creating-machineset
- Name: Manually scaling a MachineSet
  File: manually-scaling-machineset
- Name: Applying autoscaling to a cluster
  File: applying-autoscaling
- Name: Creating infrastructure MachineSets
  File: creating-infrastructure-machinesets
- Name: Adding a RHEL compute machine
  File: adding-rhel-compute
- Name: Adding more RHEL compute machines
  File: more-rhel-compute
- Name: Deploying machine health checks
  File: deploying-machine-health-checks
---
Name: Nodes
Dir: nodes
@@ -1,13 +0,0 @@
[id="architecture-machine-api-overview"]
= Machine API overview
include::modules/common-attributes.adoc[]
:context: architecture-machine-api-overview
toc::[]

For {product-title} {product-version} clusters, the Machine API performs all node
management actions after the cluster installation finishes. Because of this
system, {product-title} {product-version} offers an elastic, dynamic provisioning
method on top of public or private cloud infrastructure.

include::modules/machine-api-overview.adoc[leveloffset=+1]
@@ -13,8 +13,6 @@ that are required to run the environment.

include::modules/infrastructure-components.adoc[leveloffset=+1]

include::modules/machine-api-overview.adoc[leveloffset=+1]

[id="creating-infrastructure-machinesets-production"]
== Creating infrastructure MachineSets for production environments

@@ -25,6 +23,8 @@ instances that are installed on different nodes. For high availability, install
deploy these nodes to different availability zones. Since you need different
MachineSets for each availability zone, create at least three MachineSets.

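The per-zone variation is mostly mechanical; a minimal sketch of the fields that differ between the three MachineSets, assuming the sample MachineSet YAML added later in this commit (zone names are illustrative):

[source,yaml]
----
# Only the name, the cluster-api-machineset labels, and the placement block change per zone.
metadata:
  name: <clusterID>-<role>-us-east-1a    # second MachineSet: ...-us-east-1b, third: ...-us-east-1c
spec:
  template:
    spec:
      providerSpec:
        value:
          placement:
            availabilityZone: us-east-1a # keep in step with the name
            region: us-east-1
----
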
include::modules/machineset-yaml.adoc[leveloffset=+2]

include::modules/machineset-creating.adoc[leveloffset=+2]

[id="moving-resources-to-infrastructure-machinesets"]

@@ -5,4 +5,8 @@ include::modules/common-attributes.adoc[]

toc::[]

include::modules/machine-api-overview.adoc[leveloffset=+1]

include::modules/machineset-yaml.adoc[leveloffset=+1]

include::modules/machineset-creating.adoc[leveloffset=+1]
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * architecture/architecture.adoc
// * machine-management/creating-infrastructure-machinesets.adoc

[id="machine-api-overview_{context}"]
= Machine API overview
@@ -8,42 +8,52 @@
The Machine API is a combination of primary resources that are based on the
upstream Cluster API project and custom {product-title} resources.

For {product-title} {product-version} clusters, the Machine API performs all node
host provisioning management actions after the cluster installation finishes.
Because of this system, {product-title} {product-version} offers an elastic,
dynamic provisioning method on top of public or private cloud infrastructure.

The two primary resources are:

`Machines`:: A fundamental unit that describes a `Node`. A `machine` has a
Machines:: A fundamental unit that describes the host for a Node. A machine has a
providerSpec, which describes the types of compute nodes that are offered for different
cloud platforms. For example, a `machine` type for a worker node on Amazon Web
cloud platforms. For example, a machine type for a worker node on Amazon Web
Services (AWS) might define a specific machine type and required metadata.
`MachineSets`:: Groups of machines. `MachineSets` are to `machines` as
`ReplicaSets` are to `Pods`. If you need more `machines` or must scale them down,
you change the *replicas* field on the `MachineSet` to meet your compute need.
MachineSets:: Groups of machines. MachineSets are to machines as
ReplicaSets are to Pods. If you need more machines or must scale them down,
you change the *replicas* field on the MachineSet to meet your compute need.
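For example, the replica count can be changed from the command line with `oc scale`; a minimal sketch, assuming the `openshift-machine-api` namespace used elsewhere in this commit (the MachineSet name is illustrative):

----
$ oc scale machineset agl030519-vplxk-worker-us-east-1a --replicas=2 -n openshift-machine-api
----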

The following custom resources add more capabilities to your cluster:

`MachineAutoscaler`:: This resource automatically scales `machines` in
MachineAutoscaler:: This resource automatically scales machines in
a cloud. You can set the minimum and maximum scaling boundaries for nodes in a
specified `MachineSet`, and the `MachineAutoscaler` maintains that range of nodes.
The `MachineAutoscaler` object takes effect after a `ClusterAutoscaler` object
exists. Both `ClusterAutoscaler` and `MachineAutoscaler` resources are made
available by the `ClusterAutoscalerOperator`.
specified MachineSet, and the MachineAutoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object
exists. Both ClusterAutoscaler and MachineAutoscaler resources are made
available by the ClusterAutoscalerOperator.

`ClusterAutoscaler`:: This resource is based on the upstream ClusterAutoscaler
ClusterAutoscaler:: This resource is based on the upstream ClusterAutoscaler
project. In the {product-title} implementation, it is integrated with the
Cluster API by extending the `MachineSet` API. You can set cluster-wide
Machine API by extending the MachineSet API. You can set cluster-wide
scaling limits for resources such as cores, nodes, memory, GPU,
and so on. You can set the priority so that the cluster prioritizes pods so that
new nodes are not brought online for less important pods. You can also set the
ScalingPolicy so you can scale up nodes but not scale them down.

`MachineHealthCheck`:: This resource detects when a machine is unhealthy,
MachineHealthCheck:: This resource detects when a machine is unhealthy,
deletes it, and, on supported platforms, makes a new machine.

+
[NOTE]
====
In version {product-version}, MachineHealthChecks is a Technology Preview
feature
====
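As a rough illustration of how the two autoscaler resources relate, here is a minimal sketch of a `ClusterAutoscaler` and a `MachineAutoscaler` manifest. The API versions, field names, and the target MachineSet name are assumptions drawn from the applying-autoscaling topic that this commit reorders, so verify them against that topic before use:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default             # cluster-wide scaling limits live here
spec:
  resourceLimits:
    maxNodesTotal: 24
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1            # lower boundary for the targeted MachineSet
  maxReplicas: 12           # upper boundary
  scaleTargetRef:           # the MachineSet that this autoscaler manages
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <clusterID>-worker-us-east-1a
----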

In {product-title} version 3.11, you could not roll out a multi-zone
architecture easily because the cluster did not manage machine provisioning.
Beginning with 4.1 this process is easier. Each `MachineSet` is scoped to a
single zone, so the installation program sends out `MachineSets` across
Beginning with 4.1 this process is easier. Each MachineSet is scoped to a
single zone, so the installation program sends out MachineSets across
availability zones on your behalf. And then because your compute is dynamic, and
in the face of a zone failure, you always have a zone for when you must
in the face of a zone failure, you always have a zone for when you must
rebalance your machines. The autoscaler provides best-effort balancing over the
life of a cluster.

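A quick way to inspect these resources on a running cluster, assuming the standard `openshift-machine-api` namespace used throughout this commit:

----
$ oc get machinesets,machines -n openshift-machine-api
----
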
@@ -1,208 +1,51 @@
// Module included in the following assemblies:
//
// * machine-management/creating-infrastructure-machinesets.adoc
// * machine-management/creating-machineset.adoc

[id="machineset-creating_{context}"]
= Creating a MachineSet

You can create more MachineSets. Because the MachineSet definition contains
details that are specific to the Amazon Web Services (AWS) region that the
cluster is deployed in, you copy an existing MachineSet from your cluster and
modify it.
In addition to the ones created by the installer, you can create your own
MachineSets to dynamically manage the machine compute resources for specific
workloads of your choice.

.Prerequisites

* Deploy an {product-title} cluster.
* Install the `oc` command line and log in as a user with `cluster-admin`
permission.
* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* Log in to `oc` as a user with `cluster-admin` permission.
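A quick, illustrative sanity check for these prerequisites; it shows the logged-in user and asks whether that user can create MachineSets in the `openshift-machine-api` namespace (neither command is part of the original procedure):

----
$ oc whoami
$ oc auth can-i create machinesets -n openshift-machine-api
----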

.Procedure

. View the current MachineSets.
. Create a new YAML file that contains the MachineSet Custom Resource sample,
as shown, and is named `<file_name>.yaml`.
+
Ensure that you set the `<clusterID>` and `<role>` parameter values.

. If you are not sure about which value to set for a specific field, you can
check an existing MachineSet from your cluster.

.. View the current MachineSets.
+
----
$ oc get machinesets -n openshift-machine-api

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
190125-3-worker-us-west-1b          2         2         2       2           3h
190125-3-worker-us-west-1c          1         1         1       1           3h
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
----

. Export the source of a MachineSet to a text file:
.. Check values of a specific MachineSet:
+
----
$ oc get machineset <machineset_name> -n \
     openshift-machine-api -o yaml > <file_name>.yaml
     openshift-machine-api -o yaml
----
+
In this command, `<machineset_name>` is the name of the current MachineSet that
is in the AWS region you want to place your new MachineSet in, such
as `190125-3-worker-us-west-1c`, and `<file_name>` is the name of your new
MachineSet definition.

. Update the `metadata` section of `<file_name>.yaml`:
+
[source,yaml]
----
metadata:
  creationTimestamp: 2019-02-15T16:32:56Z <1>
  generation: 1 <1>
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> <2>
  name: <cluster_name>-<machine_label>-<AWS-availability-zone> <3>
  namespace: openshift-machine-api
  resourceVersion: "9249" <1>
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/<infrastructure_id>-<machine_label>-<AWS-availability-zone> <1>
  uid: 59ba0425-313f-11e9-861e-0a18047f0a28 <1>
----
<1> Remove this line.
<2> Do not change the `<infrastructure_id>`.
<3> Ensure that the AWS availability zone is correct in each instance of the
`<AWS-availability-zone>` parameter.
+
The `metadata` section resembles the following YAML:
+
[source,yaml]
----
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <cluster_name>-<new_machine_label>-<AWS-availability-zone>
  namespace: openshift-machine-api
----

. In `<file_name>.yaml`, delete the `status` stanza:
+
[source,yaml]
----
status:
  availableReplicas: 1
  fullyLabeledReplicas: 1
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
----

. In `<file_name>.yaml`, update both instances of the `machine.openshift.io/cluster-api-machineset` parameter
values in the `spec` section to match the `name` that you defined in the `metadata` section:
+
[source,yaml]
----
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machine-role: <machine_label>
      machine.openshift.io/cluster-api-machine-type: <machine_label>
      machine.openshift.io/cluster-api-machineset: <cluster_name>-<machine_label>-<AWS-availability-zone> <1>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <machine_label>
        machine.openshift.io/cluster-api-machine-type: <machine_label>
        machine.openshift.io/cluster-api-machineset: <cluster_name>-<machine_label>-<AWS-availability-zone> <1>
...
----
<1> Ensure that both the `machine.openshift.io/cluster-api-machineset` parameter values
match the `name` that you defined in the `metadata` section.

. In `<file_name>.yaml`, add the node label definition to the spec. The label
definition resembles the following stanza:
+
[source,yaml]
----
spec:
  metadata:
    labels:
      node-role.kubernetes.io/<label_name>: "" <1>
----
<1> In this definition, `<label_name>` is the node label to add. For example, to
add the `infra` label to the nodes, specify `node-role.kubernetes.io/infra`.
+
The updated `spec` section resembles this example:
+
[source,yaml]
----
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <cluster_name>
      machine.openshift.io/cluster-api-machine-role: <machine_label>
      machine.openshift.io/cluster-api-machine-type: <machine_label>
      machine.openshift.io/cluster-api-machineset: <cluster_name>-<machine_label>-<AWS-availability-zone>
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <cluster_name>
        machine.openshift.io/cluster-api-machineset: <cluster_name>-<machine_label>-<AWS-availability-zone>
    spec: <1>
      metadata:
        labels:
          node-role.kubernetes.io/<label_name>: ""
...
----
<1> Place the `spec` stanza here.

. Optionally, modify the EC2 instance type and the storage volumes.
+
[IMPORTANT]
====
Take care to modify only the parameters that describe the EC2 instance type
and storage volumes. You must not change the other parameter values in the
`providerSpec` section.
====
+
[source,yaml]
----
providerSpec:
  value:
    ami:
      id: ami-0e2bcd33dfff9c73e <1>
    apiVersion: awsproviderconfig.k8s.io/v1beta1
    blockDevices: <2>
    - ebs:
        iops: 0
        volumeSize: 120
        volumeType: gp2
    credentialsSecret:
      name: aws-cloud-credentials
    deviceIndex: 0
    iamInstanceProfile:
      id: <cluster_name>-<original_machine_label>-profile <3>
    instanceType: m4.large <4>
    kind: AWSMachineProviderConfig
    metadata:
      creationTimestamp: null
    placement: <3>
      availabilityZone: <AWS-availability-zone>
      region: <AWS-region>
    publicIp: null
    securityGroups:
    - filters:
      - name: tag:Name
        values:
        - <cluster_name>-<machine_label>-sg
    subnet: <3>
      filters:
      - name: tag:Name
        values:
        - <cluster_name>-private-<AWS-availability-zone>
    tags:
    - name: kubernetes.io/cluster/<cluster_name>
      value: owned
    userDataSecret: <3>
      name: <machine_label>-user-data
----
<1> You can specify a different valid AMI.
<2> You can customize the volume characteristics for the MachineSet. See the AWS
documentation.
<3> Do not modify this parameter value.
<4> Specify a valid `instanceType` for the AMI that you specified.

. Create the new `MachineSet`:
+
@@ -216,30 +59,54 @@ $ oc create -f <file_name>.yaml
$ oc get machineset -n openshift-machine-api

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
190125-3-worker-us-west-1b          2         2         2       2           4h
190125-3-worker-us-west-1c          1         1         1       1           4h
infrastructure-us-west-1b           1         1                             4s
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
agl030519-vplxk-worker-us-east-1d   0         0                             55m
agl030519-vplxk-worker-us-east-1e   0         0                             55m
agl030519-vplxk-worker-us-east-1f   0         0                             55m
----
+
When the new MachineSet is available, the `DESIRED` and `CURRENT` values match.
If the MachineSet is not available, wait a few minutes and run the command again.
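+
To avoid re-running the command by hand, you can watch the list until the counts converge; an illustrative variant of the same command:
+
----
$ oc get machineset -n openshift-machine-api -w
----
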
. After the new MachineSet is available, check the machine status:
. After the new MachineSet is available, check the status of the machine and the
node that it references:
+
----
$ oc get machine -n openshift-machine-api

status:
  addresses:
  - address: 10.0.133.18
    type: InternalIP
  - address: ""
    type: ExternalDNS
  - address: ip-10-0-133-18.ec2.internal
    type: InternalDNS
  lastUpdated: "2019-05-03T10:38:17Z"
  nodeRef:
    kind: Node
    name: ip-10-0-133-18.ec2.internal
    uid: 71fb8d75-6d8f-11e9-9ff3-0e3f103c7cd8
  providerStatus:
    apiVersion: awsproviderconfig.openshift.io/v1beta1
    conditions:
    - lastProbeTime: "2019-05-03T10:34:31Z"
      lastTransitionTime: "2019-05-03T10:34:31Z"
      message: machine successfully created
      reason: MachineCreationSucceeded
      status: "True"
      type: MachineCreation
    instanceId: i-09ca0701454124294
    instanceState: running
    kind: AWSMachineProviderStatus
----
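+
The table from `oc get machine` lists the machines; the YAML status shown above comes from inspecting a single machine, for example with the following illustrative command (the machine name is hypothetical):
+
----
$ oc get machine <machine_name> -n openshift-machine-api -o yaml
----
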
. View the new node:
+
----
$ oc get node
----
+
The new node is the one with the lowest `AGE`, for example `ip-10-0-128-138.us-west-1.compute.internal`.

. Confirm that the new node has the label that you specified:
. View the new node and confirm that the new node has the label that you
specified:
+
----
$ oc get node <node_name> --show-labels
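# Illustrative follow-up, not part of the original step: list only the nodes that carry the new label.
$ oc get nodes -l node-role.kubernetes.io/<label_name>
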
79 modules/machineset-yaml.adoc Normal file
@@ -0,0 +1,79 @@
// Module included in the following assemblies:
//
// * machine-management/creating-infrastructure-machinesets.adoc
// * machine-management/creating-machineset.adoc

[id="machineset-yaml_{context}"]
= Sample YAML for a MachineSet Custom Resource

This sample YAML defines a MachineSet that runs in the `us-east-1a`
Amazon Web Services (AWS) zone and creates nodes that are labeled with
`node-role.kubernetes.io/<role>: ""`.

In this sample, `<clusterID>` is the cluster ID that you set when you provisioned
the cluster and `<role>` is the node label to add.

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <clusterID>
  name: <clusterID>-<role>-us-east-1a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <clusterID>
      machine.openshift.io/cluster-api-machine-role: <role>
      machine.openshift.io/cluster-api-machine-type: <role>
      machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <clusterID>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <clusterID>-<role>-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          ami:
            id: <amiID>
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
          - ebs:
              iops: 0
              volumeSize: 120
              volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <clusterID>-worker-profile
          instanceType: m4.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - <clusterID>-worker-sg
          subnet:
            filters:
            - name: tag:Name
              values:
              - <clusterID>-private-us-east-1a
          tags:
          - name: kubernetes.io/cluster/<clusterID>
            value: owned
          userDataSecret:
            name: worker-user-data
----
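If you are not sure which `<clusterID>` value to use, it can usually be read from the cluster-scoped `Infrastructure` resource, and the finished file is then created as in the procedure above; both commands are shown for illustration and are not part of this module:

----
$ oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'
$ oc create -f <file_name>.yaml
----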