From c3d9e42a9c838798d5215cbc29f81819e628156c Mon Sep 17 00:00:00 2001 From: Jeana Routh Date: Tue, 6 Feb 2024 09:46:18 -0500 Subject: [PATCH] OSDOCS-9565: configuring AWS Outposts postinstallation --- _topic_maps/_topic_map.yml | 2 + .../installing-aws-outposts.adoc | 4 +- .../installing_aws/installing-aws-vpc.adoc | 1 + .../aws-outposts-environment-info-aws.adoc | 60 +++++ modules/aws-outposts-environment-info-oc.adoc | 66 ++++++ modules/aws-outposts-load-balancer-clb.adoc | 123 ++++++++++ modules/aws-outposts-machine-set.adoc | 219 ++++++++++++++++++ ...aws-outposts-requirements-limitations.adoc | 33 +++ modules/create-user-workloads-aws-edge.adoc | 135 +++++++++++ ...ation-cloudformation-subnet-localzone.adoc | 86 +++++-- ...llation-creating-aws-vpc-subnets-edge.adoc | 66 ++++-- .../nw-aws-load-balancer-with-outposts.adoc | 39 +++- modules/nw-cluster-mtu-change.adoc | 90 ++++--- .../configuring-aws-outposts.adoc | 84 +++++++ 14 files changed, 923 insertions(+), 85 deletions(-) create mode 100644 modules/aws-outposts-environment-info-aws.adoc create mode 100644 modules/aws-outposts-environment-info-oc.adoc create mode 100644 modules/aws-outposts-load-balancer-clb.adoc create mode 100644 modules/aws-outposts-machine-set.adoc create mode 100644 modules/aws-outposts-requirements-limitations.adoc create mode 100644 modules/create-user-workloads-aws-edge.adoc create mode 100644 post_installation_configuration/configuring-aws-outposts.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 0c695a6766..b97dbf4968 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -633,6 +633,8 @@ Topics: - Name: Adding failure domains to an existing Nutanix cluster File: adding-nutanix-failure-domains Distros: openshift-origin,openshift-enterprise +- Name: Extending an AWS VPC cluster into an AWS Outpost + File: configuring-aws-outposts --- Name: Updating clusters Dir: updating diff --git 
a/installing/installing_aws/installing-aws-outposts.adoc b/installing/installing_aws/installing-aws-outposts.adoc index 90f641c01b..6022ad9c21 100644 --- a/installing/installing_aws/installing-aws-outposts.adoc +++ b/installing/installing_aws/installing-aws-outposts.adoc @@ -8,4 +8,6 @@ toc::[] In {product-title} version 4.14, you could install a cluster on Amazon Web Services (AWS) with compute nodes running in AWS Outposts as a Technology Preview. As of {product-title} version 4.15, this installation method is no longer supported. -Instead, you can xref:../../installing/installing_aws/installing-aws-vpc.adoc#installing-aws-vpc[install a cluster on AWS into an existing VPC] and provision compute nodes on AWS Outposts as a postinstallation configuration task. \ No newline at end of file +Instead, you can xref:../../installing/installing_aws/installing-aws-vpc.adoc#installing-aws-vpc[install a cluster on AWS into an existing VPC] and provision compute nodes on AWS Outposts as a postinstallation configuration task. + +For more information, see xref:../../post_installation_configuration/configuring-aws-outposts.adoc#configuring-aws-outposts[Extending an AWS VPC cluster into an AWS Outpost] \ No newline at end of file diff --git a/installing/installing_aws/installing-aws-vpc.adoc b/installing/installing_aws/installing-aws-vpc.adoc index fbea0d3286..4f74e37948 100644 --- a/installing/installing_aws/installing-aws-vpc.adoc +++ b/installing/installing_aws/installing-aws-vpc.adoc @@ -121,3 +121,4 @@ include::modules/cluster-telemetry.adoc[leveloffset=+1] * xref:../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster]. * If necessary, you can xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting]. 
* If necessary, you can xref:../../post_installation_configuration/cluster-tasks.adoc#manually-removing-cloud-creds_post-install-cluster-tasks[remove cloud provider credentials]. +* After installing a cluster on AWS into an existing VPC, you can xref:../../post_installation_configuration/configuring-aws-outposts.adoc#configuring-aws-outposts[extend the AWS VPC cluster into an AWS Outpost]. \ No newline at end of file diff --git a/modules/aws-outposts-environment-info-aws.adoc b/modules/aws-outposts-environment-info-aws.adoc new file mode 100644 index 0000000000..e4e072013a --- /dev/null +++ b/modules/aws-outposts-environment-info-aws.adoc @@ -0,0 +1,60 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/configuring-aws-outposts.adoc + +:_mod-docs-content-type: PROCEDURE +[id="aws-outposts-environment-info-aws_{context}"] += Obtaining information from your AWS account + +You can use the AWS CLI (`aws`) to obtain information from your AWS account. + +[TIP] +==== +You might find it convenient to store some or all of these values as environment variables by using the `export` command. +==== + +.Prerequisites + +* You have an AWS Outposts site with the required hardware setup complete. + +* Your Outpost is connected to your AWS account. + +* You have access to your AWS account by using the AWS CLI (`aws`) as a user with permissions to perform the required tasks. + +.Procedure + +. List the Outposts that are connected to your AWS account by running the following command: ++ +[source,terminal] +---- +$ aws outposts list-outposts +---- + +. Retain the following values from the output of the `aws outposts list-outposts` command: + +** The Outpost ID. + +** The Amazon Resource Name (ARN) for the Outpost. + +** The Outpost availability zone. ++ +[NOTE] +==== +The output of the `aws outposts list-outposts` command includes two values related to the availability zone: `AvailabilityZone` and `AvailabilityZoneId`. 
You use the `AvailabilityZone` value to configure a compute machine set that creates compute machines in your Outpost. +==== + +. Using the value of the Outpost ID, show the instance types that are available in your Outpost by running the following command. Retain the values of the available instance types. ++ +[source,terminal] +---- +$ aws outposts get-outpost-instance-types \ + --outpost-id +---- + +. Using the value of the Outpost ARN, show the subnet ID for the Outpost by running the following command. Retain this value. ++ +[source,terminal] +---- +$ aws ec2 describe-subnets \ + --filters Name=outpost-arn,Values= +---- \ No newline at end of file diff --git a/modules/aws-outposts-environment-info-oc.adoc b/modules/aws-outposts-environment-info-oc.adoc new file mode 100644 index 0000000000..b9d7d21347 --- /dev/null +++ b/modules/aws-outposts-environment-info-oc.adoc @@ -0,0 +1,66 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/configuring-aws-outposts.adoc + +:_mod-docs-content-type: PROCEDURE +[id="aws-outposts-environment-info-oc_{context}"] += Obtaining information from your {product-title} cluster + +You can use the {oc-first} to obtain information from your {product-title} cluster. + +[TIP] +==== +You might find it convenient to store some or all of these values as environment variables by using the `export` command. +==== + +.Prerequisites + +* You have installed an {product-title} cluster into a custom VPC on AWS. + +* You have access to the cluster using an account with `cluster-admin` permissions. + +* You have installed the {oc-first}. + +.Procedure + +. List the infrastructure ID for the cluster by running the following command. Retain this value. ++ +[source,terminal] +---- +$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructures.config.openshift.io cluster +---- + +. 
Obtain details about the compute machine sets that the installation program created by running the following commands: + +.. List the compute machine sets on your cluster: ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io -n openshift-machine-api +---- ++ +.Example output +[source,text] +---- +NAME DESIRED CURRENT READY AVAILABLE AGE + 1 1 1 1 55m + 1 1 1 1 55m +---- + +.. Display the Amazon Machine Image (AMI) ID for one of the listed compute machine sets. Retain this value. ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io \ + -n openshift-machine-api \ + -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}' +---- + +.. Display the subnet ID for the AWS VPC cluster. Retain this value. ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io \ + -n openshift-machine-api \ + -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}' +---- \ No newline at end of file diff --git a/modules/aws-outposts-load-balancer-clb.adoc b/modules/aws-outposts-load-balancer-clb.adoc new file mode 100644 index 0000000000..e99ac49288 --- /dev/null +++ b/modules/aws-outposts-load-balancer-clb.adoc @@ -0,0 +1,123 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/configuring-aws-outposts.adoc + +:_mod-docs-content-type: PROCEDURE +[id="aws-outposts-load-balancer-clb_{context}"] += Using AWS Classic Load Balancers in an AWS VPC cluster extended into an Outpost + +AWS Outposts racks cannot run AWS Classic Load Balancers, but Classic Load Balancers in the AWS VPC cluster can target edge compute nodes in the Outpost if edge and cloud-based subnets are in the same availability zone. +As a result, Classic Load Balancers on the VPC cluster might schedule pods on either of these node types. + +Scheduling the workloads on edge compute nodes is supported, but can introduce latency. 
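Both information-gathering modules above recommend retaining values as environment variables with `export`. The following is a minimal, hypothetical sketch of that pattern: the JSON is an invented stand-in for `aws outposts list-outposts` output (the field names match the documented output), and `python3` is used only so the sketch runs without AWS credentials. On a real system you would capture the live command output instead.

```shell
# Hypothetical saved output of `aws outposts list-outposts`; all IDs are invented.
cat > /tmp/outposts.json <<'EOF'
{"Outposts": [{"OutpostId": "op-0123456789abcdef0",
               "OutpostArn": "arn:aws:outposts:us-east-1:111122223333:outpost/op-0123456789abcdef0",
               "AvailabilityZone": "us-east-1a"}]}
EOF

# Extract one field from the first Outpost entry; jq would work equally well.
json_field() {
  python3 -c "import json, sys; print(json.load(open('/tmp/outposts.json'))['Outposts'][0][sys.argv[1]])" "$1"
}

# Retain the values the procedure asks for as environment variables.
export OUTPOST_ID=$(json_field OutpostId)
export OUTPOST_ARN=$(json_field OutpostArn)
export OUTPOST_AZ=$(json_field AvailabilityZone)

echo "${OUTPOST_ID} ${OUTPOST_AZ}"
```

The exported variables can then be substituted into later commands, for example `--outpost-id "${OUTPOST_ID}"` or `Name=outpost-arn,Values="${OUTPOST_ARN}"`.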
+If you want to prevent a Classic Load Balancer in the VPC cluster from targeting Outpost edge compute nodes, you can apply labels to the cloud-based compute nodes and configure the Classic Load Balancer to only schedule on nodes with the applied labels. + +[NOTE] +==== +If you do not need to prevent a Classic Load Balancer in the VPC cluster from targeting Outpost edge compute nodes, you do not need to complete these steps. +==== + +.Prerequisites + +* You have extended an AWS VPC cluster into an Outpost. + +* You have access to the cluster using an account with `cluster-admin` permissions. + +* You have installed the {oc-first}. + +* You have created a user workload in the Outpost with tolerations that match the taints for your edge compute machines. + +.Procedure + +. Optional: Verify that the edge compute nodes have the `location=outposts` label by running the following command and verifying that the output includes only the edge compute nodes in your Outpost: ++ +[source,terminal] +---- +$ oc get nodes -l location=outposts +---- + +. Label the cloud-based compute nodes in the VPC cluster with a key-value pair by running the following command: ++ +[source,terminal] +---- +$ for NODE in $(oc get node -l node-role.kubernetes.io/worker --no-headers | grep -v outposts | awk '{print$1}'); do oc label node $NODE =; done +---- ++ +where `=` is the label you want to use to distinguish cloud-based compute nodes. ++ +.Example output +[source,text] +---- +node1.example.com labeled +node2.example.com labeled +node3.example.com labeled +---- + +. 
Optional: Verify that the cloud-based compute nodes have the specified label by running the following command and confirming that the output includes all cloud-based compute nodes in your VPC cluster: ++ +[source,terminal] +---- +$ oc get nodes -l = +---- ++ +.Example output +[source,terminal] +---- +NAME STATUS ROLES AGE VERSION +node1.example.com Ready worker 7h v1.28.5 +node2.example.com Ready worker 7h v1.28.5 +node3.example.com Ready worker 7h v1.28.5 +---- + +. Configure the Classic Load Balancer service by adding the cloud-based subnet information to the `annotations` field of the `Service` manifest: ++ +.Example service configuration +[source,yaml] +---- +apiVersion: v1 +kind: Service +metadata: + labels: + app: + name: + namespace: + annotations: + service.beta.kubernetes.io/aws-load-balancer-subnets: # <1> + service.beta.kubernetes.io/aws-load-balancer-target-node-labels: = # <2> +spec: + ports: + - name: http + port: 80 + protocol: TCP + targetPort: 8080 + selector: + app: + type: LoadBalancer +---- +<1> Specify the subnet ID for the AWS VPC cluster. +<2> Specify the key-value pair that matches the pair in the node label. + +. Create the `Service` CR by running the following command: ++ +[source,terminal] +---- +$ oc create -f .yaml +---- + +.Verification + +. Verify the status of the `service` resource to show the host of the provisioned Classic Load Balancer by running the following command: ++ +[source,terminal] +---- +$ HOST=$(oc get service -n --template='{{(index .status.loadBalancer.ingress 0).hostname}}') +---- + +. Verify the status of the provisioned Classic Load Balancer host by running the following command: ++ +[source,terminal] +---- +$ curl $HOST +---- + +. In the AWS console, verify that only the labeled instances appear as the targeted instances for the load balancer. 
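The node-labeling loop in the procedure above can be previewed without cluster access. In this hypothetical sketch, the node list simulates `oc get node -l node-role.kubernetes.io/worker --no-headers` output (node names are invented), `zone-group=cloud` is an invented key-value pair, and the `oc label` commands are echoed rather than executed:

```shell
# Simulated worker-node listing; the "-outposts" node stands in for an edge
# compute node that the grep filter must exclude.
NODES='node1.example.com           Ready   worker   7h   v1.28.5
node2-outposts.example.com  Ready   worker   2h   v1.28.5
node3.example.com           Ready   worker   7h   v1.28.5'

# Same shape as the documented loop: drop edge nodes, keep the name column,
# then label each remaining cloud-based node (echoed here, not run).
for NODE in $(echo "$NODES" | grep -v outposts | awk '{print $1}'); do
  echo "oc label node $NODE zone-group=cloud"
done
```

With the label applied, the matching Classic Load Balancer annotation would be `service.beta.kubernetes.io/aws-load-balancer-target-node-labels: zone-group=cloud`.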
\ No newline at end of file diff --git a/modules/aws-outposts-machine-set.adoc b/modules/aws-outposts-machine-set.adoc new file mode 100644 index 0000000000..dfa7dfca0a --- /dev/null +++ b/modules/aws-outposts-machine-set.adoc @@ -0,0 +1,219 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/configuring-aws-outposts.adoc + +:_mod-docs-content-type: PROCEDURE +[id="aws-outposts-machine-set_{context}"] += Creating a compute machine set that deploys edge compute machines on an Outpost + +To create edge compute machines on AWS Outposts, you must create a new compute machine set with a compatible configuration. + +.Prerequisites + +* You have an AWS Outposts site. + +* You have installed an {product-title} cluster into a custom VPC on AWS. + +* You have access to the cluster using an account with `cluster-admin` permissions. + +* You have installed the {oc-first}. + +.Procedure + +. List the compute machine sets in your cluster by running the following command: ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io -n openshift-machine-api +---- ++ +.Example output +[source,text] +---- +NAME DESIRED CURRENT READY AVAILABLE AGE + 1 1 1 1 55m + 1 1 1 1 55m +---- + +. Record the names of the existing compute machine sets. + +. Create a YAML file that contains the values for a new compute machine set custom resource (CR) by using one of the following methods: + +** Copy an existing compute machine set configuration into a new file by running the following command: ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io \ + -n openshift-machine-api -o yaml > .yaml +---- ++ +You can edit this YAML file with your preferred text editor. + +** Create an empty YAML file named `.yaml` with your preferred text editor and include the required values for your new compute machine set. 
++ +If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command: ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io \ + -n openshift-machine-api -o yaml +---- ++ +-- +.Example output +[source,yaml] +---- +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: + labels: + machine.openshift.io/cluster-api-cluster: # <1> + name: -- # <2> + namespace: openshift-machine-api +spec: + replicas: 1 + selector: + matchLabels: + machine.openshift.io/cluster-api-cluster: + machine.openshift.io/cluster-api-machineset: -- + template: + metadata: + labels: + machine.openshift.io/cluster-api-cluster: + machine.openshift.io/cluster-api-machine-role: + machine.openshift.io/cluster-api-machine-type: + machine.openshift.io/cluster-api-machineset: -- + spec: + providerSpec: # <3> +# ... +---- +<1> The cluster infrastructure ID. +<2> A default node label. For AWS Outposts, you use the `outposts` role. +<3> The omitted `providerSpec` section includes values that must be configured for your Outpost. +-- + +. 
Configure the new compute machine set to create edge compute machines in the Outpost by editing the `.yaml` file: ++ +-- +.Example compute machine set for AWS Outposts +[source,yaml] +---- +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: + labels: + machine.openshift.io/cluster-api-cluster: # <1> + name: -outposts- # <2> + namespace: openshift-machine-api +spec: + replicas: 1 + selector: + matchLabels: + machine.openshift.io/cluster-api-cluster: + machine.openshift.io/cluster-api-machineset: -outposts- + template: + metadata: + labels: + machine.openshift.io/cluster-api-cluster: + machine.openshift.io/cluster-api-machine-role: outposts + machine.openshift.io/cluster-api-machine-type: outposts + machine.openshift.io/cluster-api-machineset: -outposts- + spec: + metadata: + labels: + node-role.kubernetes.io/outposts: "" + location: outposts + providerSpec: + value: + ami: + id: # <3> + apiVersion: machine.openshift.io/v1beta1 + blockDevices: + - ebs: + volumeSize: 120 + volumeType: gp2 # <4> + credentialsSecret: + name: aws-cloud-credentials + deviceIndex: 0 + iamInstanceProfile: + id: -worker-profile + instanceType: m5.xlarge # <5> + kind: AWSMachineProviderConfig + placement: + availabilityZone: + region: # <6> + securityGroups: + - filters: + - name: tag:Name + values: + - -worker-sg + subnet: + id: # <7> + tags: + - name: kubernetes.io/cluster/ + value: owned + userDataSecret: + name: worker-user-data + taints: # <8> + - key: node-role.kubernetes.io/outposts + effect: NoSchedule +---- +<1> Specifies the cluster infrastructure ID. +<2> Specifies the name of the compute machine set. The name is composed of the cluster infrastructure ID, the `outposts` role name, and the Outpost availability zone. +<3> Specifies the Amazon Machine Image (AMI) ID. +<4> Specifies the EBS volume type. AWS Outposts requires gp2 volumes. +<5> Specifies the AWS instance type. You must use an instance type that is configured in your Outpost. 
+<6> Specifies the AWS region in which the Outpost availability zone exists. +<7> Specifies the dedicated subnet for your Outpost. +<8> Specifies a taint to prevent user workloads from being scheduled on nodes that have the `node-role.kubernetes.io/outposts` label. +-- + +. Save your changes. + +. Create a compute machine set CR by running the following command: ++ +[source,terminal] +---- +$ oc create -f .yaml +---- + +.Verification + +* To verify that the compute machine set is created, list the compute machine sets in your cluster by running the following command: ++ +[source,terminal] +---- +$ oc get machinesets.machine.openshift.io -n openshift-machine-api +---- ++ +.Example output +[source,text] +---- +NAME DESIRED CURRENT READY AVAILABLE AGE + 1 1 1 1 4m12s + 1 1 1 1 55m + 1 1 1 1 55m +---- + +* To list the machines that are managed by the new compute machine set, run the following command: ++ +[source,terminal] +---- +$ oc get -n openshift-machine-api machines.machine.openshift.io \ + -l machine.openshift.io/cluster-api-machineset= +---- ++ +.Example output +[source,text] +---- +NAME PHASE TYPE REGION ZONE AGE + Provisioned m5.xlarge us-east-1 us-east-1a 25s + Provisioning m5.xlarge us-east-1 us-east-1a 25s +---- + +* To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: ++ +[source,terminal] +---- +$ oc describe machine -n openshift-machine-api +---- \ No newline at end of file diff --git a/modules/aws-outposts-requirements-limitations.adoc b/modules/aws-outposts-requirements-limitations.adoc new file mode 100644 index 0000000000..dfc222f01e --- /dev/null +++ b/modules/aws-outposts-requirements-limitations.adoc @@ -0,0 +1,33 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/configuring-aws-outposts.adoc + +:_mod-docs-content-type: REFERENCE 
+[id="aws-outposts-requirements-limitations_{context}"] += AWS Outposts on {product-title} requirements and limitations + +You can manage the resources on your Outpost similarly to those on a cloud-based AWS cluster if you configure your {product-title} cluster to accommodate the following requirements and limitations: + +* To extend an {product-title} cluster on AWS into an Outpost, you must have installed the cluster into an existing VPC. + +* {product-title} clusters on AWS include the `gp3-csi` and `gp2-csi` storage classes. +These classes correspond to Amazon Elastic Block Store (EBS) gp3 and gp2 volumes. +{product-title} clusters use the `gp3-csi` storage class by default, but AWS Outposts does not support EBS gp3 volumes. + +* An Outpost is an extension of an availability zone associated with an AWS region and has a dedicated subnet. +Edge compute machines deployed into an Outpost must use the Outpost availability zone and subnet. + +* AWS Outposts does not support AWS Network Load Balancers or Classic Load Balancers. +To manage Ingress objects for your edge compute resources, you must install the AWS Load Balancer Operator so that you can use AWS Application Load Balancers in the AWS Outposts environment. +If your cluster contains both edge and cloud-based compute instances that share workloads, additional configuration is required. + +* To create a volume in the Outpost, the CSI driver requires the Outpost Amazon Resource Name (ARN). +The driver uses the topology keys stored on the `CSINode` objects to determine the Outpost ARN. +To ensure that the driver uses the correct topology values, you must set the volume binding mode to `WaitForFirstConsumer` and avoid setting allowed topologies on any new storage classes that you create. + +* When you extend an AWS VPC cluster into an Outpost, you have two types of compute resources. +The Outpost has edge compute nodes, while the VPC has cloud-based compute nodes. 
+Cloud-based AWS Elastic Block Store (EBS) volumes cannot attach to Outpost edge compute nodes, and Outpost volumes cannot attach to cloud-based compute nodes. ++ +As a result, you cannot use CSI snapshots to migrate applications that use persistent storage from cloud-based compute nodes to edge compute nodes, nor can you use the original persistent volume directly. +To migrate persistent storage data for applications, you must perform a manual backup and restore operation. \ No newline at end of file diff --git a/modules/create-user-workloads-aws-edge.adoc b/modules/create-user-workloads-aws-edge.adoc new file mode 100644 index 0000000000..b41397a77c --- /dev/null +++ b/modules/create-user-workloads-aws-edge.adoc @@ -0,0 +1,135 @@ +// Module included in the following assemblies: +// +// * post_installation_configuration/configuring-aws-outposts.adoc + +//To-do: reintegrate installation-extend-edge-nodes-aws-local-zones.adoc with create-user-workloads-aws-edge.adoc. Requires global repo update of any xrefs/includes. + +:_mod-docs-content-type: PROCEDURE +[id="create-user-workloads-aws-edge_{context}"] += Creating user workloads in an Outpost + +After you extend an {product-title} AWS VPC cluster into an Outpost, you can use edge compute nodes with the label `node-role.kubernetes.io/outposts` to create user workloads in the Outpost. + +.Prerequisites + +* You have extended an AWS VPC cluster into an Outpost. + +* You have access to the cluster using an account with `cluster-admin` permissions. + +* You have installed the {oc-first}. + +* You have created a compute machine set that deploys edge compute machines compatible with the Outpost environment. + +.Procedure + +. Configure a `Deployment` resource file for an application that you want to deploy to the edge compute node in the edge subnet. 
++ +.Example `Deployment` manifest +[source,yaml] +---- +kind: Namespace +apiVersion: v1 +metadata: + name: # <1> +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: + namespace: # <2> +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + storageClassName: gp2-csi # <3> + volumeMode: Filesystem +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: + namespace: +spec: + selector: + matchLabels: + app: + replicas: 1 + template: + metadata: + labels: + app: + location: outposts # <4> + spec: + securityContext: + seccompProfile: + type: RuntimeDefault + nodeSelector: # <5> + node-role.kubernetes.io/outposts: '' + tolerations: # <6> + - key: "node-role.kubernetes.io/outposts" + operator: "Equal" + value: "" + effect: "NoSchedule" + containers: + - image: openshift/origin-node + command: + - "/bin/socat" + args: + - TCP4-LISTEN:8080,reuseaddr,fork + - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"' + imagePullPolicy: Always + name: + ports: + - containerPort: 8080 + volumeMounts: + - mountPath: "/mnt/storage" + name: data + volumes: + - name: data + persistentVolumeClaim: + claimName: +---- +<1> Specify a name for your application. +<2> Specify a namespace for your application. The application namespace can be the same as the application name. +<3> Specify the storage class name. For an edge compute configuration, you must use the `gp2-csi` storage class. +<4> Specify a label to identify workloads deployed in the Outpost. +<5> Specify the node selector label that targets edge compute nodes. +<6> Specify tolerations that match the `key` and `effect` taints in the compute machine set for your edge compute machines. Set the `value` and `operator` tolerations as shown. + +. Create the `Deployment` resource by running the following command: ++ +[source,terminal] +---- +$ oc create -f .yaml +---- + +. 
Configure a `Service` object that exposes a pod from a targeted edge compute node to services that run inside your edge network. ++ +.Example `Service` manifest +[source,yaml] +---- +apiVersion: v1 +kind: Service # <1> +metadata: + name: + namespace: +spec: + ports: + - port: 80 + targetPort: 8080 + protocol: TCP + type: NodePort + selector: # <2> + app: +---- +<1> Defines the `service` resource. +<2> Specify the label type to apply to managed pods. + +. Create the `Service` CR by running the following command: ++ +[source,terminal] +---- +$ oc create -f .yaml +---- \ No newline at end of file diff --git a/modules/installation-cloudformation-subnet-localzone.adoc b/modules/installation-cloudformation-subnet-localzone.adoc index 0855839f20..a48dab92ae 100644 --- a/modules/installation-cloudformation-subnet-localzone.adoc +++ b/modules/installation-cloudformation-subnet-localzone.adoc @@ -1,14 +1,20 @@ // Module included in the following assemblies: // -// * installing/installing-aws-localzone.adoc (Installing a cluster on AWS with worker nodes on AWS Local Zones) -// * installing/installing-aws-wavelength-zone.adoc (Installing a cluster on AWS with compute nodes on AWS Wavelength Zones) +// * installing/installing-aws-localzone.adoc (Installing a cluster on AWS with worker nodes on AWS Local Zones) +// * installing/installing-aws-wavelength-zone.adoc (Installing a cluster on AWS with compute nodes on AWS Wavelength Zones) // * post_installation_configuration/aws-compute-edge-zone-tasks.adoc (AWS zone tasks) +// * post_installation_configuration/configuring-aws-outposts.adoc + +ifeval::["{context}" == "configuring-aws-outposts"] +:outposts: +endif::[] :_mod-docs-content-type: REFERENCE [id="installation-cloudformation-subnet-localzone_{context}"] -= CloudFormation template for the VPC Subnet += CloudFormation template for the VPC subnet -You can use the following CloudFormation template to deploy the private and public subnets in a zone on {zone-type} infrastructure. 
+ifndef::outposts[You can use the following CloudFormation template to deploy the private and public subnets in a zone on {zone-type} infrastructure.] +ifdef::outposts[You can use the following CloudFormation template to deploy the Outpost subnet.] .CloudFormation template for VPC subnets [%collapsible] @@ -20,43 +26,59 @@ Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: - Description: VPC ID that comprises all the target subnets + Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})$ ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: - Description: ClusterName or PrefixName prepends to the Name tag for each subnet + Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" - ConstraintDescription: ClusterName parameter must be specified + ConstraintDescription: ClusterName parameter must be specified. ZoneName: - Description: ZoneName that will be used to create the subnets, such as us-west-2-lax-1a + Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" - ConstraintDescription: ZoneName parameter must be specified + ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: - Description: The PublicRouteTableID that associates with the public subnet + Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" - ConstraintDescription: PublicRouteTableId parameter must be specified + ConstraintDescription: PublicRouteTableId parameter must be specified. 
PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ - ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24 + ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 - Description: CIDR block for public subnet + Description: CIDR block for public subnet. Type: String - PrivateRouteTableId: - Description: PublicRouteTableID that associates to the {zone-type} subnet + Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" - ConstraintDescription: PublicRouteTableId parameter must be specified + ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ - ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24 + ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 - Description: CIDR block for the public subnet + Description: CIDR block for private subnet. Type: String +ifdef::outposts[] + PrivateSubnetLabel: + Default: "private" + Description: Subnet label to be added when building the subnet name. + Type: String + PublicSubnetLabel: + Default: "public" + Description: Subnet label to be added when building the subnet name. + Type: String + OutpostArn: + Default: "" + Description: OutpostArn when creating subnets on AWS Outpost. 
+ Type: String + +Conditions: + OutpostEnabled: !Not [!Equals [!Ref "OutpostArn", ""]] +endif::outposts[] Resources: PublicSubnet: @@ -65,9 +87,19 @@ Resources: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName +ifdef::outposts[] + OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref "AWS::NoValue"] +endif::outposts[] Tags: - Key: Name +ifndef::outposts[] Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] +endif::outposts[] +ifdef::outposts[] + Value: !Join ['-', [ !Ref ClusterName, !Ref PublicSubnetLabel, !Ref ZoneName]] + - Key: kubernetes.io/cluster/unmanaged + Value: true +endif::outposts[] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" @@ -81,9 +113,19 @@ Resources: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName +ifdef::outposts[] + OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref "AWS::NoValue"] +endif::outposts[] Tags: - Key: Name +ifndef::outposts[] Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] +endif::outposts[] +ifdef::outposts[] + Value: !Join ['-', [!Ref ClusterName, !Ref PrivateSubnetLabel, !Ref ZoneName]] + - Key: kubernetes.io/cluster/unmanaged + Value: true +endif::outposts[] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" @@ -93,13 +135,17 @@ Resources: Outputs: PublicSubnetId: - Description: Subnet ID for the public subnets + Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: - Description: Subnet ID for the private subnets + Description: Subnet ID of the private subnets. 
Value: !Join ["", [!Ref PrivateSubnet]] ---- ==== + +ifeval::["{context}" == "configuring-aws-outposts"] +:!outposts: +endif::[] \ No newline at end of file diff --git a/modules/installation-creating-aws-vpc-subnets-edge.adoc b/modules/installation-creating-aws-vpc-subnets-edge.adoc index 8572d760ba..1761136915 100644 --- a/modules/installation-creating-aws-vpc-subnets-edge.adoc +++ b/modules/installation-creating-aws-vpc-subnets-edge.adoc @@ -1,12 +1,17 @@ // Module included in the following assemblies: // -// * post_installation_configuration//aws-compute-edge-zone-tasks..adoc (AWS Local Zone or Wavelength Zone tasks) +// * post_i// * post_installation_configuration/configuring-aws-outposts.adoc + +ifeval::["{context}" == "configuring-aws-outposts"] +:outposts: +endif::[] :_mod-docs-content-type: PROCEDURE [id="installation-creating-aws-vpc-subnets-edge_{context}"] -= Creating subnets in AWS Local Zones or Wavelength Zones += Creating subnets for AWS edge compute services -Before you configure a machine set for edge compute nodes in your {product-title} cluster, you must create the subnets in {zone-type}. Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to. +Before you configure a machine set for edge compute nodes in your {product-title} cluster, you must create a subnet in {zone-type}. +ifndef::outposts[Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to.] You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. @@ -19,7 +24,8 @@ If you do not use the provided CloudFormation template to create your AWS infras * You configured an AWS account. * You added your AWS keys and region to your local AWS profile by running `aws configure`. -* You opted in to the {zone-type} group. +ifndef::outposts[* You opted in to the {zone-type} group.] 
+ifdef::outposts[* You have obtained the required information about your environment from your {product-title} cluster, Outpost, and AWS account.] .Procedure @@ -29,32 +35,40 @@ If you do not use the provided CloudFormation template to create your AWS infras + [source,terminal] ---- -$ aws cloudformation create-stack --stack-name \ <1> +$ aws cloudformation create-stack --stack-name \// <1> --region ${CLUSTER_REGION} \ - --template-body file://