
Merge pull request #82314 from openshift-cherrypick-robot/cherry-pick-82143-to-enterprise-4.17

[enterprise-4.17] OSDOCS#11004: Part 2: Migration: Manage AWS on HCP
This commit is contained in:
Servesha Dudhgaonkar
2024-09-24 12:16:14 +05:30
committed by GitHub
13 changed files with 777 additions and 2 deletions

View File

@@ -1,7 +1,34 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-manage-aws"]
include::_attributes/common-attributes.adoc[]
= Managing {hcp} on {aws-short}
:context: hcp-managing-aws
toc::[]
When you use {hcp} for {product-title} on {aws-first}, the infrastructure requirements vary based on your setup.
include::modules/hcp-manage-aws-prereq.adoc[leveloffset=+1]
include::modules/hcp-manage-aws-infra-req.adoc[leveloffset=+2]
include::modules/hcp-manage-aws-infra-ho-req.adoc[leveloffset=+2]
include::modules/hcp-unmanaged-aws-hc-prereq.adoc[leveloffset=+2]
include::modules/hcp-managed-aws-infra-mgmt.adoc[leveloffset=+2]
include::modules/hcp-managed-aws-infra-hc.adoc[leveloffset=+2]
include::modules/hcp-k8s-managed-aws-infra-hc.adoc[leveloffset=+2]
include::modules/hcp-managed-aws-iam.adoc[leveloffset=+1]
include::modules/hcp-managed-aws-infra-iam-separate.adoc[leveloffset=+1]
include::modules/hcp-managed-aws-infra-separate.adoc[leveloffset=+2]
include::modules/hcp-managed-aws-iam-separate.adoc[leveloffset=+2]
include::modules/hcp-managed-aws-hc-separate.adoc[leveloffset=+2]

View File

@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-k8s-managed-aws-infra-hc_{context}"]
= Kubernetes-managed infrastructure in a hosted cluster {aws-short} account
When Kubernetes manages your infrastructure in a hosted cluster {aws-first} account, the infrastructure requirements are as follows:
* A network load balancer for the default Ingress
* An S3 bucket for the registry

View File

@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-manage-aws-infra-ho-req_{context}"]
= Unmanaged infrastructure for the HyperShift Operator in an {aws-short} account
The arbitrary {aws-first} account that is used depends on the provider of the {hcp} service.
In self-managed {hcp}, the cluster service provider controls the {aws-short} account. The cluster service provider is the administrator who hosts cluster control planes and is responsible for uptime. In managed {hcp}, the {aws-short} account belongs to Red Hat.
For prerequired and unmanaged infrastructure for the HyperShift Operator, the following infrastructure requirements apply to a management cluster {aws-short} account:
* One S3 bucket
** Hosts OpenID Connect (OIDC) documents
* Route 53 hosted zones
** A domain to host private and public entries for hosted clusters
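As an illustration of the S3 bucket requirement, the following is a minimal sketch that provisions an OIDC bucket with the `aws` CLI; the bucket name and region are placeholders, not values from this procedure:

[source,terminal]
----
$ aws s3api create-bucket --bucket <oidc_bucket_name> \
    --create-bucket-configuration LocationConstraint=<region> \
    --region <region>
----

If the region is `us-east-1`, omit the `--create-bucket-configuration` flag, because that region does not accept a location constraint.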

View File

@@ -0,0 +1,17 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-manage-aws-infra-req_{context}"]
= Infrastructure requirements for {aws-short}
When you use {hcp} on {aws-first}, the infrastructure requirements fit in the following categories:
* Prerequired and unmanaged infrastructure for the HyperShift Operator in an arbitrary {aws-short} account
* Prerequired and unmanaged infrastructure in a hosted cluster {aws-short} account
* {hcp-capital}-managed infrastructure in a management {aws-short} account
* {hcp-capital}-managed infrastructure in a hosted cluster {aws-short} account
* Kubernetes-managed infrastructure in a hosted cluster {aws-short} account
Prerequired means that {hcp} requires {aws-short} infrastructure to work properly. Unmanaged means that no Operator or controller creates the infrastructure for you.

View File

@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-manage-aws-prereq_{context}"]
= Prerequisites to manage {aws-short} infrastructure and IAM permissions
To configure {hcp} for {product-title} on {aws-first}, you must meet the following infrastructure prerequisites:
* You configured {hcp} before creating hosted clusters.
* You created an {aws-short} Identity and Access Management (IAM) role and {aws-short} Security Token Service (STS) credentials.
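For example, one way to generate an {aws-short} STS credentials file is with the `aws sts get-session-token` command; the output file name is arbitrary:

[source,terminal]
----
$ aws sts get-session-token --output json > sts-creds.json
----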

View File

@@ -0,0 +1,41 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-managed-aws-hc-separate_{context}"]
= Creating a hosted cluster separately
You can create a hosted cluster separately on {aws-first}.
To create a hosted cluster separately, enter the following command:
[source,terminal]
[subs="+quotes"]
----
$ hcp create cluster aws \
    --infra-id <infra_id> \// <1>
    --name <hosted_cluster_name> \// <2>
    --sts-creds <path_to_sts_credential_file> \// <3>
    --pull-secret <path_to_pull_secret> \// <4>
    --generate-ssh \// <5>
    --node-pool-replicas 3 \
    --role-arn <role_name> <6>
----
<1> Replace `<infra_id>` with the same ID that you specified in the `create infra aws` command. This value identifies the IAM resources that are associated with the hosted cluster.
<2> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
<3> Replace `<path_to_sts_credential_file>` with the same name that you specified in the `create infra aws` command.
<4> Replace `<path_to_pull_secret>` with the name of the file that contains a valid {ocp-short} pull secret.
<5> The `--generate-ssh` flag is optional, but it is useful to include in case you need to SSH to your workers. An SSH key is generated for you and is stored as a secret in the same namespace as the hosted cluster.
<6> Replace `<role_name>` with the Amazon Resource Name (ARN), for example, `arn:aws:iam::820196288204:role/myrole`. For more information about ARN roles, see "Identity and Access Management (IAM) permissions".
You can also add the `--render` flag to the command and redirect output to a file where you can edit the resources before you apply them to the cluster.
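For instance, a minimal sketch of the render-then-apply flow, assuming that `--render` prints the resources to stdout as the preceding paragraph implies; the output file name is arbitrary:

[source,terminal]
----
$ hcp create cluster aws \
    --infra-id <infra_id> \
    --name <hosted_cluster_name> \
    --sts-creds <path_to_sts_credential_file> \
    --pull-secret <path_to_pull_secret> \
    --node-pool-replicas 3 \
    --role-arn <role_name> \
    --render > hosted-cluster.yaml
$ oc apply -f hosted-cluster.yaml
----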
After you run the command, the following resources are applied to your cluster:
* A namespace
* A secret with your pull secret
* A `HostedCluster`
* A `NodePool`
* Three {aws-short} STS secrets for control plane components
* One SSH key secret, if you specified the `--generate-ssh` flag
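To verify that the resources exist, you can list them; this sketch assumes the default `clusters` namespace, so adjust the namespace if yours differs:

[source,terminal]
----
$ oc get hostedcluster,nodepool -n clusters
----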

View File

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id=" hcp-managed-aws-iam-separate_{context}"]
= Creating the {aws-short} IAM resources
In {aws-first}, you must create the following IAM resources:
* link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html[An OpenID Connect (OIDC) identity provider in IAM], which is required to enable STS authentication.
* link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html[Seven roles], one for each component that interacts with the provider, such as the Kubernetes controller manager, cluster API provider, and registry
* The link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html[instance profile], which is the profile that is assigned to all worker instances of the cluster
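A minimal sketch of creating these resources with the CLI follows. The `hcp create iam aws` subcommand and its flag names are assumptions based on HyperShift CLI conventions, and all values are placeholders:

[source,terminal]
----
$ hcp create iam aws \
    --infra-id <infra_id> \
    --oidc-storage-provider-s3-bucket-name <bucket_name> \
    --oidc-storage-provider-s3-region <region> \
    --region <region>
----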

View File

@@ -0,0 +1,353 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-managed-aws-iam_{context}"]
= Identity and Access Management (IAM) permissions
In the context of {hcp}, the consumer is responsible for creating the Amazon Resource Name (ARN) roles. The _consumer_ is an automated process that generates the permissions files. The consumer might be the CLI or {cluster-manager}. {hcp-capital} can enable granularity to honor the principle of least privilege, which means that every component uses its own role to operate or create {aws-first} objects, and the roles are limited to what is required for the product to function normally.
The hosted cluster receives the ARN roles as input, and the consumer creates an {aws-short} permission configuration for each component. As a result, the component can authenticate through STS and a preconfigured OIDC identity provider (IDP).
The following roles are consumed by some of the {hcp} components that run on the control plane and operate on the data plane:
* `controlPlaneOperatorARN`
* `imageRegistryARN`
* `ingressARN`
* `kubeCloudControllerARN`
* `nodePoolManagementARN`
* `storageARN`
* `networkARN`
The following example shows a reference to the IAM roles from the hosted cluster:
[source,yaml]
----
...
endpointAccess: Public
region: us-east-2
resourceTags:
- key: kubernetes.io/cluster/example-cluster-bz4j5
  value: owned
rolesRef:
  controlPlaneOperatorARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-control-plane-operator
  imageRegistryARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-image-registry
  ingressARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-ingress
  kubeCloudControllerARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-cloud-controller
  networkARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-cloud-network-config-controller
  nodePoolManagementARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-node-pool
  storageARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-aws-ebs-csi-driver-controller
type: AWS
...
----
The roles that {hcp} uses are shown in the following examples:
* `ingressARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DescribeLoadBalancers",
        "tag:GetResources",
        "route53:ListHostedZones"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::PUBLIC_ZONE_ID",
        "arn:aws:route53:::PRIVATE_ZONE_ID"
      ]
    }
  ]
}
----
* `imageRegistryARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:PutBucketTagging",
        "s3:GetBucketTagging",
        "s3:PutBucketPublicAccessBlock",
        "s3:GetBucketPublicAccessBlock",
        "s3:PutEncryptionConfiguration",
        "s3:GetEncryptionConfiguration",
        "s3:PutLifecycleConfiguration",
        "s3:GetLifecycleConfiguration",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucketMultipartUploads",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "*"
    }
  ]
}
----
* `storageARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteSnapshot",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DescribeVolumesModifications",
        "ec2:DetachVolume",
        "ec2:ModifyVolume"
      ],
      "Resource": "*"
    }
  ]
}
----
* `networkARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstanceTypes",
        "ec2:UnassignPrivateIpAddresses",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignIpv6Addresses",
        "ec2:AssignIpv6Addresses",
        "ec2:DescribeSubnets",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    }
  ]
}
----
* `kubeCloudControllerARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeImages",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}
----
* `nodePoolManagementARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:AllocateAddress",
        "ec2:AssociateRouteTable",
        "ec2:AttachInternetGateway",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateInternetGateway",
        "ec2:CreateNatGateway",
        "ec2:CreateRoute",
        "ec2:CreateRouteTable",
        "ec2:CreateSecurityGroup",
        "ec2:CreateSubnet",
        "ec2:CreateTags",
        "ec2:DeleteInternetGateway",
        "ec2:DeleteNatGateway",
        "ec2:DeleteRouteTable",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteSubnet",
        "ec2:DeleteTags",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeAddresses",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeNatGateways",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeNetworkInterfaceAttribute",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVolumes",
        "ec2:DetachInternetGateway",
        "ec2:DisassociateRouteTable",
        "ec2:DisassociateAddress",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyNetworkInterfaceAttribute",
        "ec2:ModifySubnetAttribute",
        "ec2:ReleaseAddress",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "tag:GetResources",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateLaunchTemplateVersion",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DeleteLaunchTemplate",
        "ec2:DeleteLaunchTemplateVersions"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Condition": {
        "StringLike": {
          "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
        }
      },
      "Action": [
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": [
        "arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "iam:PassRole"
      ],
      "Resource": [
        "arn:*:iam::*:role/*-worker-role"
      ],
      "Effect": "Allow"
    }
  ]
}
----
* `controlPlaneOperatorARN`
+
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVpcEndpoint",
        "ec2:DescribeVpcEndpoints",
        "ec2:ModifyVpcEndpoint",
        "ec2:DeleteVpcEndpoints",
        "ec2:CreateTags",
        "route53:ListHostedZones"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::%s"
    }
  ]
}
----

View File

@@ -0,0 +1,23 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-managed-aws-infra-hc_{context}"]
= Infrastructure requirements for a hosted cluster {aws-short} account
When your infrastructure is managed by {hcp} in a hosted cluster {aws-first} account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.
For accounts with public clusters, the infrastructure requirements are as follows:
* Node pools must have EC2 instances that have `Role` and `RolePolicy` defined.
For accounts with private clusters, the infrastructure requirements are as follows:
* One private link endpoint for each availability zone
* EC2 instances for node pools
For accounts with public and private clusters, the infrastructure requirements are as follows:
* One private link endpoint for each availability zone
* EC2 instances for node pools

View File

@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-managed-aws-infra-iam-separate_{context}"]
= Creating {aws-short} infrastructure and IAM resources separately
By default, the `hcp create cluster aws` command creates the cloud infrastructure along with the hosted cluster and applies it. You can create the cloud infrastructure portion separately so that you use the `hcp create cluster aws` command only to create the cluster, or so that you can render the cluster and modify it before you apply it.
To create the cloud infrastructure portion separately, you need to create the {aws-first} infrastructure, create the {aws-short} Identity and Access Management (IAM) resources, and create the cluster.
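At a high level, the separate flow uses three commands, which the following modules describe in detail. This outline is a sketch; the elided flags vary by environment:

[source,terminal]
----
$ hcp create infra aws --infra-id <infra_id> ...   # creates the VPC and related infrastructure
$ hcp create iam aws --infra-id <infra_id> ...     # creates the OIDC provider, IAM roles, and instance profile
$ hcp create cluster aws --infra-id <infra_id> ... # creates the hosted cluster that consumes them
----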

View File

@@ -0,0 +1,32 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-managed-aws-infra-mgmt_{context}"]
= Infrastructure requirements for a management {aws-short} account
When your infrastructure is managed by {hcp} in a management {aws-short} account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination.
For accounts with public clusters, the infrastructure requirements are as follows:
* Network load balancer: a load balancer for the Kube API server
** Kubernetes creates a security group
* Volumes
** For etcd (one or three depending on high availability)
** For OVN-Kube
For accounts with private clusters, the infrastructure requirements are as follows:
* Network load balancer: a load balancer for the private router
* Endpoint service (private link)
For accounts with public and private clusters, the infrastructure requirements are as follows:
* Network load balancer: a load balancer for the public router
* Network load balancer: a load balancer for the private router
* Endpoint service (private link)
* Volumes
** For etcd (one or three depending on high availability)
** For OVN-Kube

View File

@@ -0,0 +1,186 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-managed-aws-infra-separate_{context}"]
= Creating the {aws-short} infrastructure separately
To create the {aws-first} infrastructure, you need to create a Virtual Private Cloud (VPC) and other resources for your cluster. You can use the {aws-short} console or an infrastructure automation and provisioning tool. For instructions to use the {aws-short} console, see link:https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html#create-vpc-and-other-resources[Create a VPC plus other VPC resources] in the {aws-short} Documentation.
The VPC must include private and public subnets and resources for external access, such as a network address translation (NAT) gateway and an internet gateway. In addition to the VPC, you need a private hosted zone for the ingress of your cluster. If you are creating clusters that use PrivateLink (`Private` or `PublicAndPrivate` access modes), you need an additional hosted zone for PrivateLink.
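If you prefer the CLI over the console, the following is a minimal sketch of creating the VPC with the `aws` CLI; the CIDR block matches the `machineNetwork` value in the example configuration, and the tag value is a placeholder:

[source,terminal]
----
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=<infra_id>-vpc}]'
----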
Create the {aws-short} infrastructure for your hosted cluster by using the following example configuration:
[source,yaml]
[subs="+quotes"]
----
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: clusters
spec: {}
status: {}
---
apiVersion: v1
data:
  .dockerconfigjson: xxxxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    hypershift.openshift.io/safe-to-delete-with-cluster: "true"
  name: <pull_secret_name> <1>
  namespace: clusters
---
apiVersion: v1
data:
  key: xxxxxxxxxxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    hypershift.openshift.io/safe-to-delete-with-cluster: "true"
  name: <etcd_encryption_key_name> <2>
  namespace: clusters
type: Opaque
---
apiVersion: v1
data:
  id_rsa: xxxxxxxxx
  id_rsa.pub: xxxxxxxxx
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    hypershift.openshift.io/safe-to-delete-with-cluster: "true"
  name: <ssh_key_name> <3>
  namespace: clusters
---
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  creationTimestamp: null
  name: <hosted_cluster_name> <4>
  namespace: clusters
spec:
  autoscaling: {}
  configuration: {}
  controllerAvailabilityPolicy: SingleReplica
  dns:
    baseDomain: <dns_domain> <5>
    privateZoneID: xxxxxxxx
    publicZoneID: xxxxxxxx
  etcd:
    managed:
      storage:
        persistentVolume:
          size: 8Gi
          storageClassName: gp3-csi
        type: PersistentVolume
    managementType: Managed
  fips: false
  infraID: <infra_id> <6>
  issuerURL: <issuer_url> <7>
  networking:
    clusterNetwork:
    - cidr: 10.132.0.0/14
    machineNetwork:
    - cidr: 10.0.0.0/16
    networkType: OVNKubernetes
    serviceNetwork:
    - cidr: 172.31.0.0/16
  olmCatalogPlacement: management
  platform:
    aws:
      cloudProviderConfig:
        subnet:
          id: <subnet_xxx> <8>
        vpc: <vpc_xxx> <9>
        zone: us-west-1b
      endpointAccess: Public
      multiArch: false
      region: us-west-1
      rolesRef:
        controlPlaneOperatorARN: arn:aws:iam::820196288204:role/<infra_id>-control-plane-operator
        imageRegistryARN: arn:aws:iam::820196288204:role/<infra_id>-openshift-image-registry
        ingressARN: arn:aws:iam::820196288204:role/<infra_id>-openshift-ingress
        kubeCloudControllerARN: arn:aws:iam::820196288204:role/<infra_id>-cloud-controller
        networkARN: arn:aws:iam::820196288204:role/<infra_id>-cloud-network-config-controller
        nodePoolManagementARN: arn:aws:iam::820196288204:role/<infra_id>-node-pool
        storageARN: arn:aws:iam::820196288204:role/<infra_id>-aws-ebs-csi-driver-controller
    type: AWS
  pullSecret:
    name: <pull_secret_name>
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16-x86_64
  secretEncryption:
    aescbc:
      activeKey:
        name: <etcd_encryption_key_name>
    type: aescbc
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: Ignition
    servicePublishingStrategy:
      type: Route
  - service: OVNSbDb
    servicePublishingStrategy:
      type: Route
  sshKey:
    name: <ssh_key_name>
status:
  controlPlaneEndpoint:
    host: ""
    port: 0
---
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  creationTimestamp: null
  name: <node_pool_name> <10>
  namespace: clusters
spec:
  arch: amd64
  clusterName: <hosted_cluster_name>
  management:
    autoRepair: true
    upgradeType: Replace
  nodeDrainTimeout: 0s
  platform:
    aws:
      instanceProfile: <instance_profile_name> <11>
      instanceType: m6i.xlarge
      rootVolume:
        size: 120
        type: gp3
      subnet:
        id: <subnet_xxx>
    type: AWS
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16-x86_64
  replicas: 2
status:
  replicas: 0
----
<1> Replace `<pull_secret_name>` with the name of your pull secret.
<2> Replace `<etcd_encryption_key_name>` with the name of your etcd encryption key.
<3> Replace `<ssh_key_name>` with the name of your SSH key.
<4> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
<5> Replace `<dns_domain>` with your base DNS domain, such as `example.com`.
<6> Replace `<infra_id>` with the value that identifies the IAM resources that are associated with the hosted cluster.
<7> Replace `<issuer_url>` with your issuer URL, which ends with your `infra_id` value. For example, `https://example-hosted-us-west-1.s3.us-west-1.amazonaws.com/example-hosted-infra-id`.
<8> Replace `<subnet_xxx>` with your subnet ID. Both private and public subnets need to be tagged. For public subnets, use `kubernetes.io/role/elb=1`. For private subnets, use `kubernetes.io/role/internal-elb=1`.
<9> Replace `<vpc_xxx>` with your VPC ID.
<10> Replace `<node_pool_name>` with the name of your `NodePool` resource.
<11> Replace `<instance_profile_name>` with the name of your {aws-short} instance profile.
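After you save this configuration to a file, a sketch of applying it to the management cluster follows; the file name is arbitrary:

[source,terminal]
----
$ oc apply -f hosted-cluster-infra.yaml
----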

View File

@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-aws.adoc
:_mod-docs-content-type: CONCEPT
[id="hcp-unmanaged-aws-hc-prereq_{context}"]
= Unmanaged infrastructure requirements for a hosted cluster {aws-short} account
When your infrastructure is prerequired and unmanaged in a hosted cluster {aws-first} account, the infrastructure requirements for all access modes are as follows:
* One VPC
* One DHCP option set
* Two subnets
** A private subnet that is an internal data plane subnet
** A public subnet that enables access to the internet from the data plane
* One internet gateway
* One elastic IP
* One NAT gateway
* One security group (worker nodes)
* Two route tables (one private and one public)
* Two Route 53 hosted zones
* Enough quota for the following items:
** One Ingress service load balancer for public hosted clusters
** One private link endpoint for private hosted clusters
[NOTE]
====
For private link networking to work, the endpoint zone in the hosted cluster {aws-short} account must match the zone of the instance that is resolved by the service endpoint in the management cluster {aws-short} account. In {aws-short}, zone names are aliases, such as `us-east-2b`, which do not necessarily map to the same zone in different accounts. As a result, for private link to work, the management cluster must have subnets or workers in all zones of its region.
====
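To see how zone names map to account-specific zone IDs, you can use the `aws ec2 describe-availability-zones` command, which returns both the `ZoneName` alias and the stable `ZoneId`; the region is an example:

[source,terminal]
----
$ aws ec2 describe-availability-zones --region us-east-2 \
    --query 'AvailabilityZones[].{Name:ZoneName,Id:ZoneId}'
----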