
OSDOCS#10249: Drafted docs for creating a disconnected cluster

This commit is contained in:
EricPonvelle
2024-09-06 17:33:38 -04:00
committed by openshift-cherrypick-robot
parent 43c6d7d52a
commit 16d0725976
8 changed files with 323 additions and 8 deletions

View File

@@ -20,7 +20,13 @@
#
# The ordering of the records in this document determines the ordering of the
# topic groups and topics on the main page.
---
Name: What's new
Dir: rosa_release_notes
Distros: openshift-rosa-hcp
Topics:
- Name: What's new with Red Hat OpenShift Service on AWS
File: rosa-release-notes
---
Name: Introduction to ROSA
Dir: rosa_architecture
@@ -203,6 +209,8 @@ Topics:
File: rosa-hcp-creating-cluster-with-aws-kms-key
- Name: Creating a private cluster on ROSA with HCP
File: rosa-hcp-aws-private-creating-cluster
- Name: Creating a ROSA with HCP cluster with egress lockdown
File: rosa-hcp-egress-lockdown-install
- Name: Creating ROSA with HCP clusters with external authentication
File: rosa-hcp-sts-creating-a-cluster-ext-auth
---
@@ -234,8 +242,6 @@ Topics:
File: dedicated-aws-vpn
- Name: Configuring AWS Direct Connect
File: dedicated-aws-dc
# - Name: Cluster autoscaling # Cluster autoscaling not supported on HCP
# File: rosa-cluster-autoscaling
- Name: Manage nodes using machine pools
Dir: rosa_nodes
Topics:

View File

@@ -1,4 +1,10 @@
// * rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
ifeval::["{context}" == "rosa-hcp-egress-lockdown-install"]
:egress-lockdown:
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="rosa-sts-creating-account-wide-sts-roles-and-policies_{context}"]
= Creating the account-wide STS roles and policies
@@ -27,6 +33,18 @@ Before using the {product-title} (ROSA) CLI (`rosa`) to create {hcp-title-first}
$ rosa create account-roles --hosted-cp
----
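+
Optional: You can verify that the account-wide roles and policies were created by listing them, for example:
+
[source,terminal]
----
$ rosa list account-roles
----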
ifdef::egress-lockdown[]
. Ensure that your worker role has the correct AWS policy attached by running the following command:
+
[source,terminal]
----
$ aws iam attach-role-policy \
--role-name ManagedOpenShift-HCP-ROSA-Worker-Role \ <1>
--policy-arn "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
----
<1> The role name must include the prefix that you created in the previous step.
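+
To verify that the policy is attached, you can run a command such as the following; this example assumes the default `ManagedOpenShift` prefix:
+
[source,terminal]
----
$ aws iam list-attached-role-policies \
--role-name ManagedOpenShift-HCP-ROSA-Worker-Role
----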
endif::egress-lockdown[]
. Optional: Set your prefix as an environment variable by running the following command:
+
[source,terminal]
@@ -48,4 +66,8 @@ $ echo $ACCOUNT_ROLES_PREFIX
ManagedOpenShift
----
For more information regarding AWS managed IAM policies for ROSA, see link:https://docs.aws.amazon.com/ROSA/latest/userguide/security-iam-awsmanpol.html[AWS managed IAM policies for ROSA].
ifeval::["{context}" == "rosa-hcp-egress-lockdown-install"]
:!egress-lockdown:
endif::[]

View File

@@ -0,0 +1,70 @@
// Module included in the following assemblies:
//
// * rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
:_mod-docs-content-type: PROCEDURE
[id="rosa-hcp-sgs-and-vpce_{context}"]
= Configuring AWS security groups and PrivateLink connections
After creating your VPC, create your AWS security groups and VPC endpoints.
.Procedure
. Create the AWS security group by running the following command:
+
[source,terminal]
----
$ aws ec2 create-security-group \
--group-name allow-inbound-traffic \
--description "allow inbound traffic" \
--vpc-id <vpc_id> \ <1>
--region <aws_region> <2>
----
<1> Enter your VPC's ID.
<2> Enter the AWS region where the VPC was installed.
. Grant access to the security group's ingress by running the following command:
+
[source,terminal]
----
$ aws ec2 authorize-security-group-ingress \
--group-id <group_id> \ <1>
--protocol -1 \
--port 0-0 \
--cidr <vpc_cidr> \ <2>
--region <aws_region> <3>
----
<1> Enter the ID of the security group that you created with the previous command.
<2> Enter the CIDR of your VPC.
<3> Enter the AWS region where you installed your VPC.
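+
Optional: You can confirm the new ingress rule by describing the security group, for example:
+
[source,terminal]
----
$ aws ec2 describe-security-groups \
--group-ids <group_id> \
--region <aws_region>
----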
. Create your STS VPC endpoint by running the following command:
+
[source,terminal]
----
$ aws ec2 create-vpc-endpoint \
--vpc-id <vpc_id> \ <1>
--service-name com.amazonaws.<aws_region>.sts \ <2>
--vpc-endpoint-type Interface
----
<1> Enter your VPC's ID.
<2> Enter the AWS region where the VPC was installed.
. Create your ECR VPC endpoints by running the following command:
+
[source,terminal]
----
$ aws ec2 create-vpc-endpoint \
--vpc-id <vpc_id> \
--service-name com.amazonaws.<aws_region>.ecr.dkr \ <1>
--vpc-endpoint-type Interface
----
<1> Enter the AWS region where the VPC is located.
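+
ECR access through AWS PrivateLink typically also requires the `ecr.api` interface endpoint in addition to the `ecr.dkr` endpoint shown above. If your environment needs it, you can create it the same way; a sketch to verify against your configuration:
+
[source,terminal]
----
$ aws ec2 create-vpc-endpoint \
--vpc-id <vpc_id> \
--service-name com.amazonaws.<aws_region>.ecr.api \
--vpc-endpoint-type Interface
----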
. Create your S3 VPC endpoint by running the following command:
+
[source,terminal]
----
$ aws ec2 create-vpc-endpoint \
--vpc-id <vpc_id> \
--service-name com.amazonaws.<aws_region>.s3
----
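+
Optional: You can confirm that the endpoints were created in your VPC by describing them, for example:
+
[source,terminal]
----
$ aws ec2 describe-vpc-endpoints \
--filters Name=vpc-id,Values=<vpc_id> \
--region <aws_region>
----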

View File

@@ -0,0 +1,93 @@
// Module included in the following assemblies:
//
// * rosa_hcp/rosa-hcp-disconnected-install.adoc
:_mod-docs-content-type: PROCEDURE
[id="rosa-hcp-sts-creating-a-cluster-egress-lockdown-cli_{context}"]
= Creating a {hcp-title} cluster with egress lockdown using the CLI
When using the {product-title} (ROSA) command-line interface (CLI), `rosa`, to create a cluster, you can select the default options to create the cluster quickly.
.Prerequisites
* You have completed the AWS prerequisites for {hcp-title}.
* You have available AWS service quotas.
* You have enabled the ROSA service in the AWS Console.
* You have installed and configured the latest ROSA CLI (`rosa`) on your installation host. Run `rosa version` to see your currently installed version of the ROSA CLI. If a newer version is available, the CLI provides a link to download this upgrade.
* You have logged in to your Red{nbsp}Hat account by using the ROSA CLI.
* You have created an OIDC configuration.
* You have verified that the AWS Elastic Load Balancing (ELB) service role exists in your AWS account.
.Procedure
. Use one of the following commands to create your {hcp-title} cluster:
+
[NOTE]
====
When creating a {hcp-title} cluster, the default machine Classless Inter-Domain Routing (CIDR) is `10.0.0.0/16`. If this does not correspond to the CIDR range for your VPC subnets, add `--machine-cidr <address_block>` to the following commands. To learn more about the default CIDR ranges for {product-title}, see the CIDR range definitions.
====
+
* If you did not set environment variables, run the following command:
+
[source,terminal]
----
$ rosa create cluster --cluster-name=<cluster_name> \ <.>
--mode=auto --hosted-cp [--private] \ <.>
--operator-roles-prefix <operator-role-prefix> \ <.>
--oidc-config-id <id-of-oidc-configuration> \
--subnet-ids=<private-subnet-id> --region <region> \
--machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
--pod-cidr 10.128.0.0/14 --host-prefix 23 \
--billing-account <root-acct-id> \ <.>
--properties zero_egress:true
----
+
--
<.> Specify the name of your cluster. If your cluster name is longer than 15 characters, it will contain an autogenerated domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. To customize the subdomain, use the `--domain-prefix` flag. The domain prefix cannot be longer than 15 characters, must be unique, and cannot be changed after cluster creation.
<.> Optional: Include the `--private` argument to create a cluster with a privately available API and a privately available Ingress.
<.> By default, the cluster-specific Operator role names are prefixed with the cluster name and a random 4-digit hash. You can optionally specify a custom prefix to replace `<cluster_name>-<hash>` in the role names. The prefix is applied when you create the cluster-specific Operator IAM roles. For information about the prefix, see _About custom Operator IAM role prefixes_.
+
[NOTE]
====
If you specified custom ARN paths when you created the associated account-wide roles, the custom path is automatically detected. The custom path is applied to the cluster-specific Operator roles when you create them in a later step.
====
<.> Provide the AWS account that is responsible for all billing.
--
* If you set the environment variables, create a cluster with egress lockdown that has a single, initial machine pool and uses a privately available API and a privately available Ingress by running the following command:
+
[source,terminal]
----
$ rosa create cluster --private --cluster-name=<cluster_name> \
--mode=auto --hosted-cp --operator-roles-prefix=$OPERATOR_ROLES_PREFIX \
--oidc-config-id=$OIDC_ID --subnet-ids=$SUBNET_IDS \
--region <region> --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 \
--pod-cidr 10.128.0.0/14 --host-prefix 23 --billing-account <root-acct-id> \
--properties zero_egress:true
----
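+
For reference, a minimal sketch of exporting these environment variables if earlier steps did not already set them; the variable names are taken from the command above, and the placeholder values are illustrative:
+
[source,terminal]
----
$ export OPERATOR_ROLES_PREFIX=<operator_role_prefix>
$ export OIDC_ID=<id_of_oidc_configuration>
$ export SUBNET_IDS=<private_subnet_id>,<private_subnet_id>
----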
+
. Check the status of your cluster by running the following command:
+
[source,terminal]
----
$ rosa describe cluster --cluster=<cluster_name>
----
+
The following `State` field changes are listed in the output as cluster installation progresses:
+
* `pending (Preparing account)`
* `installing (DNS setup in progress)`
* `installing`
* `ready`
+
[NOTE]
====
If the installation fails or the `State` field does not change to `ready` after more than 10 minutes, check the installation troubleshooting documentation for details. For more information, see _Troubleshooting installations_. For steps to contact Red{nbsp}Hat Support for assistance, see _Getting support for Red{nbsp}Hat OpenShift Service on AWS_.
====
+
. Track the cluster creation progress by watching the {product-title} installation program logs. To check the logs, run the following command:
+
[source,terminal]
----
$ rosa logs install --cluster=<cluster_name> --watch <.>
----
<.> Optional: To watch for new log messages as the installation progresses, use the `--watch` argument.

View File

@@ -40,13 +40,13 @@ You must tag at least one private subnet and, if applicable, one public subnet
+
[source,terminal]
----
$ aws ec2 create-tags --resources <public-subnet-id> --tags Key=kubernetes.io/role/elb,Value=1
$ aws ec2 create-tags --resources <public-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/elb,Value=1
----
.. For private subnets, run:
+
[source,terminal]
----
$ aws ec2 create-tags --resources <private-subnet-id> --tags Key=kubernetes.io/role/internal-elb,Value=1
$ aws ec2 create-tags --resources <private-subnet-id> --region <aws_region> --tags Key=kubernetes.io/role/internal-elb,Value=1
----
.Verification

View File

@@ -2,6 +2,10 @@
//
// * rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
ifeval::["{context}" == "rosa-hcp-egress-lockdown-install"]
:egress-lockdown-rosa:
endif::[]
:_mod-docs-content-type: PROCEDURE
[id="rosa-hcp-vpc-terraform_{context}"]
= Creating a Virtual Private Cloud using Terraform
@@ -23,11 +27,20 @@ $ git clone https://github.com/openshift-cs/terraform-vpc-example
----
. Navigate to the created directory by running the following command:
ifndef::egress-lockdown-rosa[]
+
[source,terminal]
----
$ cd terraform-vpc-example
----
endif::egress-lockdown-rosa[]
ifdef::egress-lockdown-rosa[]
+
[source,terminal]
----
$ cd terraform-vpc-example/zero-egress
----
endif::egress-lockdown-rosa[]
. Initialize the Terraform working directory by running the following command:
+
@@ -38,14 +51,42 @@ $ terraform init
+
A message confirming the initialization appears when this process completes.
ifdef::egress-lockdown-rosa[]
. To build your VPC Terraform plan based on the existing Terraform template, run the `plan` command. You must include your AWS region, availability zones, CIDR blocks, and private subnets. You can choose to specify a cluster name. A `rosa-zero-egress.tfplan` file is added to the `hypershift-tf` directory after the `terraform plan` completes. For more detailed options, see the link:https://github.com/openshift-cs/terraform-vpc-example/blob/main/README.md[Terraform VPC repository's README file].
+
[source,terminal]
----
$ terraform plan -out rosa-zero-egress.tfplan -var region=<aws_region> \ <1>
-var 'availability_zones=["aws_region_1a","aws_region_1b","aws_region_1c"]' \ <2>
-var vpc_cidr_block=10.0.0.0/16 \ <3>
-var 'private_subnets=["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]' <4>
----
+
--
<1> Enter your AWS region.
<2> Enter the availability zones for the VPC. For example, for a VPC that uses `ap-southeast-1`, you would use the following as availability zones: `["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"]`.
<3> Enter the CIDR block for your VPC.
<4> Enter each of the subnets that are created for the VPC.
--
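+
For example, a filled-in invocation for a VPC in `us-east-2`; the CIDR and subnet values are illustrative only:
+
[source,terminal]
----
$ terraform plan -out rosa-zero-egress.tfplan -var region=us-east-2 \
-var 'availability_zones=["us-east-2a","us-east-2b","us-east-2c"]' \
-var vpc_cidr_block=10.0.0.0/16 \
-var 'private_subnets=["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]'
----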
endif::egress-lockdown-rosa[]
ifndef::egress-lockdown-rosa[]
. To build your VPC Terraform plan based on the existing Terraform template, run the `plan` command. You must include your AWS region. You can choose to specify a cluster name. A `rosa.tfplan` file is added to the `hypershift-tf` directory after the `terraform plan` completes. For more detailed options, see the link:https://github.com/openshift-cs/terraform-vpc-example/blob/main/README.md[Terraform VPC repository's README file].
+
[source,terminal]
----
$ terraform plan -out rosa.tfplan -var region=<region>
----
endif::egress-lockdown-rosa[]
. Apply this plan file to build your VPC by running the following command:
ifdef::egress-lockdown-rosa[]
+
[source,terminal]
----
$ terraform apply rosa-zero-egress.tfplan
----
endif::egress-lockdown-rosa[]
ifndef::egress-lockdown-rosa[]
+
[source,terminal]
----
@@ -71,4 +112,9 @@ $ echo $SUBNET_IDS
[source,terminal]
----
subnet-0a6a57e0f784171aa,subnet-078e84e5b10ecf5b0
----
endif::egress-lockdown-rosa[]
ifeval::["{context}" == "rosa-hcp-egress-lockdown-install"]
:!egress-lockdown-rosa:
endif::[]

View File

@@ -0,0 +1,60 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-hcp-egress-lockdown-install"]
= Creating a {product-title} cluster with egress lockdown
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-hcp-egress-lockdown-install
toc::[]
Creating a {product-title} cluster with egress lockdown provides a way to enhance your cluster's stability and security by allowing your cluster to use the image registry in the local region if the cluster cannot access the internet. Your cluster first attempts to pull images from Quay, and when those images cannot be reached, it pulls them instead from the image registry in the local region. All public and private clusters with egress lockdown get their Red Hat container images from a registry that is located in the local region of the cluster instead of gathering these images from various endpoints and registries on the internet. You can create a fully operational cluster that does not require a public egress by configuring a virtual private cloud (VPC) and using the `--properties zero_egress:true` flag when creating your cluster.
:FeatureName: Egress lockdown
include::snippets/technology-preview.adoc[]
.Prerequisites
* You have an AWS account with sufficient permissions to create VPCs, subnets, and other required infrastructure.
* You have installed the Terraform v1.4.0+ CLI.
* You have installed the ROSA v1.2.45+ CLI.
* You have installed and configured the AWS CLI with the necessary credentials.
* You have installed the git CLI.
[IMPORTANT]
====
You can use egress lockdown on all supported versions of {product-title} that use the hosted control plane architecture; however, Red Hat suggests using the latest available z-stream release for each {ocp} version.
You can install and upgrade clusters that use egress lockdown as you would a regular cluster. However, due to an upstream issue with how the internal image registry functions in disconnected environments, your cluster cannot fully use all platform components, such as the image registry. You can restore these features by using the latest ROSA version when upgrading or installing your cluster.
====
[id="rosa-hcp-egress-lockdown-install-creating_{context}"]
== Creating a Virtual Private Cloud for your egress lockdown {hcp-title} clusters
You must have a Virtual Private Cloud (VPC) to create {hcp-title} clusters. You can use one of the following methods to create a VPC:
* Create a VPC by using a Terraform template
* Manually create the VPC resources in the AWS console
[NOTE]
====
The Terraform instructions are for testing and demonstration purposes. Your own installation requires modifications to the VPC for your specific needs and constraints. Ensure that you run the following Terraform script in the same region where you intend to install your cluster. The following examples use `us-east-2`.
====
include::modules/rosa-hcp-vpc-terraform.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* See the link:https://github.com/openshift-cs/terraform-vpc-example/tree/main/zero-egress[Zero Egress Terraform VPC Example] repository for a detailed list of all options available when customizing the VPC for your needs.
include::modules/rosa-hcp-vpc-manual.adoc[leveloffset=+2]
[discrete]
include::modules/rosa-hcp-vpc-subnet-tagging.adoc[leveloffset=+3]
[discrete]
include::modules/rosa-hcp-sgs-and-vpce.adoc[leveloffset=+3]
include::modules/rosa-hcp-creating-account-wide-sts-roles-and-policies.adoc[leveloffset=+1]
include::modules/rosa-sts-byo-oidc.adoc[leveloffset=+1]
include::modules/rosa-operator-config.adoc[leveloffset=+1]
include::modules/rosa-hcp-sts-creating-a-cluster-egress-lockdown-cli.adoc[leveloffset=+1]

View File

@@ -13,10 +13,22 @@ toc::[]
[id="rosa-new-changes-and-updates_{context}"]
== New changes and updates
[id="rosa-q1-2025_{context}"]
=== Q1 2025
ifdef::openshift-rosa-hcp[]
[IMPORTANT]
====
Egress lockdown is a Technology Preview feature.
====
* **Egress lockdown is now available as a Technology Preview on {product-title} clusters.** You can create a fully operational cluster that does not require a public egress by configuring a virtual private cloud (VPC) and using the `--properties zero_egress:true` flag when creating your cluster. For more information, see xref:../rosa_hcp/rosa-hcp-egress-lockdown-install.adoc#rosa-hcp-egress-lockdown-install[Creating a {product-title} cluster with egress lockdown].
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa[]
[id="rosa-q4-2024_{context}"]
=== Q4 2024
* *Learning tutorials for ROSA cluster and application deployment.* You can now use the xref:../cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-deploying/cloud-experts-getting-started-choose-deployment-method.adoc#cloud-experts-getting-started-choose-deployment-method[Getting started with ROSA] tutorials to quickly deploy a ROSA cluster for demo or learning purposes. You can also use the xref:../cloud_experts_tutorials/cloud-experts-deploying-application/cloud-experts-deploying-application-intro.adoc#cloud-experts-deploying-application-intro[Deploying an application] tutorials to deploy an application on your demo cluster.
* **Learning tutorials for ROSA cluster and application deployment.** You can now use the xref:../cloud_experts_tutorials/cloud-experts-getting-started/cloud-experts-getting-started-deploying/cloud-experts-getting-started-choose-deployment-method.adoc#cloud-experts-getting-started-choose-deployment-method[Getting started with ROSA] tutorials to quickly deploy a ROSA cluster for demo or learning purposes. You can also use the xref:../cloud_experts_tutorials/cloud-experts-deploying-application/cloud-experts-deploying-application-intro.adoc#cloud-experts-deploying-application-intro[Deploying an application] tutorials to deploy an application on your demo cluster.
* **`rosa create network` command added for {hcp-title} clusters.** You can now use the `rosa create network` command when creating {hcp-title} clusters to create networks using AWS CloudFormation templates. This helper command is intended to help create and configure a VPC for use with {hcp-title}. This command also supports zero egress clusters. For more information, see xref:../cli_reference/rosa_cli/rosa-manage-objects-cli.adoc#rosa-create-network_rosa-managing-objects-cli[create network].
@@ -173,9 +185,15 @@ include::snippets/technology-preview.adoc[leveloffset=+1]
=== Q1 2023
* **OIDC provider endpoint URL update.** Starting with ROSA CLI version 1.2.7, all new cluster OIDC provider endpoint URLs are no longer regional. Amazon CloudFront is part of this implementation to improve access speed, reduce latency, and improve resiliency. This change is only available for new clusters created with ROSA CLI 1.2.7 or later. There are no supported migration paths for existing OIDC provider configurations.
endif::openshift-rosa[]
[id="rosa-known-issues_{context}"]
== Known issues
ifdef::openshift-rosa-hcp[]
* While egress lockdown works across all supported versions of ROSA, Red Hat suggests that you upgrade your cluster to, or build your cluster with, the latest z-stream of your {ocp} version. Due to an upstream issue with the internal image registry functionality in disconnected environments, you might experience issues with various {ocp} components within your cluster until you upgrade your version of HCP to the latest z-stream. If you use older z-stream ROSA clusters with the egress lockdown feature, you must include a public route to the internet from your cluster. See link:https://issues.redhat.com/browse/OCPBUGS-44314[OCPBUGS-44314] for further details.
endif::openshift-rosa-hcp[]
* {OCP} 4.14 introduced an updated HAProxy image from 2.2 to 2.6. This update created a change in behavior enforcing strict RFC 7230 compliance, rejecting requests with multiple `Transfer-Encoding` headers. This might cause exposed pods in {product-title} 4.14 clusters sending multiple `Transfer-Encoding` headers to respond with a `502 Bad Gateway` or `400 Bad Request` error. To avoid this issue, ensure that your applications are not sending multiple `Transfer-Encoding` headers. For more information, see the link:https://access.redhat.com/solutions/7055002[Red Hat Knowledgebase article]. (link:https://issues.redhat.com/browse/OCPBUGS-43095[*OCPBUGS-43095*])
* If you configure your cluster using external OIDC configuration and set the `--user-auth` flag to `disabled`, the console pods might enter a crash loop. (link:https://issues.redhat.com/browse/OCPBUGS-29510[*OCPBUGS-29510*])
* The OpenShift Cluster Manager roles (`ocm-role`) and user roles (`user-role`) that are key to the ROSA provisioning wizard might get enabled accidentally in your Red{nbsp}Hat organization by another user. However, this behavior does not affect usability.