mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-1960 - Adding to the AWS UPI installation path

This commit is contained in:
Paul Needle
2020-12-04 13:52:49 +00:00
committed by openshift-cherrypick-robot
parent 4b4d8d3feb
commit 39ddee2e22
46 changed files with 459 additions and 202 deletions

View File

@@ -6,6 +6,15 @@ toc::[]
include::modules/installation-overview.adoc[leveloffset=+1]
.Additional resources
* See xref:../installing/install_config/customizations.adoc#customizations[Available cluster customizations] for details about {product-title} configuration resources.
include::modules/update-service-overview.adoc[leveloffset=+1]
include::modules/unmanaged-operators.adoc[leveloffset=+1]
[id="architecture-installation-next-steps"]
== Next steps
* xref:../installing/installing-preparing.adoc#installing-preparing[Selecting a cluster installation method and preparing it for users]

View File

@@ -18,6 +18,10 @@ include::modules/installation-aws-permissions.adoc[leveloffset=+1]
include::modules/installation-aws-iam-user.adoc[leveloffset=+1]
.Additional resources
* See xref:../../installing/installing_aws/manually-creating-iam.adoc#manually-creating-iam-aws[Manually creating IAM for AWS] for steps to set the Cloud Credential Operator (CCO) to manual mode prior to installation. Use this mode in environments where the cloud identity and access management (IAM) APIs are not reachable, or if you prefer not to store an administrator-level credential secret in the cluster `kube-system` project.
include::modules/installation-aws-regions.adoc[leveloffset=+1]
== Next steps

View File

@@ -39,6 +39,10 @@ environments where the cloud IAM APIs are not reachable.
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

View File

@@ -37,12 +37,20 @@ environments where the cloud IAM APIs are not reachable.
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
include::modules/installation-launching-installer.adoc[leveloffset=+1]
.Additional resources
* See link:https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html[Configuration and credential file settings] in the AWS documentation for more information about AWS profile and credential configuration.
include::modules/cli-installing-cli.adoc[leveloffset=+1]
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

View File

@@ -46,6 +46,10 @@ include::modules/installation-custom-aws-vpc.adoc[leveloffset=+1]
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

View File

@@ -46,6 +46,10 @@ environments where the cloud IAM APIs are not reachable.
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

View File

@@ -43,6 +43,10 @@ include::modules/installation-custom-aws-vpc.adoc[leveloffset=+1]
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

View File

@@ -15,10 +15,10 @@ according to your company's policies.
== Prerequisites
* Review details about the
* You reviewed details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* xref:../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[Configure an AWS account]
* You xref:../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[configured an AWS account]
to host the cluster.
+
[IMPORTANT]
@@ -32,11 +32,10 @@ link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys
in the AWS documentation. You can supply the keys when you run the installation
program.
====
* Download the AWS CLI and install it on your computer. See
* You downloaded the AWS CLI and installed it on your computer. See
link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix)]
in the AWS documentation.
* If you use a firewall, you must
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure it to allow the sites] that your cluster requires access to.
* If you use a firewall, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.
+
[NOTE]
====
@@ -50,6 +49,10 @@ environments where the cloud IAM APIs are not reachable.
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/installation-aws-user-infra-requirements.adoc[leveloffset=+1]
include::modules/installation-aws-permissions.adoc[leveloffset=+2]
@@ -62,6 +65,10 @@ include::modules/installation-user-infra-generate.adoc[leveloffset=+1]
include::modules/installation-generate-aws-user-infra-install-config.adoc[leveloffset=+2]
.Additional resources
* See link:https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html[Configuration and credential file settings] in the AWS documentation for more information about AWS profile and credential configuration.
include::modules/installation-configure-proxy.adoc[leveloffset=+2]
//include::modules/installation-three-node-cluster.adoc[leveloffset=+2]
@@ -74,14 +81,30 @@ include::modules/installation-creating-aws-vpc.adoc[leveloffset=+1]
include::modules/installation-cloudformation-vpc.adoc[leveloffset=+2]
.Additional resources
* You can view details about the CloudFormation stacks that you create by navigating to the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].
include::modules/installation-creating-aws-dns.adoc[leveloffset=+1]
include::modules/installation-cloudformation-dns.adoc[leveloffset=+2]
.Additional resources
* You can view details about the CloudFormation stacks that you create by navigating to the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].
* You can view details about your hosted zones by navigating to the link:https://console.aws.amazon.com/route53/[AWS Route 53 console].
* See link:https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ListInfoOnHostedZone.html[Listing public hosted zones] in the AWS documentation for more information about listing public hosted zones.
include::modules/installation-creating-aws-security.adoc[leveloffset=+1]
include::modules/installation-cloudformation-security.adoc[leveloffset=+2]
.Additional resources
* You can view details about the CloudFormation stacks that you create by navigating to the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].
include::modules/installation-aws-user-infra-rhcos-ami.adoc[leveloffset=+1]
include::modules/installation-aws-regions-with-no-ami.adoc[leveloffset=+2]
@@ -92,11 +115,21 @@ include::modules/installation-creating-aws-bootstrap.adoc[leveloffset=+1]
include::modules/installation-cloudformation-bootstrap.adoc[leveloffset=+2]
.Additional resources
* You can view details about the CloudFormation stacks that you create by navigating to the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].
* See xref:../../installing/installing_aws/installing-aws-user-infra.adoc#installation-aws-user-infra-rhcos-ami_installing-aws-user-infra[{op-system} AMIs for the AWS infrastructure] for details about the {op-system-first} AMIs for the AWS zones.
include::modules/installation-creating-aws-control-plane.adoc[leveloffset=+1]
include::modules/installation-cloudformation-control-plane.adoc[leveloffset=+2]
include::modules/installation-aws-user-infra-bootstrap.adoc[leveloffset=+1]
.Additional resources
* You can view details about the CloudFormation stacks that you create by navigating to the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].
include::modules/installation-creating-aws-worker.adoc[leveloffset=+1]
////
[id="installing-workers-aws-user-infra"]
@@ -108,10 +141,21 @@ the workers, you can allow the cluster to manage them. This allows you to easily
scale, manage, and upgrade your workers.
////
include::modules/installation-cloudformation-worker.adoc[leveloffset=+2]
include::modules/installation-creating-aws-worker.adoc[leveloffset=+2]
.Additional resources
include::modules/installation-cloudformation-worker.adoc[leveloffset=+3]
* You can view details about the CloudFormation stacks that you create by navigating to the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].
include::modules/installation-aws-user-infra-bootstrap.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/troubleshooting/troubleshooting-installations.adoc#monitoring-installation-progress_troubleshooting-installations[Monitoring installation progress] for details about monitoring the installation, bootstrap, and control plane logs as an {product-title} installation progresses.
* See xref:../../support/troubleshooting/troubleshooting-installations.adoc#gathering-bootstrap-diagnostic-data_troubleshooting-installations[Gathering bootstrap node diagnostic data] for information about troubleshooting issues related to the bootstrap process.
* You can view details about the running instances that are created by using the link:https://console.aws.amazon.com/ec2[AWS EC2 console].
include::modules/cli-installing-cli.adoc[leveloffset=+1]
@@ -137,10 +181,14 @@ include::modules/installation-aws-user-infra-installation.adoc[leveloffset=+1]
include::modules/logging-in-by-using-the-web-console.adoc[leveloffset=+1]
.Additional resources
[id="installing-aws-user-infra-additional-resources"]
== Additional resources
* See xref:../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
* See link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html[Working with stacks] in the AWS documentation for more information about AWS CloudFormation stacks.
[id="installing-aws-user-infra-next-steps"]
== Next steps
* xref:../../installing/validating-an-installation.adoc#validating-an-installation[Validating an installation].

View File

@@ -39,6 +39,10 @@ include::modules/installation-custom-aws-vpc.adoc[leveloffset=+1]
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

View File

@@ -22,18 +22,18 @@ according to your company's policies.
== Prerequisites
* xref:../../installing/install_config/installing-restricted-networks-preparations.adoc#installing-restricted-networks-preparations[Create a mirror registry on your mirror host]
and obtain the `imageContentSources` data for your version of {product-title}.
* You xref:../../installing/install_config/installing-restricted-networks-preparations.adoc#installing-restricted-networks-preparations[created a mirror registry on your mirror host]
and obtained the `imageContentSources` data for your version of {product-title}.
+
[IMPORTANT]
====
Because the installation media is on the mirror host, you can use that computer
to complete all installation steps.
====
* Review details about the
* You reviewed details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* xref:../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[Configure an AWS account]
* You xref:../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[configured an AWS account]
to host the cluster.
+
[IMPORTANT]
@@ -47,11 +47,10 @@ link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys
in the AWS documentation. You can supply the keys when you run the installation
program.
====
* Download the AWS CLI and install it on your computer. See
* You downloaded the AWS CLI and installed it on your computer. See
link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix)]
in the AWS documentation.
* If you use a firewall and plan to use telemetry, you must
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure the firewall to allow the sites] that your cluster requires access to.
* If you use a firewall and plan to use the Telemetry service, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured the firewall to allow the sites] that your cluster requires access to.
+
[NOTE]
====
@@ -67,6 +66,10 @@ include::modules/installation-about-restricted-network.adoc[leveloffset=+1]
include::modules/cluster-entitlements.adoc[leveloffset=+1]
.Additional resources
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
include::modules/installation-aws-user-infra-requirements.adoc[leveloffset=+1]
include::modules/installation-aws-permissions.adoc[leveloffset=+2]
@@ -79,6 +82,10 @@ include::modules/installation-user-infra-generate.adoc[leveloffset=+1]
include::modules/installation-generate-aws-user-infra-install-config.adoc[leveloffset=+2]
.Additional resources
* See link:https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html[Configuration and credential file settings] in the AWS documentation for more information about AWS profile and credential configuration.
include::modules/installation-configure-proxy.adoc[leveloffset=+2]
//include::modules/installation-three-node-cluster.adoc[leveloffset=+2]
@@ -95,6 +102,10 @@ include::modules/installation-creating-aws-dns.adoc[leveloffset=+1]
include::modules/installation-cloudformation-dns.adoc[leveloffset=+2]
.Additional resources
* See link:https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ListInfoOnHostedZone.html[Listing public hosted zones] in the AWS documentation for more information about listing public hosted zones.
include::modules/installation-creating-aws-security.adoc[leveloffset=+1]
include::modules/installation-cloudformation-security.adoc[leveloffset=+2]
@@ -105,11 +116,15 @@ include::modules/installation-creating-aws-bootstrap.adoc[leveloffset=+1]
include::modules/installation-cloudformation-bootstrap.adoc[leveloffset=+2]
.Additional resources
* See xref:../../installing/installing_aws/installing-aws-user-infra.adoc#installation-aws-user-infra-rhcos-ami_installing-aws-user-infra[{op-system} AMIs for the AWS infrastructure] for details about the {op-system-first} AMIs for the AWS zones.
include::modules/installation-creating-aws-control-plane.adoc[leveloffset=+1]
include::modules/installation-cloudformation-control-plane.adoc[leveloffset=+2]
include::modules/installation-aws-user-infra-bootstrap.adoc[leveloffset=+1]
include::modules/installation-creating-aws-worker.adoc[leveloffset=+1]
////
[id="installing-workers-aws-user-infra"]
@@ -121,10 +136,15 @@ the workers, you can allow the cluster to manage them. This allows you to easily
scale, manage, and upgrade your workers.
////
include::modules/installation-cloudformation-worker.adoc[leveloffset=+2]
include::modules/installation-creating-aws-worker.adoc[leveloffset=+2]
include::modules/installation-aws-user-infra-bootstrap.adoc[leveloffset=+1]
include::modules/installation-cloudformation-worker.adoc[leveloffset=+3]
.Additional resources
* See xref:../../support/troubleshooting/troubleshooting-installations.adoc#monitoring-installation-progress_troubleshooting-installations[Monitoring installation progress] for details about monitoring the installation, bootstrap, and control plane logs as an {product-title} installation progresses.
* See xref:../../support/troubleshooting/troubleshooting-installations.adoc#gathering-bootstrap-diagnostic-data_troubleshooting-installations[Gathering bootstrap node diagnostic data] for information about troubleshooting issues related to the bootstrap process.
//You can install the CLI on the mirror host.
@@ -148,13 +168,17 @@ include::modules/installation-aws-user-infra-installation.adoc[leveloffset=+1]
include::modules/logging-in-by-using-the-web-console.adoc[leveloffset=+1]
.Additional resources
[id="installing-restricted-networks-aws-additional-resources"]
== Additional resources
* See xref:../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
* See link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html[Working with stacks] in the AWS documentation for more information about AWS CloudFormation stacks.
[id="installing-restricted-networks-aws-next-steps"]
== Next steps
* xref:../../installing/validating-an-installation.adoc#validating-an-installation[Validating an installation].
* xref:../../installing/validating-an-installation.adoc#validating-an-installation[Validate an installation].
* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
* If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by xref:../../openshift_images/image-configuration.adoc#images-configuration-cas_image-configuration[configuring additional trust stores].
* If necessary, you can

View File

@@ -5,6 +5,14 @@ include::modules/common-attributes.adoc[]
toc::[]
In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster `kube-system` namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.
include::modules/alternatives-to-storing-admin-secrets-in-kube-system.adoc[leveloffset=+1]
.Additional resources
* See xref:../../operators/operator-reference.adoc#cloud-credential-operator_red-hat-operators[Cloud Credential Operator] for a detailed description of all available CCO credential modes and their supported platforms.
include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
include::modules/admin-credentials-root-secret-formats.adoc[leveloffset=+1]
@@ -14,3 +22,12 @@ include::modules/manually-maintained-credentials-upgrade.adoc[leveloffset=+1]
include::modules/mint-mode.adoc[leveloffset=+1]
include::modules/mint-mode-with-removal-of-admin-credential.adoc[leveloffset=+1]
[id="manually-creating-iam-aws-next-steps"]
== Next steps
* Install an {product-title} cluster:
** xref:../../installing/installing_aws/installing-aws-default.adoc#installing-aws-default[Quickly install a cluster] with default options on installer-provisioned infrastructure
** xref:../../installing/installing_aws/installing-aws-customizations.adoc#installing-aws-customizations[Install a cluster with cloud customizations on installer-provisioned infrastructure]
** xref:../../installing/installing_aws/installing-aws-network-customizations.adoc#installing-aws-network-customizations[Install a cluster with network customizations on installer-provisioned infrastructure]
** xref:../../installing/installing_aws/installing-aws-user-infra.adoc#installing-aws-user-infra[Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates]

View File

@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * installing/installing_aws/manually-creating-iam.adoc
[id="alternatives-to-storing-admin-secrets-in-kube-system_{context}"]
= Alternatives to storing administrator-level secrets in the `kube-system` project
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the `credentialsMode` parameter in the `install-config.yaml` file.
If you prefer not to store an administrator-level credential secret in the cluster `kube-system` project, you can choose one of the following options when installing {product-title} on AWS:
* *Manage cloud credentials manually*. You can set the `credentialsMode` for the CCO to `Manual` to manage cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.
* *Remove the administrator-level credential secret after installing {product-title} with mint mode*. You can remove or rotate the administrator-level credential after installing {product-title} with the `Mint` CCO credentials mode applied. The `Mint` CCO credentials mode is the default. This option requires the presence of the administrator-level credential during an installation. The administrator-level credential is used during the installation to mint other credentials with some permissions granted. The original credential secret is not stored in the cluster permanently.
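Both options map to the `credentialsMode` parameter in the `install-config.yaml` file. A minimal sketch, using placeholder values for the base domain, cluster name, and region:

```yaml
apiVersion: v1
baseDomain: example.com        # placeholder base domain
metadata:
  name: mycluster              # placeholder cluster name
credentialsMode: Manual        # omit this line to use the default Mint mode
platform:
  aws:
    region: us-east-2          # placeholder region
```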

View File

@@ -44,8 +44,8 @@ The file is specific to a cluster and is created during {product-title} installa
.Prerequisites
* Deploy an {product-title} cluster.
* Install the `oc` CLI.
* You deployed an {product-title} cluster.
* You installed the `oc` CLI.
.Procedure

View File

@@ -20,7 +20,7 @@ You must configure hybrid networking with OVN-Kubernetes during the installation
.Procedure
. Use the following command to create manifests:
. Create the manifests from the directory that contains the installation program:
+
[source,terminal]
----

View File

@@ -58,6 +58,11 @@ worker-1 NotReady worker 70s v1.20.0
----
+
The output lists all of the machines that you created.
+
[NOTE]
====
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
====
. Review the pending CSRs and ensure that you see a client and server request with the `Pending` or `Approved` status for each machine that you added to the cluster:
+
@@ -125,6 +130,11 @@ $ oc adm certificate approve <csr_name> <1>
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
----
[NOTE]
====
Some Operators might not become available until some CSRs are approved.
====
.Additional information
* For more information on CSRs, see link:https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/[Certificate Signing Requests].

View File

@@ -11,13 +11,13 @@ that region.
.Prerequisites
* Configure an AWS account.
* Create an Amazon S3 bucket with the required IAM
* You configured an AWS account.
* You created an Amazon S3 bucket with the required IAM
link:https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role[service role].
* Upload your {op-system} VMDK file to Amazon S3. The {op-system} VMDK file must
* You uploaded your {op-system} VMDK file to Amazon S3. The {op-system} VMDK file must
be the highest version that is less than or equal to the {product-title} version
you are installing.
* Download the AWS CLI and install it on your computer. See
* You downloaded the AWS CLI and installed it on your computer. See
link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install the AWS CLI Using the Bundled Installer].
.Procedure

View File

@@ -4,26 +4,26 @@
// * installing/installing_aws/installing-restricted-networks-aws.adoc
[id="installation-aws-user-infra-bootstrap_{context}"]
= Initializing the bootstrap node on AWS with user-provisioned infrastructure
= Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
After you create all of the required infrastructure in Amazon Web Services (AWS),
you can install the cluster.
you can start the bootstrap sequence that initializes the {product-title} control plane.
.Prerequisites
* Configure an AWS account.
* Generate the Ignition config files for your cluster.
* Create and configure a VPC and associated subnets in AWS.
* Create and configure DNS, load balancers, and listeners in AWS.
* Create control plane and compute roles.
* Create the bootstrap machine.
* Create the control plane machines.
* If you plan to manually manage the worker machines, create the worker machines.
* You configured an AWS account.
* You added your AWS keys and region to your local AWS profile by running `aws configure`.
* You generated the Ignition config files for your cluster.
* You created and configured a VPC and associated subnets in AWS.
* You created and configured DNS, load balancers, and listeners in AWS.
* You created the security groups and roles required for your cluster in AWS.
* You created the bootstrap machine.
* You created the control plane machines.
* You created the worker nodes.
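The AWS profile prerequisite in the list above can also be satisfied non-interactively with `aws configure set` instead of the interactive `aws configure` prompts; a sketch, with angle-bracket placeholders for your own values:

```terminal
$ aws configure set aws_access_key_id <access_key_id>
$ aws configure set aws_secret_access_key <secret_access_key>
$ aws configure set region <aws_region>
```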
.Procedure
. Change to the directory that contains the installation program and run the
following command:
. Change to the directory that contains the installation program and start the bootstrap process that initializes the {product-title} control plane:
+
[source,terminal]
----
@@ -35,5 +35,20 @@ stored the installation files in.
<2> To view different installation details, specify `warn`, `debug`, or
`error` instead of `info`.
+
If the command exits without a `FATAL` warning, your production control plane
has initialized.
.Example output
[source,terminal]
----
INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s
----
+
If the command exits without a `FATAL` warning, your {product-title} control plane
has initialized.
+
[NOTE]
====
After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
====
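The footnotes in the procedure above refer to the installer's bootstrap wait command; a sketch, assuming the standard installer binary name and with `<installation_directory>` as a placeholder:

```terminal
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> --log-level info
```

As noted above, you can pass `warn`, `debug`, or `error` instead of `info` to change the level of detail.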

View File

@@ -16,9 +16,11 @@ After you complete the initial Operator configuration for the cluster, remove th
. Delete the bootstrap resources. If you used the CloudFormation template,
link:https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html[delete its stack]:
** Delete the stack by using the AWS CLI:
+
[source,terminal]
----
$ aws cloudformation delete-stack --stack-name <name> <1>
----
<1> `<name>` is the name of your bootstrap stack.
** Delete the stack by using the link:https://console.aws.amazon.com/cloudformation/[AWS CloudFormation console].

View File

@@ -15,16 +15,16 @@ user-provisioned infrastructure, monitor the deployment to completion.
.Prerequisites
* Removed the bootstrap node for an {product-title} cluster on user-provisioned AWS infrastructure.
* Install the `oc` CLI and log in.
* You removed the bootstrap node for an {product-title} cluster on user-provisioned AWS infrastructure.
* You installed the `oc` CLI.
.Procedure
ifdef::restricted[]
. Complete
. From the directory that contains the installation program, complete
endif::restricted[]
ifndef::restricted[]
* Complete
* From the directory that contains the installation program, complete
endif::restricted[]
the cluster installation:
+
@@ -38,7 +38,13 @@ stored the installation files in.
.Example output
[source,terminal]
----
INFO Waiting up to 30m0s for the cluster to initialize...
INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s
----
+
[IMPORTANT]

View File

@@ -6,12 +6,20 @@
[id="installation-aws-user-infra-requirements_{context}"]
= Required AWS infrastructure components
To install {product-title} on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their
supporting infrastructure.
To install {product-title} on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.
For more information about the integration testing for different platforms, see the link:https://access.redhat.com/articles/4128421[OpenShift Container Platform 4.x Tested Integrations] page.
You can use the provided Cloud Formation templates to create this infrastructure, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the Cloud Formation templates for more details about how the components interrelate.
By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:
* An AWS Virtual Private Cloud (VPC)
* Networking and load balancing components
* Security groups and roles
* An {product-title} bootstrap node
* {product-title} control plane nodes
* An {product-title} compute node
Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.
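Each stack in the list above is launched the same way with the AWS CLI; a hedged sketch, where the stack name and local file names are placeholders rather than the documented template names:

```terminal
$ aws cloudformation create-stack \
    --stack-name <cluster_name>-vpc \
    --template-body file://<vpc_template>.yaml \
    --parameters file://<vpc_parameters>.json
$ aws cloudformation describe-stacks --stack-name <cluster_name>-vpc
```

Stacks that create IAM resources, such as the security and roles stack, additionally require the `--capabilities CAPABILITY_NAMED_IAM` flag.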
[id="installation-aws-user-infra-cluster-machines_{context}"]
== Cluster machines
@@ -28,7 +36,7 @@ control plane initializes and you can access the cluster API by using the `oc`
command line interface.
////
-You can use the following instance types for the cluster machines with the provided Cloud Formation templates.
+You can use the following instance types for the cluster machines with the provided CloudFormation templates.
[IMPORTANT]
@@ -527,7 +535,7 @@ a `AWS::EC2::SecurityGroupIngress` resource.
.Roles and instance profiles
You must grant the machines permissions in AWS. The provided CloudFormation
-templates grant the machines permission the following `AWS::IAM::Role` objects
+templates grant the machines `Allow` permissions for the following `AWS::IAM::Role` objects
and provide an `AWS::IAM::InstanceProfile` for each set of roles. If you do
not use the templates, you can grant the machines the following broad permissions
or the following individual permissions.


@@ -28,7 +28,7 @@ running cluster, use the `oc adm must-gather` command.
the bootstrap and control plane machines:
+
--
-** If you used installer-provisioned infrastructure, run the following command:
+** If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command:
+
[source,terminal]
----
@@ -41,7 +41,7 @@ For installer-provisioned infrastructure, the installation program stores
information about the cluster, so you do not specify the host names or IP
addresses.
-** If you used infrastructure that you provisioned yourself, run the following
+** If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following
command:
+
[source,terminal]


@@ -47,9 +47,9 @@ endif::bare-metal[]
.Prerequisites
-* An existing `install-config.yaml` file.
+* You have an existing `install-config.yaml` file.
// TODO: xref (../../installing/install_config/configuring-firewall.adoc#configuring-firewall)
-* Review the sites that your cluster requires access to and determine whether any need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. Add sites to the `Proxy` object's `spec.noProxy` field to bypass the proxy if necessary.
+* You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the `Proxy` object's `spec.noProxy` field to bypass the proxy if necessary.
+
[NOTE]
====


@@ -12,9 +12,9 @@ You can create either a wildcard record or specific records. While the following
.Prerequisites
* You deployed an {product-title} cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.
-* Install the OpenShift CLI (`oc`).
-* Install the `jq` package.
-* Download the AWS CLI and install it on your computer. See
+* You installed the OpenShift CLI (`oc`).
+* You installed the `jq` package.
+* You downloaded the AWS CLI and installed it on your computer. See
link:https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html[Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix)].
.Procedure


@@ -6,9 +6,9 @@
[id="installation-creating-aws-bootstrap_{context}"]
= Creating the bootstrap node in AWS
-You must create the bootstrap node in Amazon Web Services (AWS) to use during
-{product-title} cluster initialization. The easiest way to create this node is
-to modify the provided CloudFormation template.
+You must create the bootstrap node in Amazon Web Services (AWS) to use during {product-title} cluster initialization.
+You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your {product-title} installation requires.
[NOTE]
====
@@ -20,11 +20,12 @@ have to contact Red Hat support with your installation logs.
.Prerequisites
-* Configure an AWS account.
-* Generate the Ignition config files for your cluster.
-* Create and configure a VPC and associated subnets in AWS.
-* Create and configure DNS, load balancers, and listeners in AWS.
-* Create control plane and compute roles.
+* You configured an AWS account.
+* You added your AWS keys and region to your local AWS profile by running `aws configure`.
+* You generated the Ignition config files for your cluster.
+* You created and configured a VPC and associated subnets in AWS.
+* You created and configured DNS, load balancers, and listeners in AWS.
+* You created the security groups and roles required for your cluster in AWS.
.Procedure
@@ -61,14 +62,15 @@ address that the bootstrap machine can reach.
----
$ aws s3 mb s3://<cluster-name>-infra <1>
----
-<1> `<cluster-name>-infra` is the bucket name.
+<1> `<cluster-name>-infra` is the bucket name. When creating the `install-config.yaml` file, replace `<cluster-name>` with the name specified for the cluster.
.. Upload the `bootstrap.ign` Ignition config file to the bucket:
+
[source,terminal]
----
-$ aws s3 cp bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign
+$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign <1>
----
<1> For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
.. Verify that the file uploaded:
+
@@ -185,7 +187,7 @@ deploying the cluster to an AWS GovCloud region.
section of this topic and save it as a YAML file on your computer. This template
describes the bootstrap machine that your cluster requires.
-. Launch the template:
+. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:
+
[IMPORTANT]
====
@@ -197,7 +199,7 @@ You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
--parameters file://<parameters>.json <3>
---capabilities CAPABILITY_NAMED_IAM
+--capabilities CAPABILITY_NAMED_IAM <4>
----
<1> `<name>` is the name for the CloudFormation stack, such as `cluster-bootstrap`.
You need the name of this stack if you remove the cluster.
@@ -205,6 +207,13 @@ You need the name of this stack if you remove the cluster.
YAML file that you saved.
<3> `<parameters>` is the relative path to and name of the CloudFormation
parameters JSON file.
<4> You must explicitly declare the `CAPABILITY_NAMED_IAM` capability because the provided template creates some `AWS::IAM::Role` and `AWS::IAM::InstanceProfile` resources.
+
.Example output
[source,terminal]
----
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83
----
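The stack name that you need for later `describe-stacks` and `delete-stack` calls is embedded in the returned ARN between the first and second slash. A minimal sketch of recovering it from a saved ARN, using the example ARN above:

```shell
# Stack ARNs have the form arn:aws:cloudformation:<region>:<account>:stack/<name>/<id>;
# splitting on "/" makes the stack name the second field.
arn='arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83'
stack_name="$(echo "$arn" | cut -d/ -f2)"
echo "$stack_name"
```

This avoids retyping the stack name by hand when you script the later `aws cloudformation describe-stacks` checks.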
. Confirm that the template components exist:
+


@@ -6,9 +6,14 @@
[id="installation-creating-aws-control-plane_{context}"]
= Creating the control plane machines in AWS
-You must create the control plane machines in Amazon Web Services (AWS) for your
-cluster to use. The easiest way to create these nodes is
-to modify the provided CloudFormation template.
+You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.
+You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.
[IMPORTANT]
====
The CloudFormation template creates a stack that represents three control plane nodes.
====
[NOTE]
====
@@ -20,12 +25,13 @@ have to contact Red Hat support with your installation logs.
.Prerequisites
-* Configure an AWS account.
-* Generate the Ignition config files for your cluster.
-* Create and configure a VPC and associated subnets in AWS.
-* Create and configure DNS, load balancers, and listeners in AWS.
-* Create control plane and compute roles.
-* Create the bootstrap machine.
+* You configured an AWS account.
+* You added your AWS keys and region to your local AWS profile by running `aws configure`.
+* You generated the Ignition config files for your cluster.
+* You created and configured a VPC and associated subnets in AWS.
+* You created and configured DNS, load balancers, and listeners in AWS.
+* You created the security groups and roles required for your cluster in AWS.
+* You created the bootstrap machine.
.Procedure
@@ -198,7 +204,7 @@ describes the control plane machines that your cluster requires.
add that instance type to the `MasterInstanceType.AllowedValues` parameter
in the CloudFormation template.
-. Launch the template:
+. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:
+
[IMPORTANT]
====
@@ -217,6 +223,17 @@ You need the name of this stack if you remove the cluster.
YAML file that you saved.
<3> `<parameters>` is the relative path to and name of the CloudFormation
parameters JSON file.
+
.Example output
[source,terminal]
----
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b
----
+
[NOTE]
====
The CloudFormation template creates a stack that represents three control plane nodes.
====
. Confirm that the template components exist:
+


@@ -6,12 +6,11 @@
[id="installation-creating-aws-dns_{context}"]
= Creating networking and load balancing components in AWS
-You must configure networking and load balancing (classic or network) in Amazon Web Services (AWS) for your
-{product-title} cluster to use. The easiest way to create these components is
-to modify the provided CloudFormation template, which also creates a hosted zone
-and subnet tags.
+You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your {product-title} cluster can use.
-You can run the template multiple times within a single VPC.
+You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your {product-title} cluster requires. The template also creates a hosted zone and subnet tags.
+You can run the template multiple times within a single Virtual Private Cloud (VPC).
[NOTE]
====
@@ -23,29 +22,31 @@ have to contact Red Hat support with your installation logs.
.Prerequisites
-* Configure an AWS account.
-* Generate the Ignition config files for your cluster.
-* Create and configure a VPC and associated subnets in AWS.
+* You configured an AWS account.
+* You added your AWS keys and region to your local AWS profile by running `aws configure`.
+* You generated the Ignition config files for your cluster.
+* You created and configured a VPC and associated subnets in AWS.
.Procedure
-. Obtain the Hosted Zone ID for the Route 53 zone that you specified in the
-`install-config.yaml` file for your cluster. You can obtain this ID from the
-AWS console or by running the following command:
-+
-[IMPORTANT]
-====
-You must enter the command on a single line.
-====
+. Obtain the hosted zone ID for the Route 53 base domain that you specified in the
+`install-config.yaml` file for your cluster. You can obtain details about your hosted zone by running the following command:
+
[source,terminal]
----
-$ aws route53 list-hosted-zones-by-name |
-jq --arg name "<route53_domain>." \ <1>
--r '.HostedZones | .[] | select(.Name=="\($name)") | .Id'
+$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> <1>
----
<1> For the `<route53_domain>`, specify the Route 53 base domain that you used
when you generated the `install-config.yaml` file for the cluster.
+
.Example output
[source,terminal]
----
-mycluster.example.com. False 100
+HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10
----
+
In the example output, the hosted zone ID is `Z21IXYZABCZ2A4`.
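If you want to capture just the ID in a script, the third whitespace-separated field of the `HOSTEDZONES` line holds the `/hostedzone/<ID>` path. A sketch that parses the example line above:

```shell
# Parse the hosted zone ID out of the example `list-hosted-zones-by-name` output;
# the last path component after "/hostedzone/" is the ID itself.
line='HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10'
zone_id="$(echo "$line" | awk '{print $3}' | awk -F/ '{print $NF}')"
echo "$zone_id"
```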
. Create a JSON file that contains the parameter values that the template
requires:
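As a shape reference only: a CloudFormation parameters file is a JSON array of `ParameterKey`/`ParameterValue` objects. The key names below are illustrative placeholders, so match them to the parameters that your saved template actually declares:

```json
[
  {
    "ParameterKey": "ClusterName",
    "ParameterValue": "mycluster"
  },
  {
    "ParameterKey": "HostedZoneId",
    "ParameterValue": "Z21IXYZABCZ2A4"
  }
]
```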
@@ -116,7 +117,7 @@ describes the networking and load balancing objects that your cluster requires.
If you are deploying your cluster to an AWS government or secret region, you must update the `InternalApiServerRecord` in the CloudFormation template to use `CNAME` records. Records of type `ALIAS` are not supported for AWS government regions.
====
-. Launch the template:
+. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:
+
[IMPORTANT]
====
@@ -128,7 +129,7 @@ You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
--parameters file://<parameters>.json <3>
---capabilities CAPABILITY_NAMED_IAM
+--capabilities CAPABILITY_NAMED_IAM <4>
----
<1> `<name>` is the name for the CloudFormation stack, such as `cluster-dns`.
You need the name of this stack if you remove the cluster.
@@ -136,6 +137,13 @@ You need the name of this stack if you remove the cluster.
YAML file that you saved.
<3> `<parameters>` is the relative path to and name of the CloudFormation
parameters JSON file.
<4> You must explicitly declare the `CAPABILITY_NAMED_IAM` capability because the provided template creates some `AWS::IAM::Role` resources.
+
.Example output
[source,terminal]
----
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183
----
. Confirm that the template components exist:
+


@@ -6,9 +6,9 @@
[id="installation-creating-aws-security_{context}"]
= Creating security group and roles in AWS
-You must create security groups and roles in Amazon Web Services (AWS) for your
-{product-title} cluster to use. The easiest way to create these components is
-to modify the provided CloudFormation template.
+You must create security groups and roles in Amazon Web Services (AWS) for your {product-title} cluster to use.
+You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your {product-title} cluster requires.
[NOTE]
====
@@ -20,9 +20,10 @@ have to contact Red Hat support with your installation logs.
.Prerequisites
-* Configure an AWS account.
-* Generate the Ignition config files for your cluster.
-* Create and configure a VPC and associated subnets in AWS.
+* You configured an AWS account.
+* You added your AWS keys and region to your local AWS profile by running `aws configure`.
+* You generated the Ignition config files for your cluster.
+* You created and configured a VPC and associated subnets in AWS.
.Procedure
@@ -68,7 +69,7 @@ the VPC.
section of this topic and save it as a YAML file on your computer. This template
describes the security groups and roles that your cluster requires.
-. Launch the template:
+. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:
+
[IMPORTANT]
====
@@ -80,7 +81,7 @@ You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml <2>
--parameters file://<parameters>.json <3>
---capabilities CAPABILITY_NAMED_IAM
+--capabilities CAPABILITY_NAMED_IAM <4>
----
<1> `<name>` is the name for the CloudFormation stack, such as `cluster-sec`.
You need the name of this stack if you remove the cluster.
@@ -88,6 +89,13 @@ You need the name of this stack if you remove the cluster.
YAML file that you saved.
<3> `<parameters>` is the relative path to and name of the CloudFormation
parameters JSON file.
<4> You must explicitly declare the `CAPABILITY_NAMED_IAM` capability because the provided template creates some `AWS::IAM::Role` and `AWS::IAM::InstanceProfile` resources.
+
.Example output
[source,terminal]
----
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db
----
. Confirm that the template components exist:
+


@@ -6,10 +6,11 @@
[id="installation-creating-aws-vpc_{context}"]
= Creating a VPC in AWS
-You must create a VPC in Amazon Web Services (AWS) for your {product-title}
+You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your {product-title}
cluster to use. You can customize the VPC to meet your requirements, including
-VPN and route tables. The easiest way to create the VPC is to modify the
-provided CloudFormation template.
+VPN and route tables.
+You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
[NOTE]
====
@@ -21,8 +22,9 @@ have to contact Red Hat support with your installation logs.
.Prerequisites
-* Configure an AWS account.
-* Generate the Ignition config files for your cluster.
+* You configured an AWS account.
+* You added your AWS keys and region to your local AWS profile by running `aws configure`.
+* You generated the Ignition config files for your cluster.
.Procedure
@@ -57,7 +59,7 @@ requires:
section of this topic and save it as a YAML file on your computer. This template
describes the VPC that your cluster requires.
-. Launch the template:
+. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:
+
[IMPORTANT]
====
@@ -76,6 +78,12 @@ You need the name of this stack if you remove the cluster.
YAML file that you saved.
<3> `<parameters>` is the relative path to and name of the CloudFormation
parameters JSON file.
+
.Example output
[source,terminal]
----
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
----
. Confirm that the template components exist:
+


@@ -11,13 +11,13 @@ If you do not plan to automatically create worker nodes by using a MachineSet,
////
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.
-The easiest way to manually create these nodes is to modify the provided
-CloudFormation template.
+You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.
[IMPORTANT]
====
-The CloudFormation template creates a stack that represents one worker machine.
-You must create a stack for each worker machine.
+The CloudFormation template creates a stack that represents one worker node.
+You must create a stack for each worker node.
====
[NOTE]
@@ -30,13 +30,14 @@ have to contact Red Hat support with your installation logs.
.Prerequisites
-* Configure an AWS account.
-* Generate the Ignition config files for your cluster.
-* Create and configure a VPC and associated subnets in AWS.
-* Create and configure DNS, load balancers, and listeners in AWS.
-* Create control plane and compute roles.
-* Create the bootstrap machine.
-* Create the control plane machines.
+* You configured an AWS account.
+* You added your AWS keys and region to your local AWS profile by running `aws configure`.
+* You generated the Ignition config files for your cluster.
+* You created and configured a VPC and associated subnets in AWS.
+* You created and configured DNS, load balancers, and listeners in AWS.
+* You created the security groups and roles required for your cluster in AWS.
+* You created the bootstrap machine.
+* You created the control plane machines.
.Procedure
@@ -144,8 +145,7 @@ describes the networking objects and load balancers that your cluster requires.
add that instance type to the `WorkerInstanceType.AllowedValues` parameter
in the CloudFormation template.
-. Create a worker stack.
-.. Launch the template:
+. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:
+
[IMPORTANT]
====
@@ -158,22 +158,32 @@ $ aws cloudformation create-stack --stack-name <name> <1>
--template-body file://<template>.yaml \ <2>
--parameters file://<parameters>.json <3>
----
-<1> `<name>` is the name for the CloudFormation stack, such as `cluster-workers`.
+<1> `<name>` is the name for the CloudFormation stack, such as `cluster-worker-1`.
You need the name of this stack if you remove the cluster.
<2> `<template>` is the relative path to and name of the CloudFormation template
YAML file that you saved.
<3> `<parameters>` is the relative path to and name of the CloudFormation
parameters JSON file.
+
.Example output
[source,terminal]
----
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59
----
+
[NOTE]
====
The CloudFormation template creates a stack that represents one worker node.
====
-.. Confirm that the template components exist:
+. Confirm that the template components exist:
+
[source,terminal]
----
$ aws cloudformation describe-stacks --stack-name <name>
----
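Because each worker needs its own stack with a distinct name, the repeated launches lend themselves to a loop. A sketch that only echoes the commands it would run — the template and parameter file names are placeholders, so substitute your own and drop the `echo` once you have verified the output:

```shell
# Print one create-stack invocation per worker node; each stack reuses the same
# template and parameters but receives a unique name.
cmds="$(for i in 1 2 3; do
  echo aws cloudformation create-stack --stack-name "cluster-worker-${i}" \
    --template-body file://worker.yaml --parameters file://worker-params.json
done)"
printf '%s\n' "$cmds"
```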
-. Continue to create worker stacks until you have created enough worker machines
-for your cluster.
+. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.
+
[IMPORTANT]
====


@@ -61,7 +61,7 @@ endif::[]
ifdef::aws,gcp[]
The Ignition config files contain a unique cluster identifier that you can use to
-uniquely identify your cluster in {cp-first} ({cp}). The provided {cp-template}
+uniquely identify your cluster in {cp-first} ({cp}). The infrastructure name is also used to locate the appropriate {cp} resources during an {product-title} installation. The provided {cp-template}
templates contain references to this infrastructure name, so you must extract
it.
endif::aws,gcp[]
@@ -80,9 +80,9 @@ endif::vsphere[]
.Prerequisites
-* Obtain the {product-title} installation program and the pull secret for your cluster.
-* Generate the Ignition config files for your cluster.
-* Install the `jq` package.
+* You obtained the {product-title} installation program and the pull secret for your cluster.
+* You generated the Ignition config files for your cluster.
+* You installed the `jq` package.
.Procedure
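The procedure in this module reads the infrastructure name out of the installer's `metadata.json` file, which is why `jq` is listed as a prerequisite. A sketch of that extraction against a made-up metadata snippet — the `infraID` field name is assumed from the installer's metadata format, and `sed` stands in for the equivalent `jq -r .infraID metadata.json` to keep the sketch dependency-free:

```shell
# Pull the infraID value out of a metadata.json-style document.
json='{"clusterName":"mycluster","infraID":"mycluster-vlgmz"}'
infra_id="$(echo "$json" | sed 's/.*"infraID":"\([^"]*\)".*/\1/')"
echo "$infra_id"
```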


@@ -15,17 +15,16 @@ installation program needs to deploy your cluster.
.Prerequisites
-* Obtain the {product-title} installation program and the pull secret for your
-cluster.
+* You obtained the {product-title} installation program for user-provisioned infrastructure and the pull secret for your cluster.
ifdef::restricted[]
For a restricted network installation, these files are on your mirror host.
endif::restricted[]
-* Check that you are deploying your cluster to a region with an accompanying {op-system-first} AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the `install-config.yaml` file manually.
+* You checked that you are deploying your cluster to a region with an accompanying {op-system-first} AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the `install-config.yaml` file manually.
.Procedure
-. Obtain the `install-config.yaml` file.
-.. Run the following command:
+. Create the `install-config.yaml` file.
+.. Change to the directory that contains the installation program and run the following command:
+
[source,terminal]
----
@@ -54,6 +53,11 @@ For production {product-title} clusters on which you want to perform installatio
... If you do not have an AWS profile stored on your computer, enter the AWS
access key ID and secret access key for the user that you configured to run the
installation program.
+
[NOTE]
====
The AWS access key ID and secret access key are stored in `~/.aws/credentials` in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.
====
... Select the AWS region to deploy the cluster to.
... Select the base domain for the Route 53 service that you configured for your cluster.
... Enter a descriptive name for your cluster.
@@ -63,18 +67,6 @@ ifdef::openshift-origin[]
This field is optional.
endif::[]
-. Edit the `install-config.yaml` file to set the number of compute replicas, which are also known as worker
-replicas, to `0`, as shown in the following `compute` stanza:
-+
-[source,yaml]
-----
-compute:
-- hyperthreading: Enabled
-  name: worker
-  platform: {}
-  replicas: 0
-----
ifdef::restricted[]
. Edit the `install-config.yaml` file to provide the additional information that
is required for an installation in a restricted network.


@@ -15,7 +15,7 @@ $ cat ~/<installation_directory>/.openshift_install.log <1>
----
<1> For `installation_directory`, specify the same directory you specified when you ran `./openshift-install create cluster`.
-* Re-run the installation program with `--log-level=debug`:
+* Change to the directory that contains the installation program and re-run it with `--log-level=debug`:
+
[source,terminal]
----


@@ -135,7 +135,7 @@ endif::osp+restricted[]
. Create the `install-config.yaml` file.
+
-.. Run the following command:
+.. Change to the directory that contains the installation program and run the following command:
+
[source,terminal]
----


@@ -138,7 +138,7 @@ environment variables
** The `gcloud cli` default credentials
endif::gcp[]
-. Run the installation program:
+. Change to the directory that contains the installation program and initialize the cluster deployment:
+
[source,terminal]
----
@@ -181,6 +181,11 @@ ifdef::aws[]
.. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS
access key ID and secret access key for the user that you configured to run the
installation program.
+
[NOTE]
====
The AWS access key ID and secret access key are stored in `~/.aws/credentials` in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.
====
.. Select the AWS region to deploy the cluster to.
.. Select the base domain for the Route 53 service that you configured for your cluster.
endif::aws[]
@@ -327,6 +332,22 @@ When the cluster deployment completes, directions for accessing your cluster,
including a link to its web console and credentials for the `kubeadmin` user,
display in your terminal.
+
.Example output
[source,terminal]
----
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
----
+
[NOTE]
====
The cluster access and credential information also outputs to `<installation_directory>/.openshift_install.log` when an installation succeeds.
====
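Because the same credentials land in `.openshift_install.log`, you can recover them later without rerunning the installer. A sketch that parses the password out of a log line shaped like the example output above (the log path and line format are assumed to match your installation directory):

```shell
# Extract the quoted password from an installer log line; against a real log,
# first select the line with: grep 'password:' <installation_directory>/.openshift_install.log
line='INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"'
password="$(echo "$line" | sed 's/.*password: "\([^"]*\)".*/\1/')"
echo "$password"
```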
+
[IMPORTANT]
====
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for _Recovering from expired control plane certificates_ for more information.
@@ -341,6 +362,11 @@ program creates. Both are required to delete the cluster.
ifdef::aws[]
. Optional: Remove or disable the `AdministratorAccess` policy from the IAM
account that you used to install the cluster.
+
[NOTE]
====
The elevated permissions provided by the `AdministratorAccess` policy are required only during installation.
====
endif::aws[]
ifdef::gcp[]


@@ -53,14 +53,15 @@ endif::restricted[]
.Prerequisites
-ifdef::ibm-z,ibm-z-kvm[* A machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space]
-ifndef::ibm-z,ibm-z-kvm[* A computer that runs Linux or macOS, with 500 MB of local disk space]
+ifdef::ibm-z,ibm-z-kvm[* You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space]
+ifndef::ibm-z,ibm-z-kvm[* You have a computer that runs Linux or macOS, with 500 MB of local disk space]
.Procedure
ifndef::openshift-origin[]
. Access the link:https://cloud.redhat.com/openshift/install[Infrastructure Provider]
page on the {cloud-redhat-com} site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
. Select your infrastructure provider.
. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
endif::[]
ifdef::openshift-origin[]
@@ -70,9 +71,7 @@ endif::[]
+
[IMPORTANT]
====
-The installation program creates several files on the computer that you use to
-install your cluster. You must keep both the installation program and the files
-that the installation program creates after you finish installing the cluster.
+The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
====
+
[IMPORTANT]


@@ -120,8 +120,7 @@ cluster.
* If you provision and manage the infrastructure for your cluster, you must
provide all of the cluster infrastructure and resources, including the
bootstrap machine, networking, load balancing, storage, and individual cluster
-machines. You cannot use the advanced machine management and scaling capabilities
-that an installer-provisioned infrastructure cluster offers.
+machines.
You use three sets of files during installation: an installation configuration
file that is named `install-config.yaml`, Kubernetes manifests, and Ignition


@@ -16,7 +16,7 @@ On previous versions of {op-system}, disk encryption was configured by specifyin
. Check to see if TPM v2 encryption needs to be enabled in the BIOS on each node.
This is required on most Dell systems. Check the manual for your computer.
-. Generate the Kubernetes manifests for the cluster:
+. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----


@@ -25,7 +25,7 @@ It is best to only add kernel arguments with this procedure if they are needed t
.Procedure
-. Generate the Kubernetes manifests for the cluster:
+. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----


@@ -32,7 +32,7 @@ cluster.
.Procedure
-. From the computer that you used to install the cluster, run the following command:
+. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:
+
[source,terminal]
----


@@ -62,6 +62,8 @@ endif::[]
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.
[IMPORTANT]
====
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for _Recovering from expired control plane certificates_ for more information.
@@ -69,15 +71,15 @@ The Ignition config files that the installation program generates contain certif
.Prerequisites
* Obtain the {product-title} installation program.
* You obtained the {product-title} installation program.
ifdef::restricted,baremetal-restricted[]
For a restricted network installation, these files are on your mirror host.
endif::restricted,baremetal-restricted[]
* Create the `install-config.yaml` installation configuration file.
* You created the `install-config.yaml` installation configuration file.
.Procedure
. Generate the Kubernetes manifests for the cluster:
. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
@@ -87,14 +89,12 @@ $ ./openshift-install create manifests --dir=<installation_directory> <1>
.Example output
[source,terminal]
----
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift
----
<1> For `<installation_directory>`, specify the installation directory that
contains the `install-config.yaml` file you created.
+
Because you create your own compute machines later in the installation process,
you can safely ignore this warning.
ifdef::aws,azure,gcp[]
. Remove the Kubernetes manifest files that define the control plane machines:
@@ -147,11 +147,11 @@ ifdef::baremetal,baremetal-restricted[]
If you are running a three-node cluster, skip the following step to allow the masters to be schedulable.
====
endif::baremetal,baremetal-restricted[]
. Modify the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file to prevent pods from being scheduled on the control plane machines:
. Check that the `mastersSchedulable` parameter in the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` Kubernetes manifest file is set to `false`. This setting prevents pods from being scheduled on the control plane machines:
+
--
.. Open the `<installation_directory>/manifests/cluster-scheduler-02-config.yml` file.
.. Locate the `mastersSchedulable` parameter and set its value to `False`.
.. Locate the `mastersSchedulable` parameter and ensure that it is set to `false`.
.. Save and exit the file.
--
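The steps above can be sketched as a quick scripted check. This is a hypothetical example that runs against a locally created sample manifest; in a real installation you would point `MANIFEST` at `<installation_directory>/manifests/cluster-scheduler-02-config.yml` instead of generating one.

```shell
# Create a sample scheduler manifest to stand in for the real file.
MANIFEST="$(mktemp)"
cat > "$MANIFEST" <<'EOF'
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
EOF

# Exactly one matching line means pods stay off the control plane machines.
RESULT=$(grep -c 'mastersSchedulable: false' "$MANIFEST")
echo "matching lines: $RESULT"
rm -f "$MANIFEST"
```

If the count is zero, open the file and set `mastersSchedulable` to `false` before continuing.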
@@ -263,7 +263,7 @@ $ export RESOURCE_GROUP=<resource_group> <1>
<1> All resources created in this Azure deployment exist as part of a link:https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups[resource group]. The resource group name is also based on the `INFRA_ID`, in the form of `<cluster_name>-<random_string>-rg`. This is the value of the `.status.platformStatus.azure.resourceGroupName` attribute from the `manifests/cluster-infrastructure-02-config.yml` file.
endif::azure-user-infra[]
. Obtain the Ignition config files:
. To create the Ignition configuration files, run the following command from the directory that contains the installation program:
+
[source,terminal]
----
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// *installing/validating-an-installation.adoc
// *installing/installing_aws/installing-aws-user-infra.adoc
// *installing/installing_aws/installing-restricted-networks-aws.adoc
[id="logging-in-by-using-the-web-console_{context}"]
= Logging in to the cluster by using the web console
@@ -10,7 +12,7 @@ The `kubeadmin` user exists by default after an {product-title} installation. Yo
.Prerequisites
* You have access to the installation host.
* You have completed a cluster installation and all cluster Operators are available.
* You completed a cluster installation and all cluster Operators are available.
.Procedure
@@ -18,12 +20,12 @@ The `kubeadmin` user exists by default after an {product-title} installation. Yo
+
[source,terminal]
----
$ cat <install_dir>/auth/kubeadmin-password
$ cat <installation_directory>/auth/kubeadmin-password
----
+
[NOTE]
====
Alternatively, you can obtain the `kubeadmin` password from the `<install_dir>/.openshift_install.log` log file on the installation host.
Alternatively, you can obtain the `kubeadmin` password from the `<installation_directory>/.openshift_install.log` log file on the installation host.
====
. List the {product-title} web console route:
@@ -35,7 +37,7 @@ $ oc get routes -n openshift-console | grep 'console-openshift'
+
[NOTE]
====
Alternatively, you can obtain the {product-title} route from the `<install_dir>/.openshift_install.log` log file on the installation host.
Alternatively, you can obtain the {product-title} route from the `<installation_directory>/.openshift_install.log` log file on the installation host.
====
+
.Example output
@@ -25,7 +25,7 @@ administrator-level credential secret in the cluster `kube-system` namespace.
.Procedure
ifdef::aws[]
//credentialsMode=Manual only verified supported on AWS in 4.6 GA
. Create the `install-config.yaml` file:
. Change to the directory that contains the installation program and create the `install-config.yaml` file:
+
[source,terminal]
----
@@ -47,7 +47,7 @@ compute:
----
<1> This line is added to set the `credentialsMode` parameter to `Manual`.
endif::aws[]
. Run the {product-title} installer to generate manifests:
. To generate the manifests, run the following command from the directory that contains the installation program:
+
[source,terminal]
----
@@ -83,7 +83,7 @@ This removal prevents your `admin` credential from being stored in the cluster:
$ rm mycluster/openshift/99_cloud-creds-secret.yaml
----
. Obtain the {product-title} release image your `openshift-install` binary is built to use:
. From the directory that contains the installation program, obtain details of the {product-title} release image that your `openshift-install` binary is built to use:
+
[source,terminal]
----
@@ -191,7 +191,7 @@ endif::google-cloud-platform[]
. Create YAML files for secrets in the `openshift-install` manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the `spec.secretRef` for each `credentialsRequest`. The format for the secret data varies for each cloud provider.
. Proceed with cluster creation:
. From the directory that contains the installation program, proceed with your cluster creation:
+
[source,terminal]
----
@@ -5,21 +5,15 @@
// * installing/installing_gcp/manually-creating-iam-gcp.adoc
[id="mint-mode_{context}"]
= Mint Mode
= Mint mode
Mint Mode is supported for AWS, GCP, and Azure.
Mint mode is the default and recommended Cloud Credential Operator (CCO) credentials mode for {product-title}. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS, GCP, and Azure.
The default and recommended best practice for running {product-title} is to run
the installer with an administrator-level cloud credential. The `admin` credential is
stored in the `kube-system` namespace, and then used by the Cloud Credential
Operator to process the `CredentialsRequest` objects in the cluster and create new users
for each with specific permissions.
In mint mode, the `admin` credential is stored in the `kube-system` namespace and then used by the CCO to process the `CredentialsRequest` objects in the cluster and create users for each with specific permissions.
The benefits of Mint Mode include:
The benefits of mint mode include:
* Each cluster component only has the permissions it requires.
* Automatic, on-going reconciliation for cloud credentials including upgrades,
which might require additional credentials or permissions.
* Each cluster component has only the permissions it requires
* Automatic, on-going reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades
One drawback is that Mint Mode requires `admin` credential storage in a cluster
`kube-system` secret.
One drawback is that mint mode requires `admin` credential storage in a cluster `kube-system` secret.
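To make the mechanism above concrete, here is a minimal, hypothetical `CredentialsRequest` of the kind the CCO processes in mint mode. The component name, namespace, secret name, and the AWS statement entries are illustrative placeholders, not taken from a real cluster component.

```shell
# Write an example CredentialsRequest; the CCO would mint a user with only
# these permissions and store its credentials at spec.secretRef.
cat <<'EOF' > example-credentials-request.yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-component
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: example-component-creds
    namespace: example-namespace
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:GetObject
      resource: "*"
EOF
COUNT=$(grep -c 'kind: CredentialsRequest' example-credentials-request.yaml)
echo "CredentialsRequest objects written: $COUNT"
```

Because each component declares only the actions it needs, the minted per-component users carry much narrower permissions than the stored `admin` credential.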
@@ -15,7 +15,7 @@ You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB
Create an Ingress Controller backed by an AWS NLB on a new cluster.
. Use the following command to create manifests:
. Change to the directory that contains the installation program and create the manifests:
+
[source,terminal]
----
@@ -40,7 +40,7 @@ endif::ignition-config[]
.Procedure
. Use the following command to create manifests:
. Change to the directory that contains the installation program and create the manifests:
+
[source,terminal]
----
@@ -16,7 +16,7 @@ procedure.
.Prerequisites
* A cluster on AWS with user-provisioned infrastructure.
* You have a cluster on AWS with user-provisioned infrastructure.
* For Amazon S3 storage, the secret is expected to contain two keys:
** `REGISTRY_STORAGE_S3_ACCESSKEY`
** `REGISTRY_STORAGE_S3_SECRETKEY`
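As a sketch of what that secret looks like, the script below assembles a Secret manifest by hand so the two required keys are visible. In practice you would typically run `oc create secret generic` instead; the secret name and namespace follow the image registry convention, and the key values here are fakes.

```shell
# Placeholder credentials -- never commit real keys.
ACCESS_KEY="EXAMPLEACCESSKEY"
SECRET_KEY="examplesecretkey"

# Secret data fields must be base64-encoded.
cat > registry-storage-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: image-registry-private-configuration-user
  namespace: openshift-image-registry
type: Opaque
data:
  REGISTRY_STORAGE_S3_ACCESSKEY: $(printf '%s' "$ACCESS_KEY" | base64)
  REGISTRY_STORAGE_S3_SECRETKEY: $(printf '%s' "$SECRET_KEY" | base64)
EOF
grep -q 'REGISTRY_STORAGE_S3_ACCESSKEY' registry-storage-secret.yaml && echo "secret manifest written"
```

The key names must match exactly, because the registry Operator reads them verbatim when configuring S3 storage.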