
Adding user doc/guide for AWS account and installation

This commit is contained in:
Stephen Cuppett
2019-01-09 12:13:17 -05:00
parent 6880b3e446
commit b383cd7545
25 changed files with 319 additions and 1 deletions

README.md

@@ -2,7 +2,7 @@
 ## Supported Platforms
-* AWS
+* [AWS](docs/user/aws/README.md)
 * [Libvirt with KVM](docs/dev/libvirt-howto.md) (development only)
 * OpenStack (experimental)

docs/user/aws/README.md (new file, 21 lines)

@@ -0,0 +1,21 @@
# AWS Account Set Up
This document is a guide for preparing a new AWS account for use with OpenShift. It walks through preparing the
account for a single cluster and notes the adjustments that may be needed to support additional clusters.
Follow the steps and links below to prepare your AWS account and install a cluster:
1. [Route53](route53.md)
2. [Limits](limits.md)
3. [IAM User](iam.md)
4. [Cluster Installation](install.md)
5. [IAM User: Revisited](iam_after.md)
## Reporting Issues
Please see the [Issue Tracker][issues] for current known issues.
If you do not find an existing issue covering the trouble you're having, please report a new one.
[issues]: https://github.com/openshift/installer/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+aws

docs/user/aws/iam.md (new file, 45 lines)

@@ -0,0 +1,45 @@
# IAM User
A new AWS account comes with a root user tied to the email address used to create the account. This is a highly
privileged account, and it should not be used beyond configuring the initial account and billing settings, creating an
initial set of users, and then locking it down.
Before proceeding with the OpenShift install, create a secondary IAM administrative user by following the steps
outlined here:
[AWS: Creating an IAM User in Your AWS Account][user-create]
## Step 1: Name User, Identify Programmatic Access
In this step, choose a name for the IAM user. The installer requires programmatic access to AWS (via a generated
access key), so be sure to check that box.
![IAM Create User Step 1](images/iam_create_user_step1.png)
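If you prefer to script this step, an equivalent sketch with the AWS CLI looks like the following (the user name
`openshift-installer` is only an example):

```console
[~]$ aws iam create-user --user-name openshift-installer
```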
## Step 2: Attach Administrative Policy
The AWS installer requires many permissions. A precise set of policies and services will be identified at a later date
so that a narrower policy can be created and attached. Until then, attach the predefined "AdministratorAccess" policy
for the installation to use.
![IAM Create User Step 2](images/iam_create_user_step2.png)
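The same policy can be attached from the AWS CLI by its ARN (again assuming the example user name from Step 1):

```console
[~]$ aws iam attach-user-policy \
      --user-name openshift-installer \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```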
## Step 3: Optional, Skip
Step 3 is optional, and we'll skip it.
## Step 4: Review Settings
Step 4 lets you review the settings you've selected. Make sure your screen reflects your chosen user name and the
AdministratorAccess policy.
![IAM Create User Step 4](images/iam_create_user_step4.png)
## Step 5: Acquire Access Key and Secret
In Step 5, you need to save the access key ID and secret access key values to configure your local machine to run
the installer. This step is your only opportunity to collect those values.
![IAM Create User Step 5](images/iam_create_user_step5.png)
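If you created the user with the CLI, the key pair can be generated with `aws iam create-access-key` instead. Either
way, one simple way to make the values available to the installer is `aws configure`, which writes them to
`~/.aws/credentials` (the key values below are placeholders):

```console
[~]$ aws iam create-access-key --user-name openshift-installer
[~]$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: us-east-1
Default output format [None]: json
```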
[user-create]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html

docs/user/aws/iam_after.md (new file, 12 lines)

@@ -0,0 +1,12 @@
# IAM User: Revisited
Once OpenShift is installed, the installer user no longer requires the AdministratorAccess policy. The user account
may be deleted, or its access keys can be revoked (or disabled until they are needed again). You may also opt to
replace the AdministratorAccess policy with ReadOnlyAccess. All of these steps are performed by revisiting IAM and
updating the user created earlier.
## Example: Remove AdministratorAccess, Attach ReadOnlyAccess
![IAM Remove AdministratorAccess](images/iam_after_remove_admin.png)
![IAM Attach ReadOnlyAccess](images/iam_after_add_readonly.png)
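The same cleanup can be done with the AWS CLI; a rough sketch, with a placeholder user name and access key ID:

```console
[~]$ aws iam detach-user-policy \
      --user-name openshift-installer \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
[~]$ aws iam attach-user-policy \
      --user-name openshift-installer \
      --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
[~]$ aws iam update-access-key \
      --user-name openshift-installer \
      --access-key-id AKIAXXXXXXXXXXXXXXXX \
      --status Inactive
```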

(18 binary image files added under docs/user/aws/images/; previews not shown.)
docs/user/aws/install.md (new file, 58 lines)

@@ -0,0 +1,58 @@
# Cluster Installation
At this point, you are ready to perform the OpenShift installation outlined [here][cloud-install] and begin at
Step 3: Download the Installer.
## Example
### Create Configuration
```console
[~]$ openshift-install-linux-amd64 create install-config
? SSH Public Key /home/user_id/.ssh/id_rsa.pub
? Platform aws
? Region us-east-1
? Base Domain openshiftcorp.com
? Cluster Name test
? Pull Secret [? for help]
```
### Create Cluster
```console
[~]$ openshift-install-linux-amd64 create cluster
INFO Waiting up to 30m0s for the Kubernetes API...
INFO API v1.11.0+85a0623 up
INFO Waiting up to 30m0s for the bootstrap-complete event...
INFO Destroying the bootstrap resources...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO Run 'export KUBECONFIG=/home/user/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p XXXX' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.test.openshiftcorp.com
INFO Login to the console with user: kubeadmin, password: XXXX
```
### Running Cluster
In Route53, there will be a new, private hosted zone (for internal lookups):
![Route53 private hosted zone](images/install_private_hosted_zone.png)
In EC2, there will be 6 running instances:
![EC2 instances after install](images/install_nodes.png)
The relationship of the EC2 instances, elastic load balancers (ELBs), and Route53 hosted zones is depicted below:
![Architecture relationship of ELBs and instances](images/install_nodes_elbs.png)
Nodes within the VPC use the internal DNS and the Router and Internal API load balancers. External/Internet access to
the cluster uses the Router and External API load balancers. Nodes are spread equally across 3 availability zones.
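One way to sanity-check the running cluster from the command line (the kubeconfig path matches the sample output
above; the `aws` query simply lists running instances and is not specific to OpenShift):

```console
[~]$ export KUBECONFIG=/home/user/auth/kubeconfig
[~]$ oc get nodes
[~]$ aws ec2 describe-instances \
      --filters Name=instance-state-name,Values=running \
      --query 'Reservations[].Instances[].[InstanceId,InstanceType,Placement.AvailabilityZone]' \
      --output table
```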
The OpenShift console is available via the kubeadmin login provided by the installer:
![OpenShift web console](images/install_console.png)
[cloud-install]: https://cloud.openshift.com/clusters/install

docs/user/aws/limits.md (new file, 77 lines)

@@ -0,0 +1,77 @@
# Limits
You can find a comprehensive list of the default AWS service limits published here:
[AWS Service Limits][service-limits]
Below, we identify the needs of an OpenShift cluster and how they impact some of those limits.
## S3
There is a default limit of 100 S3 buckets per account. The installation temporarily creates one bucket, and the
registry component creates a permanent bucket per cluster. This initially limits the number of clusters per account to
99. To support additional clusters, you must open a support case with AWS.
## VPC
Each cluster creates its own VPC. The default limit of 5 VPCs per region therefore allows 5 clusters. To have more
than 5 clusters in a region, you will need to increase this limit.
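To see how close an existing account already is to these two limits, you can count the buckets and VPCs currently in
use (these are plain AWS CLI queries, not installer commands):

```console
[~]$ aws s3 ls | wc -l
[~]$ aws ec2 describe-vpcs --query 'length(Vpcs)'
```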
## Elastic Network Interfaces (ENI)
The default installation creates 21 ENIs plus one per availability zone in the region (e.g. us-east-1: 21 + 6 = 27
ENIs). The default limit per region is 350. Additional ENIs are created for any additional machines and elastic load
balancers created by cluster usage and deployed workloads. A service limit increase may be required here to satisfy
the needs of additional clusters and deployed workloads.
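To plug your own region into the arithmetic above, you can list how many availability zones it exposes:

```console
[~]$ aws ec2 describe-availability-zones --region us-east-1 --query 'length(AvailabilityZones)'
6
```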
## Elastic IP (EIP)
For a single, default cluster, your account has the capacity required, with one exception: "EC2-VPC Elastic IPs". The
installer creates a public and a private subnet in each [availability zone within the region][availability-zones] to
provision the cluster in a highly available configuration. A separate [NAT Gateway][nat-gateways] is created in each
private subnet, and each NAT Gateway requires its own [elastic IP][elastic-ip]. The default limit of 5 is sufficient
for most regions and a single cluster; for the us-east-1 region, and for multiple clusters in any region, a higher
limit is required. Please see [this map][az-map] for a current list of regions with their availability zone counts. We
recommend selecting a region with 3 or more availability zones.
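You can check a region's current VPC Elastic IP limit from the CLI as well; the value returned reflects the account's
current limit (5 unless it has already been raised):

```console
[~]$ aws ec2 describe-account-attributes \
      --attribute-names vpc-max-elastic-ips \
      --query 'AccountAttributes[0].AttributeValues[0].AttributeValue' \
      --output text
```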
### Example: Using N. Virginia (us-east-1)
To use N. Virginia (us-east-1) for a new cluster, please submit a limit increase for VPC Elastic IPs similar to the
following in the support dashboard (to create more than one cluster, a higher limit will be necessary):
![Increase Elastic IP limit in AWS](images/support_increase_elastic_ip.png)
## NAT Gateway
The default limit for NAT Gateways is 5 per availability zone. This is sufficient for up to 5 clusters in a dedicated
account. If you intend to create more than 5 clusters, you will need to request an increase to this limit.
## VPC Gateway
The default limit of VPC Gateways (for S3 access) is 20. Each cluster will create a single S3 gateway endpoint within
the new VPC. If you intend to create more than 20 clusters, you will need to request an increase to this limit.
## Security Groups
Each cluster creates 10 distinct security groups. The default limit of 2,500 for new accounts allows for many clusters
to be created.
## Instance Limits
By default, a cluster creates 6 nodes (3 masters and 3 workers). Currently these are m4.large instances, which fit
within a new account's default limit. If you intend to start with more workers, enable autoscaling, run large
workloads, or use a different instance type, ensure you have enough remaining instance capacity within that instance
type's limit. If not, ask AWS to increase the limit via a support case.
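The account's current aggregate instance limit for a region can be read the same way (this is the account-level limit,
not a per-instance-type breakdown):

```console
[~]$ aws ec2 describe-account-attributes \
      --attribute-names max-instances \
      --query 'AccountAttributes[0].AttributeValues[0].AttributeValue' \
      --output text
```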
## Elastic Load Balancing (ELB/NLB)
By default, each cluster creates 2 network load balancers for the master API server (1 internal, 1 external) and a
single classic elastic load balancer for the router. Additional Kubernetes LoadBalancer Service objects will create
additional [load balancers][load-balancing].
[service-limits]: https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
[load-balancing]: https://aws.amazon.com/elasticloadbalancing/
[availability-zones]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
[nat-gateways]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
[elastic-ip]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
[az-map]: https://aws.amazon.com/about-aws/global-infrastructure/

docs/user/aws/route53.md (new file, 105 lines)

@@ -0,0 +1,105 @@
# Route53
The OpenShift installer uses Amazon's Route53 service to configure cluster DNS resolution and to provide name lookup
for the cluster from the outside world. To use OpenShift, you must have created a public hosted zone in Route53 in the
same account as your OpenShift cluster, and you must ensure the zone is "authoritative" for the domain. There are two
ways to do this, outlined below: using a root domain (e.g. `openshiftcorp.com`) or a subdomain (e.g.
`clusters.openshiftcorp.com`).
The sections below identify how to ensure your hosted zone is authoritative for the domain.
## Step 1: Acquire/Identify Domain
You may skip this step if you are using an existing domain and registrar; in a later step you will either move the
authoritative DNS to Route53 or submit a delegation request for a subdomain.
Route53 can also purchase domains for you and act as the registrar. If you let Route53 purchase a new domain, you can
skip the remainder of these steps: the domain is registered and its hosted zone is created correctly for you.
### Example: Purchasing a new domain
![Domain purchased in Route53 registrar](images/route53_registrar.png)
Later:
![Automatic hosted zone in Route53](images/route53_hosted_zone.png)
## Step 2: Create Public Hosted Zone
Whether using a root domain or a subdomain, you must create a public hosted zone.
[AWS: Creating a Public Hosted Zone][create-hosted-zone]
To use the root domain, you'd create the hosted zone with the value `openshiftcorp.com`. To use a subdomain, you'd
create a hosted zone with the value `clusters.openshiftcorp.com`. (Use appropriate domain values for your situation.)
### Example: Root Domain
![Create hosted zone in Route53](images/route53_create_hosted_zone.png)
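The hosted zone can also be created with the AWS CLI; the caller reference only needs to be a string that is unique
per request:

```console
[~]$ aws route53 create-hosted-zone \
      --name openshiftcorp.com \
      --caller-reference "openshiftcorp-$(date +%s)"
```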
## Step 3: Get Public Nameservers of Hosted Zone
For either a root domain `openshiftcorp.com` or a subdomain `clusters.openshiftcorp.com`, you must extract the new
authoritative nameservers from the hosted zone records.
[AWS: Getting the Name Servers for a Public Hosted Zone][get-hosted-zone-info]
### Example: Root Domain
![Get hosted zone info from Route53](images/route53_hosted_zone_info.png)
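The same nameserver values are available from the CLI; the hosted zone ID below is a placeholder for the ID returned
when the zone was created:

```console
[~]$ aws route53 get-hosted-zone --id Z3XXXXXXXXXXXX \
      --query 'DelegationSet.NameServers'
```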
## Step 4a: Root Domain - Update Registrar
Each registrar requires a slightly different procedure. Using the four nameserver values from the previous step,
update the registrar's nameserver records to point at the AWS Route53 nameservers.
If you have previously registered your root domain with AWS Route53 (in another account), you can follow the procedure
here:
[AWS: Adding or Changing Name Servers or Glue Records][set-glue-records]
If you are migrating your root domain to Route53, care should be taken to migrate any existing DNS records first:
[AWS: Making Amazon Route 53 the DNS Service for an Existing Domain][migrate-dns]
### Example
![Set nameservers in Route53](images/route53_set_nameservers.png)
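If the root domain is registered with Route53 itself, the registrar-side update can also be scripted with the
`route53domains` API (which is only served from us-east-1); substitute the four nameserver values from your own hosted
zone:

```console
[~]$ aws route53domains update-domain-nameservers \
      --region us-east-1 \
      --domain-name openshiftcorp.com \
      --nameservers Name=ns-124.awsdns-15.com Name=ns-1062.awsdns-04.org \
                    Name=ns-1603.awsdns-08.co.uk Name=ns-972.awsdns-57.net
```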
## Step 4b: Subdomain - Perform DNS Delegation
For a subdomain of `openshiftcorp.com` (e.g. `clusters.openshiftcorp.com`), you must add delegation records to the
parent/root domain. This may require a request to your company's IT department or the division which controls the root
domain and DNS services for your company.
### Example: BIND
Delegation records in the root domain for `openshiftcorp.com` to AWS Route53 for the subdomain of
`clusters.openshiftcorp.com` would take the following form:
```
$ORIGIN clusters.openshiftcorp.com.
IN NS ns-124.awsdns-15.com.
IN NS ns-1062.awsdns-04.org.
IN NS ns-1603.awsdns-08.co.uk.
IN NS ns-972.awsdns-57.net.
```
### Example: Route53
Following our previous example, if AWS Route53 is used for the registrar, the root domain, and the subdomain, the root
domain (`openshiftcorp.com`) hosted zone would look like the following:
![Subdomain delegation for hosted zone in Route53](images/route53_hosted_zone_delegation.png)
The root domain's hosted zone contains the authoritative information for the root domain and also identifies a
separate set of nameservers for the subdomain (the nameservers of a separate hosted zone in Route53).
The hosted zone of the subdomain (`clusters.openshiftcorp.com`) would show:
![Subdomain hosted zone in Route53](images/route53_hosted_zone_subdomain.png)
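When the parent zone also lives in Route53 (possibly in another account), the delegation records can be created with
`change-resource-record-sets` against the parent zone's ID (a placeholder below), using the subdomain zone's four
nameservers:

```console
[~]$ aws route53 change-resource-record-sets \
      --hosted-zone-id Z3XXXXXXXXXXXX \
      --change-batch '{
        "Changes": [{
          "Action": "CREATE",
          "ResourceRecordSet": {
            "Name": "clusters.openshiftcorp.com",
            "Type": "NS",
            "TTL": 300,
            "ResourceRecords": [
              {"Value": "ns-124.awsdns-15.com."},
              {"Value": "ns-1062.awsdns-04.org."},
              {"Value": "ns-1603.awsdns-08.co.uk."},
              {"Value": "ns-972.awsdns-57.net."}
            ]
          }
        }]
      }'
```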
[create-hosted-zone]: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html
[get-hosted-zone-info]: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/GetInfoAboutHostedZone.html
[set-glue-records]: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.html#domain-name-servers-glue-records-procedure
[migrate-dns]: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html