mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-06 06:46:26 +01:00

OSDOCS-7728: Migrate AWS LOAD BALANCER OPERATOR ON ROSA content from MOBB to ROSA docs

This commit is contained in:
bmcelvee
2023-09-12 15:52:45 -04:00
parent 50962dc7cc
commit f5d363e6d5
3 changed files with 711 additions and 2 deletions


@@ -84,14 +84,16 @@ Distros: openshift-rosa
Topics:
#- Name: ROSA prerequisites
# File: rosa-mobb-prerequisites-tutorial
- Name: Configuring ROSA/OSD to use custom TLS ciphers on the ingress controllers
File: cloud-experts-configure-custom-tls-ciphers
- Name: Verifying Permissions for a ROSA STS Deployment
File: rosa-mobb-verify-permissions-sts-deployment
- Name: Configuring the Cluster Log Forwarder for Cloudwatch logs and STS
File: rosa-mobb-cloudwatch-sts
- Name: Deploying OpenShift API for Data Protection on a ROSA cluster
File: cloud-experts-deploy-api-data-protection
- Name: AWS Load Balancer Operator on ROSA
File: cloud-experts-aws-load-balancer-operator
- Name: Configuring ROSA/OSD to use custom TLS ciphers on the ingress controllers
File: cloud-experts-configure-custom-tls-ciphers
---
Name: Getting started
Dir: rosa_getting_started


@@ -0,0 +1,436 @@
:_content-type: ASSEMBLY
[id="cloud-experts-aws-load-balancer-operator"]
= Tutorial: AWS Load Balancer Operator on ROSA
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-aws-load-balancer-operator
toc::[]
//Mobb content metadata
//Brought into ROSA product docs 2023-09-12
//---
//date: '2023-01-03T22:07:08.574151'
//title: AWS Load Balancer Operator On ROSA
//aliases: ['/docs/rosa/alb-sts']
//tags: ["AWS", "ROSA"]
//authors:
// - Shaozhen Ding
// - Paul Czarkowski
//---
include::snippets/mobb-support-statement.adoc[leveloffset=+1]
[TIP]
====
Load balancers created by the AWS Load Balancer (ALB) Operator cannot be used for xref:../networking/routes/route-configuration.adoc#route-configuration[{product-title} Routes], and should be used only for individual services or Ingress resources that do not need the full layer 7 capabilities of a ROSA route.
====
link:https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/[AWS Load Balancer (ALB) Controller] is a Kubernetes controller that manages Elastic Load Balancing v2 (ELBv2) resources for a Kubernetes cluster.

* It satisfies Kubernetes link:https://kubernetes.io/docs/concepts/services-networking/ingress/[Ingress and Service resources] by provisioning link:https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html[Application Load Balancers (ALB)] and link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html[Network Load Balancers (NLB)].

Compared with the default AWS in-tree provider, this controller is actively developed and supports advanced annotations for both ALBs and NLBs. Some advanced use cases are:

* Using native Kubernetes Ingress objects with an ALB
* Integrating an ALB with a web application firewall (WAF)
* Specifying NLB source IP ranges
* Specifying an NLB internal IP address
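One of the use cases above, WAF integration, is configured through an annotation on the Ingress object. A minimal sketch, using the `alb.ingress.kubernetes.io/wafv2-acl-arn` annotation from the upstream controller documentation; the web ACL ARN and names here are placeholders, not real resources:

```yaml
# Hypothetical example: attach an AWS WAFv2 web ACL to an ALB-backed Ingress.
# The ACL ARN below is a placeholder value.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-east-2:111122223333:regional/webacl/example/00000000-0000-0000-0000-000000000000
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80
```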
The link:https://github.com/openshift/aws-load-balancer-operator[AWS Load Balancer Operator] is used to install, manage, and configure an instance of `aws-load-balancer-controller` in an OpenShift cluster.
.Prerequisites
[NOTE]
====
ALBs require a multi-AZ cluster and three public subnets split across three AZs in the same VPC as the cluster, which makes ALBs unsuitable for most PrivateLink clusters.
====
* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[A multi-AZ ROSA classic cluster]
* BYO VPC cluster
* AWS CLI
* OC CLI
.Environment
* Prepare the environment variables:
+
[source,terminal]
----
$ export AWS_PAGER=""
$ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${ROSA_CLUSTER_NAME}/alb-operator"
$ mkdir -p ${SCRATCH}
$ echo "Cluster: ${ROSA_CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
----
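The two `sed` expressions above only reshape strings locally: the first strips the random five-character suffix that ROSA appends to the infrastructure name, and the second strips the `https://` scheme from the OIDC issuer URL. A quick offline illustration, using hypothetical sample values:

```shell
# Sample values only; the real values come from the `oc get` commands above.
INFRA_NAME="my-cluster-x7k2p"
ISSUER="https://rh-oidc.s3.us-east-1.amazonaws.com/abcdef"

# Strip the random five-character suffix from the infrastructure name.
echo "${INFRA_NAME}" | sed 's/-[a-z0-9]\{5\}$//'   # my-cluster

# Strip the https:// scheme from the OIDC issuer URL.
echo "${ISSUER}" | sed 's|^https://||'             # rh-oidc.s3.us-east-1.amazonaws.com/abcdef
```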
== AWS VPC and subnets
[NOTE]
====
This section applies only to BYO VPC clusters. If you let ROSA create your VPC, or you already installed xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[a multi-AZ ROSA Classic cluster], skip to the following Installation section.
====
. Set variables that describe your VPC and subnets:
+
[source,terminal]
----
$ export VPC_ID=<vpc-id>
$ export PUBLIC_SUBNET_IDS=<public-subnets>
$ export PRIVATE_SUBNET_IDS=<private-subnets>
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
----
+
. Tag VPC with the cluster name:
+
[source,terminal]
----
$ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
----
+
. Add tags to Public Subnets:
+
[source,terminal]
----
$ aws ec2 create-tags \
--resources ${PUBLIC_SUBNET_IDS} \
--tags Key=kubernetes.io/role/elb,Value='' \
--region ${REGION}
----
+
. Add tags to Private Subnets:
+
[source,terminal]
----
$ aws ec2 create-tags \
--resources "${PRIVATE_SUBNET_IDS}" \
--tags Key=kubernetes.io/role/internal-elb,Value='' \
--region ${REGION}
----
== Installation
. Create Policy for the ALB Controller:
+
[NOTE]
====
The policy is taken from the link:https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json[ALB controller policy], plus permission to create tags on subnets, which the Operator requires.
====
+
[source,terminal]
----
$ oc new-project aws-load-balancer-operator
$ POLICY_ARN=$(aws iam list-policies --query \
"Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
--output text)
$ if [[ -z "${POLICY_ARN}" ]]; then
wget -O "${SCRATCH}/load-balancer-operator-policy.json" \
https://raw.githubusercontent.com/rh-mobb/documentation/main/content/docs/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json
POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \
--output text iam create-policy \
--policy-name aws-load-balancer-operator-policy \
--policy-document "file://${SCRATCH}/load-balancer-operator-policy.json")
fi
$ echo $POLICY_ARN
----
+
. Create trust policy for ALB Operator:
+
[source,terminal]
----
$ cat <<EOF > "${SCRATCH}/trust-policy.json"
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Condition": {
"StringEquals" : {
"${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
}
},
"Principal": {
"Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}"
},
"Action": "sts:AssumeRoleWithWebIdentity"
}
]
}
EOF
----
+
. Create Role for ALB Operator:
+
[source,terminal]
----
$ ROLE_ARN=$(aws iam create-role --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
--assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
--query Role.Arn --output text)
$ echo $ROLE_ARN
$ aws iam attach-role-policy --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
--policy-arn $POLICY_ARN
----
+
. Create secret for ALB Operator:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
name: aws-load-balancer-operator
namespace: aws-load-balancer-operator
stringData:
credentials: |
[default]
role_arn = $ROLE_ARN
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
----
+
. Install Red Hat ALB Operator:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: aws-load-balancer-operator
namespace: aws-load-balancer-operator
spec:
upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: aws-load-balancer-operator
namespace: aws-load-balancer-operator
spec:
channel: stable-v1.0
installPlanApproval: Automatic
name: aws-load-balancer-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
startingCSV: aws-load-balancer-operator.v1.0.0
EOF
----
+
. Install Red Hat ALB Controller:
+
[NOTE]
====
If you get an error here, the Operator has not finished installing yet. Wait a minute and try again.
====
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
name: cluster
spec:
credentials:
name: aws-load-balancer-operator
EOF
----
+
. Check that the Operator and controller pods are both running:
+
[source,terminal]
----
$ oc -n aws-load-balancer-operator get pods
----
+
You should see the following output. If you do not, wait a moment and retry:
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s
----
== Validate the deployment with a hello world application
. Create a new project:
+
[source,terminal]
----
$ oc new-project hello-world
----
+
. Deploy a hello world application:
+
[source,terminal]
----
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
----
+
. Configure a NodePort service for the ALB to connect to:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
name: hello-openshift-nodeport
namespace: hello-world
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
type: NodePort
selector:
deployment: hello-openshift
EOF
----
+
. Deploy an ALB using the Operator:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-openshift-alb
namespace: hello-world
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
pathType: Exact
backend:
service:
name: hello-openshift-nodeport
port:
number: 80
EOF
----
+
. Curl the ALB Ingress endpoint to verify the hello world application is accessible:
+
[NOTE]
====
ALB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host:`, please wait and try again.
====
+
[source,terminal]
----
$ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${INGRESS}"
----
+
.Example output
[source,text]
----
Hello OpenShift!
----
. Next, deploy an NLB for your hello world application:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
name: hello-openshift-nlb
namespace: hello-world
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
type: LoadBalancer
selector:
deployment: hello-openshift
EOF
----
+
. Test the NLB endpoint:
+
[NOTE]
====
NLB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host:`, please wait and try again.
====
+
[source,terminal]
----
$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"
----
+
.Example output
[source,text]
----
Hello OpenShift!
----
== Clean up
. Delete the hello world application namespace (and all the resources in the namespace):
+
[source,terminal]
----
$ oc delete ns hello-world
----
+
. Delete the Operator and the AWS roles:
+
[source,terminal]
----
$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
$ aws iam detach-role-policy \
--role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
--policy-arn $POLICY_ARN
$ aws iam delete-role \
--role-name "${ROSA_CLUSTER_NAME}-alb-operator"
----
+
. You can delete the policy:
+
[source,terminal]
----
$ aws iam delete-policy --policy-arn $POLICY_ARN
----


@@ -0,0 +1,271 @@
:_content-type: ASSEMBLY
[id="rosa-mobb-cli-quickstart"]
= Create a {product-title} cluster using the CLI
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-mobb-cli-quickstart
toc::[]
//Mobb content metadata
//Brought into ROSA product docs 2023-09-12
//---
//date: '2021-06-10'
//title: ROSA Quickstart
//weight: 1
//aliases: [/docs/quickstart-rosa.md]
//Tags: ["AWS", "ROSA", "Quickstarts"]
//authors:
// - Steve Mirman
// - Paul Czarkowski
//---
A Quickstart guide to deploying a Red Hat OpenShift cluster on AWS using the CLI.
== Video Walkthrough
////
Introduction to ROSA by Charlotte Fung on [AWS YouTube channel](https://youtu.be/KRqXxek4GvQ)
<iframe width="560" height="315" src="https://www.youtube.com/embed/KRqXxek4GvQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
If you prefer a more visual medium, you can watch [Steve Mirman](https://twitter.com/stevemirman) walk through this quickstart on [YouTube](https://www.youtube.com/watch?v=IFNig_Z_p2Y).
<iframe width="560" height="315" src="https://www.youtube.com/embed/IFNig_Z_p2Y" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
////
== Prerequisites
=== AWS
You must have an AWS account with the link:https://console.aws.amazon.com/rosa/home?#/get-started[AWS ROSA Prerequisites] met.
image::rosa-aws-pre.png[AWS console ROSA prerequisites]
**MacOS**
//See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html) for alternative install options.
* Install the AWS CLI by using the macOS command line:
+
[source,terminal]
----
$ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
----
**Linux**
// See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html) for alternative install options.
* Install the AWS CLI by using the Linux command line:
+
[source,terminal]
----
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
----
**Windows**
// See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html) for alternative install options.
* Install the AWS CLI by using the Windows command line:
+
[source,terminal]
----
C:\> msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
----
////
**Docker**
> See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-docker.html) for alternative install options.
1. To run the AWS CLI version 2 Docker image, use the docker run command.
```bash
docker run --rm -it amazon/aws-cli command
```
////
=== Prepare AWS Account for OpenShift
. Configure the AWS CLI by running the following command:
+
[source,terminal]
----
$ aws configure
----
+
. Enter an `AWS Access Key ID` and an `AWS Secret Access Key`, along with a default region name and output format:
+
[source,terminal]
----
$ aws configure
AWS Access Key ID []:
AWS Secret Access Key []:
Default region name [us-east-2]:
Default output format [json]:
----
+
You can obtain the `AWS Access Key ID` and `AWS Secret Access Key` values by logging in to the AWS console and creating an *Access Key* in the *Security Credentials* section of the IAM dashboard for your user.
+
. Validate your credentials:
+
[source,terminal]
----
$ aws sts get-caller-identity
----
+
You should receive output similar to the following:
+
[source,text]
----
{
    "UserId": <your ID>,
    "Account": <your account>,
    "Arn": <your arn>
}
----
+
. If this is a brand new AWS account that has never had an AWS load balancer installed in it, run the following command:
+
[source,terminal]
----
$ aws iam create-service-linked-role --aws-service-name \
    "elasticloadbalancing.amazonaws.com"
----
=== Get a Red Hat Offline Access Token
. Log in to link:https://cloud.redhat.com[cloud.redhat.com].
. Browse to https://cloud.redhat.com/openshift/token/rosa.
. Copy the *Offline Access Token* and save it for the next step.
=== Set up the OpenShift CLI (oc)
. Download the OS-specific OpenShift CLI from link:https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/[Red Hat].
. Unzip the downloaded file on your local machine.
. Place the extracted `oc` executable in your OS path or local directory.
=== Set up the ROSA CLI
. Download the OS-specific ROSA CLI from link:https://www.openshift.com/products/amazon-openshift/download[Red Hat].
. Unzip the downloaded file on your local machine.
. Place the extracted `rosa` and `kubectl` executables in your OS path or local directory.
. Log in to ROSA:
+
[source,terminal]
----
$ rosa login
----
+
You will be prompted to enter the *Red Hat Offline Access Token* you retrieved earlier, and should receive the following message:
+
[source,text]
----
Logged in as <email address> on 'https://api.openshift.com'
----
=== Verify ROSA privileges
Verify that ROSA has the minimal permissions:
[source,terminal]
----
$ rosa verify permissions
----
Expected output: `AWS SCP policies ok`
Verify that ROSA has the minimal quota:
[source,terminal]
----
$ rosa verify quota
----
Expected output: `AWS quota ok`
=== Initialize ROSA
Initialize the ROSA CLI to complete the remaining validation checks and configurations:
[source,terminal]
----
$ rosa init
----
== Deploy Red Hat OpenShift on AWS (ROSA)
=== Interactive Installation
ROSA can be installed by using command-line parameters or in interactive mode. For an interactive installation, run the following command:
[source,terminal]
----
$ rosa create cluster --interactive --mode auto
----
As part of the interactive installation, you are required to enter the following parameters or accept the default values (if applicable):
[source,text]
----
Cluster name:
Multiple availability zones (y/N):
AWS region (select):
OpenShift version (select):
Install into an existing VPC (y/N):
Compute nodes instance type (optional):
Enable autoscaling (y/N):
Compute nodes [2]:
Machine CIDR [10.0.0.0/16]:
Service CIDR [172.30.0.0/16]:
Pod CIDR [10.128.0.0/14]:
Host prefix [23]:
Private cluster (y/N):
----
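The `Host prefix [23]` default means each node is allocated a /23 slice of the pod CIDR. A quick sanity check of the address math, not part of the installation itself:

```shell
# Each node gets a /23 slice of the pod CIDR; subtract the network and
# broadcast addresses to get the usable pod IPs per node.
HOST_PREFIX=23
PODS_PER_NODE=$(( (1 << (32 - HOST_PREFIX)) - 2 ))
echo "${PODS_PER_NODE}"   # 510
```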
[NOTE]
====
The installation process takes between 30 and 45 minutes.
====
=== Get the web console link to the ROSA cluster
To get the web console link, run the following command, substituting your actual cluster name for `<cluster-name>`:
[source,terminal]
----
$ rosa describe cluster --cluster=<cluster-name>
----
=== Create a cluster-admin user
By default, only the OpenShift SRE team has access to the ROSA cluster. To add a local admin user, run the following command to create the `cluster-admin` account in your cluster, substituting your actual cluster name for `<cluster-name>`:
[source,terminal]
----
$ rosa create admin --cluster=<cluster-name>
----
Refresh your web browser, and you should see the `cluster-admin` option to log in.
== Delete Red Hat OpenShift on AWS (ROSA)
Deleting a ROSA cluster consists of two parts:
. Delete the cluster instance, including the removal of AWS resources, substituting your actual cluster name for `<cluster-name>`:
+
[source,terminal]
----
$ rosa delete cluster --cluster=<cluster-name>
----
+
Delete the cluster's operator roles and OIDC provider, as shown in the output of the `delete cluster` command. For example:
+
[source,terminal]
----
$ rosa delete operator-roles -c <cluster-name>
$ rosa delete oidc-provider -c <cluster-name>
----
+
. Delete the CloudFormation stack, including the removal of the `osdCcsAdmin` user:
+
[source,terminal]
----
$ rosa init --delete-stack
----