mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-13051: Porting the ALBO book from OCP to ROSA and ROSA classic

This commit is contained in:
EricPonvelle
2025-02-07 10:26:58 -05:00
committed by openshift-cherrypick-robot
parent 910583441e
commit 4f2617b2ae
5 changed files with 447 additions and 24 deletions


@@ -62,7 +62,6 @@ using the terminal. Unlike the web console, it allows the user to work directly
* xref:../cli_reference/osdk/cli-osdk-install.adoc#cli-osdk-install[Operator SDK]: The Operator SDK, a component of the Operator Framework, provides a CLI tool that Operator developers can use to build, test, and deploy an Operator from the terminal. It simplifies the process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge.
ifdef::openshift-rosa,openshift-rosa-hcp[]
* xref:../cli_reference/rosa_cli/rosa-get-started-cli.adoc#rosa-get-started-cli[ROSA CLI (`rosa`)]: Use the `rosa` CLI to create, update, manage, and delete ROSA clusters and resources.
endif::openshift-rosa,openshift-rosa-hcp[]


@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
:_mod-docs-content-type: PROCEDURE
[id="aws-load-balancer-operator-deleting_{context}"]
= Deleting the example AWS Load Balancer Operator installation

.Procedure

. Delete the hello world application namespace, including all of the resources in the namespace:
+
[source,terminal]
----
$ oc delete project hello-world
----
+
. Delete the AWS Load Balancer Operator and the AWS IAM roles:
+
[source,terminal]
----
$ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operator
$ aws iam detach-role-policy \
--role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
--policy-arn $POLICY_ARN
$ aws iam delete-role \
--role-name "${ROSA_CLUSTER_NAME}-alb-operator"
----
+
. Delete the AWS IAM policy:
+
[source,terminal]
----
$ aws iam delete-policy --policy-arn $POLICY_ARN
----
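The teardown order in the steps above matters: AWS IAM refuses to delete a managed policy while it is still attached to a role, which is why the detach commands run first. The following stub is a minimal sketch of that constraint, with plain shell standing in for the `aws iam` calls (no AWS resources are touched):

```shell
# Models the IAM constraint: a policy cannot be deleted while attached.
attached=1
delete_policy() {
  if [ "${attached}" -eq 1 ]; then
    echo "DeleteConflict: policy is still attached"
    return 1
  fi
  echo "policy deleted"
}
first=$(delete_policy) || true   # fails: still attached to the role
attached=0                       # models: aws iam detach-role-policy ...
second=$(delete_policy)          # succeeds after detaching
echo "${first}"
echo "${second}"
```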


@@ -0,0 +1,288 @@
// Module included in the following assemblies:
//
:_mod-docs-content-type: PROCEDURE
[id="aws-load-balancer-operator-installation_{context}"]
= Installing the AWS Load Balancer Operator
After setting up your environment with your cluster, you can install the AWS Load Balancer Operator using the CLI.
.Procedure
. Create a new project within your cluster for the AWS Load Balancer Operator:
+
[source,terminal]
----
$ oc new-project aws-load-balancer-operator
----
. Create an AWS IAM policy for the AWS Load Balancer Controller:
+
[NOTE]
====
You can find the AWS IAM policy in link:https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json[the upstream AWS Load Balancer Controller policy]. This policy includes all of the permissions that the Operator needs to function.
====
+
[source,terminal]
----
$ POLICY_ARN=$(aws iam list-policies --query \
"Policies[?PolicyName=='aws-load-balancer-operator-policy'].{ARN:Arn}" \
--output text)
$ if [[ -z "${POLICY_ARN}" ]]; then
wget -O "${SCRATCH}/load-balancer-operator-policy.json" \
https://raw.githubusercontent.com/rh-mobb/documentation/main/content/rosa/aws-load-balancer-operator/load-balancer-operator-policy.json
POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn \
--output text iam create-policy \
--policy-name aws-load-balancer-operator-policy \
--policy-document "file://${SCRATCH}/load-balancer-operator-policy.json")
fi
$ echo $POLICY_ARN
----
+
. Create an AWS IAM trust policy for the AWS Load Balancer Operator:
+
[source,terminal]
----
$ cat <<EOF > "${SCRATCH}/trust-policy.json"
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Condition": {
"StringEquals" : {
"${OIDC_ENDPOINT}:sub": ["system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager", "system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-cluster"]
}
},
"Principal": {
"Federated": "arn:aws:iam::$AWS_ACCOUNT_ID:oidc-provider/${OIDC_ENDPOINT}"
},
"Action": "sts:AssumeRoleWithWebIdentity"
}
]
}
EOF
----
+
. Create an AWS IAM role for the AWS Load Balancer Operator:
+
[source,terminal]
----
$ ROLE_ARN=$(aws iam create-role --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
--assume-role-policy-document "file://${SCRATCH}/trust-policy.json" \
--query Role.Arn --output text)
$ echo $ROLE_ARN
$ aws iam attach-role-policy --role-name "${ROSA_CLUSTER_NAME}-alb-operator" \
--policy-arn $POLICY_ARN
----
+
. Create a secret for the AWS Load Balancer Operator to assume the newly created AWS IAM role:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
name: aws-load-balancer-operator
namespace: aws-load-balancer-operator
stringData:
credentials: |
[default]
role_arn = $ROLE_ARN
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
----
+
. Install the AWS Load Balancer Operator:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: aws-load-balancer-operator
namespace: aws-load-balancer-operator
spec:
upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: aws-load-balancer-operator
namespace: aws-load-balancer-operator
spec:
channel: stable-v1.0
installPlanApproval: Automatic
name: aws-load-balancer-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
startingCSV: aws-load-balancer-operator.v1.0.0
EOF
----
+
. Deploy an instance of the AWS Load Balancer Controller using the Operator:
+
[NOTE]
====
If you get an error here, the Operator has not finished installing yet. Wait a minute and try again.
====
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: networking.olm.openshift.io/v1
kind: AWSLoadBalancerController
metadata:
name: cluster
spec:
credentials:
name: aws-load-balancer-operator
EOF
----
+
. Check that the Operator and controller pods are both running:
+
[source,terminal]
----
$ oc -n aws-load-balancer-operator get pods
----
+
You should see output like the following. If you do not, wait a moment and retry:
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s
----
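The readiness check above can also be scripted. The following sketch parses sample `oc get pods` output (the pod names and ages are illustrative, copied from the expected output above); with a live cluster, the `pods` variable would instead come from `oc -n aws-load-balancer-operator get pods --no-headers`:

```shell
# Sample output standing in for a live `oc get pods --no-headers` query.
pods='aws-load-balancer-controller-cluster-6ddf658785-pdp5d 1/1 Running 0 99s
aws-load-balancer-operator-controller-manager-577d9ffcb9-w6zqn 2/2 Running 0 2m4s'

# Count pods whose STATUS column (field 3) is Running.
running=$(printf '%s\n' "${pods}" | awk '$3 == "Running"' | wc -l | tr -d ' ')

if [ "${running}" -eq 2 ]; then
  echo "both pods are Running"
else
  echo "still waiting (${running}/2 Running)"
fi
```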
[id="aws-load-balancer-operator-validating-the-deployment_{context}"]
== Validating the deployment
. Create a new project:
+
[source,terminal]
----
$ oc new-project hello-world
----
+
. Deploy a hello world application:
+
[source,terminal]
----
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
----
+
. Configure a NodePort service for the AWS ALB to connect to:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
name: hello-openshift-nodeport
namespace: hello-world
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
type: NodePort
selector:
deployment: hello-openshift
EOF
----
+
. Deploy an AWS ALB using the AWS Load Balancer Operator:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-openshift-alb
namespace: hello-world
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
pathType: Exact
backend:
service:
name: hello-openshift-nodeport
port:
number: 80
EOF
----
+
. Curl the AWS ALB Ingress endpoint to verify the hello world application is accessible:
+
[NOTE]
====
AWS ALB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host`, please wait and try again.
====
+
[source,terminal]
----
$ INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${INGRESS}"
----
+
.Example output
[source,text]
----
Hello OpenShift!
----
. Deploy an AWS NLB for your hello world application:
+
[source,terminal]
----
$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
name: hello-openshift-nlb
namespace: hello-world
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
type: LoadBalancer
selector:
deployment: hello-openshift
EOF
----
+
. Test the AWS NLB endpoint:
+
[NOTE]
====
NLB provisioning takes a few minutes. If you receive an error that says `curl: (6) Could not resolve host`, please wait and try again.
====
+
[source,terminal]
----
$ NLB=$(oc -n hello-world get service hello-openshift-nlb \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl "http://${NLB}"
----
+
.Example output
[source,text]
----
Hello OpenShift!
----
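Both verification steps above can hit the same transient failure: the new load balancer hostname has not yet propagated through DNS, producing `curl: (6) Could not resolve host`. A minimal sketch of a wait loop follows; a stub that starts succeeding on the third attempt stands in for `curl --fail "http://${INGRESS}"` (or `"http://${NLB}"`):

```shell
# Stub that models DNS propagation: resolution succeeds on attempt 3.
attempt=0
resolves() {
  attempt=$((attempt + 1))
  [ "${attempt}" -ge 3 ]
}

# In practice, replace `resolves` with the real curl check and use a
# longer delay, for example: sleep 30
until resolves; do
  sleep 0
done
echo "hostname resolved after ${attempt} attempts"
```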


@@ -0,0 +1,104 @@
// Module included in the following assemblies:
//
:_mod-docs-content-type: PROCEDURE
[id="aws-load-balancer-operator-prerequisites_{context}"]
= Setting up your environment to install the AWS Load Balancer Operator
The AWS Load Balancer Operator requires a cluster with multiple availability zones (AZs), as well as three public subnets split across three AZs in the same virtual private cloud (VPC) as the cluster.
[IMPORTANT]
====
Because of these requirements, the AWS Load Balancer Operator may be unsuitable for many PrivateLink clusters. AWS NLBs do not have this restriction.
====
Before installing the AWS Load Balancer Operator, you must have configured the following:
ifndef::openshift-rosa-hcp[]
* A ROSA (classic architecture) cluster with multiple availability zones
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
* A ROSA cluster with multiple availability zones
endif::openshift-rosa-hcp[]
* A BYO VPC cluster
* The AWS CLI (`aws`)
* The OpenShift CLI (`oc`)
[id="aws-load-balancer-operator-environment_{context}"]
== AWS Load Balancer Operator environment setup
Optional: You can set up temporary environment variables to streamline your installation commands.
[NOTE]
====
If you decide not to use environment variables, manually enter the values where prompted in the code snippets.
====
.Procedure
. After logging into your cluster as an admin user, run the following commands:
+
[source,terminal]
----
$ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}")
$ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
$ mkdir -p ${SCRATCH}
----
. You can verify that the variables are set by running the following command:
+
[source,terminal]
----
$ echo "Cluster name: ${CLUSTER_NAME}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"
----
+
.Example output
[source,terminal]
----
Cluster name: <cluster_id>, Region: us-east-2, OIDC Endpoint: oidc.op1.openshiftapps.com/<oidc_id>, AWS Account ID: <aws_id>
----
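Because every later command interpolates these variables, a quick guard that none of them is empty can save a confusing failure midway through the installation. This sketch uses placeholder values standing in for the `oc` and `aws` query results above:

```shell
# Placeholder values; in practice these come from the exports above.
CLUSTER_NAME="example-cluster"
REGION="us-east-2"
OIDC_ENDPOINT="oidc.op1.openshiftapps.com/example"
AWS_ACCOUNT_ID="123456789012"

# Flag any required variable that ended up empty.
missing=0
for pair in "CLUSTER_NAME=${CLUSTER_NAME}" "REGION=${REGION}" \
            "OIDC_ENDPOINT=${OIDC_ENDPOINT}" "AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID}"; do
  value="${pair#*=}"
  if [ -z "${value}" ]; then
    echo "ERROR: ${pair%%=*} is empty" >&2
    missing=1
  fi
done
[ "${missing}" -eq 0 ] && echo "all required variables are set"
```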
[id="aws-vpc-subnets_{context}"]
== AWS VPC and subnets
Before you can install the AWS Load Balancer Operator, you must tag your AWS VPC resources.
.Procedure
. Set the environment variables to the proper values for your ROSA deployment:
+
[source,terminal]
----
$ export VPC_ID=<vpc-id>
$ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
$ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
----
. Add a tag to your cluster's VPC with the cluster name:
+
[source,terminal]
----
$ aws ec2 create-tags --resources ${VPC_ID} --tags Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned --region ${REGION}
----
. Add a tag to your public subnets:
+
[source,terminal]
----
$ aws ec2 create-tags \
--resources ${PUBLIC_SUBNET_IDS} \
--tags Key=kubernetes.io/role/elb,Value='' \
--region ${REGION}
----
. Add a tag to your private subnets:
+
[source,terminal]
----
$ aws ec2 create-tags \
--resources ${PRIVATE_SUBNET_IDS} \
--tags Key=kubernetes.io/role/internal-elb,Value='' \
--region ${REGION}
----
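Note that `${PUBLIC_SUBNET_IDS}` and `${PRIVATE_SUBNET_IDS}` are intentionally left unquoted in the `aws ec2 create-tags` commands so that the space-separated list expands into multiple `--resources` arguments. The following sketch, with example subnet IDs, shows that expansion without calling AWS:

```shell
# Example IDs; real values come from your VPC.
PUBLIC_SUBNET_IDS="subnet-aaa subnet-bbb subnet-ccc"

# Unquoted expansion splits the string into one loop item per subnet,
# mirroring how the aws CLI receives one --resources argument per ID.
tagged=0
for subnet in ${PUBLIC_SUBNET_IDS}; do
  echo "would tag ${subnet} with kubernetes.io/role/elb"
  tagged=$((tagged + 1))
done
echo "${tagged} public subnets would be tagged"
```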


@@ -1,38 +1,39 @@
:_mod-docs-content-type: ASSEMBLY
[id="aws-load-balancer"]
= AWS Load Balancer Operator
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: aws-load-balancer-operator
toc::[]
The AWS Load Balancer Operator is an Operator supported by Red{nbsp}Hat that users can optionally install on SRE-managed {product-title} (ROSA) clusters. The AWS Load Balancer Operator manages the lifecycle of the AWS Load Balancer Controller that provisions AWS Elastic Load Balancing v2 (ELBv2) services for applications running in ROSA clusters.

[TIP]
====
Load Balancers created by the AWS Load Balancer Operator cannot be used for xref:../../networking/routes/route-configuration.adoc#route-configuration[OpenShift Routes], and should only be used for individual services or ingress resources that do not need the full layer 7 capabilities of an OpenShift Route.
====

The link:https://kubernetes-sigs.github.io/aws-load-balancer-controller/[AWS Load Balancer Controller] manages AWS Elastic Load Balancers for a {product-title} (ROSA) cluster. The controller provisions link:https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html[AWS Application Load Balancers (ALB)] when you create Kubernetes Ingress resources and link:https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html[AWS Network Load Balancers (NLB)] when implementing Kubernetes Service resources with a type of LoadBalancer.

Compared with the default AWS in-tree load balancer provider, this controller is developed with advanced annotations for both ALBs and NLBs. Some advanced use cases are:

* Using native Kubernetes Ingress objects with ALBs
* Integrating ALBs with the AWS Web Application Firewall (WAF) service
* Specifying custom NLB source IP ranges
* Specifying custom NLB internal IP addresses

The link:https://github.com/openshift/aws-load-balancer-operator[AWS Load Balancer Operator] is used to install, manage, and configure an instance of `aws-load-balancer-controller` in a ROSA cluster.

include::modules/albo-prerequisites.adoc[leveloffset=+1]
include::modules/albo-installation.adoc[leveloffset=+1]
include::modules/albo-deleting.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
* For more information about assigning trust policies to AWS IAM roles, see link:https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/[How to use trust policies with IAM roles].
* For more information about creating AWS IAM roles, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html[Creating IAM roles].
* For more information on adding AWS IAM permissions to AWS IAM roles, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html[Adding and removing IAM identity permissions].
* For more information about formatting credentials files, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/authentication_and_authorization/managing-cloud-provider-credentials#cco-mode-sts[Using manual mode with Amazon Web Services Security Token Service].
ifndef::openshift-rosa-hcp[]
* To set up a ROSA (classic architecture) cluster with multiple availability zones, see xref:../../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a ROSA cluster with STS using the default options].
endif::openshift-rosa-hcp[]
ifdef::openshift-rosa-hcp[]
* To set up a ROSA cluster with multiple availability zones, see link:https://docs.redhat.com/rosa-hcp/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.html[A multi-AZ ROSA cluster].
endif::openshift-rosa-hcp[]
* For more information about AWS Load Balancer Controller configurations, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/networking/aws-load-balancer-operator-1#nw-multiple-ingress-through-single-alb[Creating multiple ingresses] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/networking/aws-load-balancer-operator-1#nw-adding-tls-termination_adding-tls-termination[Adding TLS termination].
* For more information on adding tags to AWS resources, including VPCs and subnets, see link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html[Tag your Amazon EC2 resources].
* For detailed instructions on verifying that the ELBv2 was created for the application running in the ROSA cluster, see link:https://docs.openshift.com/container-platform/4.13/networking/aws_load_balancer_operator/create-instance-aws-load-balancer-controller.html[Creating an instance of AWS Load Balancer Controller].