
OSDOCS-15935 - Removing unused ROSA assemblies

Commit 7db35630fb (parent 84e4f29375)
Author: Olga Tikhomirova
Date: 2025-10-03 15:47:14 -07:00
Committed by: openshift-cherrypick-robot
51 changed files with 9 additions and 1616 deletions

View File

@@ -73,10 +73,6 @@ Topics:
File: rosa-policy-process-security
- Name: SRE and service account access
File: rosa-sre-access
# Created a new assembly in ROSA/OSD. In OCP, the assembly is in a book that is not in ROSA/OSD
# - Name: About admission plugins
# File: rosa-admission-plug-ins
# Distros: openshift-rosa
- Name: About IAM resources for STS clusters
File: rosa-sts-about-iam-resources
- Name: OpenID Connect Overview
@@ -109,8 +105,6 @@ Distros: openshift-rosa
Topics:
- Name: Tutorials overview
File: index
#- Name: ROSA classic architecture prerequisites
# File: rosa-mobb-prerequisites-tutorial
- Name: Verifying Permissions for a ROSA classic architecture STS Deployment
File: rosa-mobb-verify-permissions-sts-deployment
- Name: Deploying ROSA classic architecture with a Custom DNS Resolver
@@ -289,10 +283,6 @@ Topics:
File: rosa-private-cluster
# - Name: Creating a ROSA cluster using the web console
# File: rosa-creating-cluster-console
# - Name: Accessing a ROSA cluster
# File: rosa-accessing-cluster
# - Name: Configuring identity providers using the Red Hat OpenShift Cluster Manager
# File: rosa-config-identity-providers
- Name: Deleting access to a ROSA cluster
File: rosa-deleting-access-cluster
- Name: Deleting a ROSA cluster
@@ -1973,7 +1963,3 @@ Topics:
# Topics:
# - Name: Collecting OKD Virtualization data for community report
# File: virt-collecting-virt-data
# - Name: Preparing to upgrade ROSA to 4.9
# File: rosa-upgrading-cluster-prepare
# - Name: Upgrading ROSA Classic clusters
# File: rosa-upgrading

View File

@@ -1805,7 +1805,3 @@ Topics:
# Topics:
# - Name: Collecting OKD Virtualization data for community report
# File: virt-collecting-virt-data
# - Name: Preparing to upgrade ROSA to 4.9
# File: rosa-upgrading-cluster-prepare
# - Name: Upgrading ROSA Classic clusters
# File: rosa-upgrading

View File

@@ -1,257 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-hcp-guide"]
= Workshop: Creating a cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
include::_attributes/common-attributes.adoc[]
:context: cloud-experts-getting-started-hcp
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2023-11-21
//Updated for HCP 2024-07-01
Follow this workshop to deploy a sample {product-title} (ROSA) cluster. You can then use your cluster in the next workshops.
.Workshop objectives
* Learn to create your cluster prerequisites:
** Create a sample virtual private cloud (VPC)
** Create sample OpenID Connect (OIDC) resources
* Create sample environment variables
* Deploy a sample ROSA cluster
.Prerequisites
* ROSA version 1.2.31 or later
* Amazon Web Service (AWS) command-line interface (CLI)
* ROSA CLI (`rosa`)
== Creating your cluster prerequisites
Before deploying a ROSA cluster, you must have both a VPC and OIDC resources. We will create these resources first. ROSA uses the bring your own VPC (BYO-VPC) model.
=== Creating a VPC
. Make sure your AWS CLI (`aws`) is configured to use a region where ROSA is available. List the supported regions by running the following command:
+
[source,terminal]
----
$ rosa list regions --hosted-cp
----
. Create the VPC. For this workshop, the following link:https://github.com/openshift-cs/rosaworkshop/blob/master/rosa-workshop/rosa/resources/setup-vpc.sh[script] creates the VPC and its required components. It uses the region configured in your `aws` CLI.
+
[source,bash]
----
#!/bin/bash
set -e
##########
# This script will create the network requirements for a ROSA cluster. This will be
# a public cluster. This creates:
# - VPC
# - Public and private subnets
# - Internet Gateway
# - Relevant route tables
# - NAT Gateway
#
# This will automatically use the region configured for the aws cli
#
##########
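# NOTE: This script tags resources using $CLUSTER_NAME. Export CLUSTER_NAME
# before running the script, otherwise the Name tags are created with empty values.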
VPC_CIDR=10.0.0.0/16
PUBLIC_CIDR_SUBNET=10.0.1.0/24
PRIVATE_CIDR_SUBNET=10.0.0.0/24
# Create VPC
echo -n "Creating VPC..."
VPC_ID=$(aws ec2 create-vpc --cidr-block $VPC_CIDR --query Vpc.VpcId --output text)
# Create tag name
aws ec2 create-tags --resources $VPC_ID --tags Key=Name,Value=$CLUSTER_NAME
# Enable dns hostname
aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-hostnames
echo "done."
# Create Public Subnet
echo -n "Creating public subnet..."
PUBLIC_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text)
aws ec2 create-tags --resources $PUBLIC_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-public
echo "done."
# Create private subnet
echo -n "Creating private subnet..."
PRIVATE_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text)
aws ec2 create-tags --resources $PRIVATE_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-private
echo "done."
# Create an internet gateway for outbound traffic and attach it to the VPC.
echo -n "Creating internet gateway..."
IGW_ID=$(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text)
echo "done."
aws ec2 create-tags --resources $IGW_ID --tags Key=Name,Value=$CLUSTER_NAME
aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID > /dev/null 2>&1
echo "Attached IGW to VPC."
# Create a route table for outbound traffic and associate it to the public subnet.
echo -n "Creating route table for public subnet..."
PUBLIC_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
aws ec2 create-tags --resources $PUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=$CLUSTER_NAME
echo "done."
aws ec2 create-route --route-table-id $PUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID > /dev/null 2>&1
echo "Created default public route."
aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET_ID --route-table-id $PUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1
echo "Public route table associated"
# Create a NAT gateway in the public subnet for outgoing traffic from the private network.
echo -n "Creating NAT Gateway..."
NAT_IP_ADDRESS=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
NAT_GATEWAY_ID=$(aws ec2 create-nat-gateway --subnet-id $PUBLIC_SUBNET_ID --allocation-id $NAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text)
aws ec2 create-tags --resources $NAT_IP_ADDRESS --resources $NAT_GATEWAY_ID --tags Key=Name,Value=$CLUSTER_NAME
sleep 10
echo "done."
# Create a route table for the private subnet to the NAT gateway.
echo -n "Creating a route table for the private subnet to the NAT gateway..."
PRIVATE_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
aws ec2 create-tags --resources $PRIVATE_ROUTE_TABLE_ID $NAT_IP_ADDRESS --tags Key=Name,Value=$CLUSTER_NAME-private
aws ec2 create-route --route-table-id $PRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $NAT_GATEWAY_ID > /dev/null 2>&1
aws ec2 associate-route-table --subnet-id $PRIVATE_SUBNET_ID --route-table-id $PRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1
echo "done."
# echo "***********VARIABLE VALUES*********"
# echo "VPC_ID="$VPC_ID
# echo "PUBLIC_SUBNET_ID="$PUBLIC_SUBNET_ID
# echo "PRIVATE_SUBNET_ID="$PRIVATE_SUBNET_ID
# echo "PUBLIC_ROUTE_TABLE_ID="$PUBLIC_ROUTE_TABLE_ID
# echo "PRIVATE_ROUTE_TABLE_ID="$PRIVATE_ROUTE_TABLE_ID
# echo "NAT_GATEWAY_ID="$NAT_GATEWAY_ID
# echo "IGW_ID="$IGW_ID
# echo "NAT_IP_ADDRESS="$NAT_IP_ADDRESS
echo "Setup complete."
echo ""
echo "To make the cluster create commands easier, please run the following commands to set the environment variables:"
echo "export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID"
echo "export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID"
----
+
[role="_additional-resources"]
.Additional resources
* For more about VPC requirements, see the xref:../../../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-vpc_rosa-sts-aws-prereqs[VPC documentation].
. The script outputs two `export` commands that contain your subnet IDs. Copy and run those commands to store the subnet IDs as environment variables for later use:
+
[source,terminal]
----
$ export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID
$ export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID
----
. Confirm your environment variables by running the following command:
+
[source,terminal]
----
$ echo "Public Subnet: $PUBLIC_SUBNET_ID"; echo "Private Subnet: $PRIVATE_SUBNET_ID"
----
+
.Example output
+
[source,terminal]
----
Public Subnet: subnet-0faeeeb0000000000
Private Subnet: subnet-011fe340000000000
----
=== Creating your OIDC configuration
In this workshop, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the ROSA CLI to create your cluster's unique OIDC configuration.
* Create the OIDC configuration by running the following command:
+
[source,terminal]
----
$ export OIDC_ID=$(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id')
----
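Optionally, confirm that the variable was captured, for example:

[source,terminal]
----
$ echo $OIDC_ID
----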
== Creating additional environment variables
* Run the following command to set up environment variables. These variables make it easier to run the command to create a ROSA cluster:
+
[source,terminal]
----
$ export CLUSTER_NAME=<cluster_name>
$ export REGION=<VPC_region>
----
+
[TIP]
====
Run `rosa whoami` to find the VPC region.
====
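For example, with placeholder values (substitute your own cluster name and the region of your VPC):

[source,terminal]
----
$ export CLUSTER_NAME=my-rosa-cluster
$ export REGION=us-east-2
$ echo "Cluster: $CLUSTER_NAME, Region: $REGION"
----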
== Creating a cluster
. *Optional:* Run the following command to create the account-wide roles and policies, including the Operator policies and the AWS IAM roles and policies:
+
[IMPORTANT]
====
Only complete this step if this is the _first time_ you are deploying ROSA in this account and you have _not_ yet created your account roles and policies.
====
+
[source,terminal]
----
$ rosa create account-roles --mode auto --yes
----
. Run the following command to create the cluster:
+
[source,terminal]
----
$ rosa create cluster --cluster-name $CLUSTER_NAME \
--subnet-ids ${PUBLIC_SUBNET_ID},${PRIVATE_SUBNET_ID} \
--hosted-cp \
--region $REGION \
--oidc-config-id $OIDC_ID \
--sts --mode auto --yes
----
The cluster is ready after about 10 minutes. It has a control plane distributed across three AWS availability zones in your selected region and creates two worker nodes in your AWS account.
== Checking the installation status
. Run one of the following commands to check the status of the cluster:
+
* For a detailed view of the cluster status, run:
+
[source,terminal]
----
$ rosa describe cluster --cluster $CLUSTER_NAME
----
+
* For an abridged view of the cluster status, run:
+
[source,terminal]
----
$ rosa list clusters
----
+
* To watch the log as it progresses, run:
+
[source,terminal]
----
$ rosa logs install --cluster $CLUSTER_NAME --watch
----
. Once the state changes to `ready`, your cluster is installed. It might take a few more minutes for the worker nodes to come online.
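For example, to poll the state field every 60 seconds while you wait, you can combine the describe command with standard shell tools (a minimal sketch; adjust the interval as needed):

[source,terminal]
----
$ watch -n 60 "rosa describe cluster --cluster $CLUSTER_NAME | grep -i 'state:'"
----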

View File

@@ -1,232 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="cloud-experts-getting-started-setup"]
= Tutorial: Setup
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cloud-experts-getting-started-setup
toc::[]
//rosaworkshop.io content metadata
//Brought into ROSA product docs 2023-11-13
There are currently two supported credential methods when creating a {product-title} (ROSA) cluster. One method uses an IAM user with the `AdministratorAccess` policy. The second and *recommended* method uses Amazon Web Services (AWS) Security Token Service (STS).
//To be added when the ROSA with STS Explained tutorial is published:
//For more information, see the xref../cloud_experts_tutorials/cloud_experts_rosa_with_sts_explained.adoc#id[ROSA with STS Explained] tutorial. This workshop uses the STS method.
== Prerequisites
Review the prerequisites listed in the xref:../../rosa_planning/rosa-cloud-expert-prereq-checklist.adoc#rosa-cloud-expert-prereq-checklist[Prerequisites for ROSA with STS] checklist.
You will need the following information from your AWS account:
* AWS IAM user
* AWS access key ID
* AWS secret access key
== Setting up a Red Hat account
. If you do not have a Red Hat account, create one on the link:https://console.redhat.com/[Red Hat console].
. Accept the required terms and conditions.
. Check your email for a verification link.
== Installing the AWS CLI
* Install the link:https://aws.amazon.com/cli/[AWS CLI] for your operating system.
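For example, on Linux the installation looks like the following (these are the standard AWS CLI install commands for Linux; see the AWS documentation for other operating systems):

[source,terminal]
----
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version
----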
== Enabling ROSA
[NOTE]
====
Only complete this step if you have *not* enabled ROSA in your AWS account.
====
. Visit the link:https://console.aws.amazon.com/rosa[AWS console] to enable your account to use ROSA.
. Click the orange *Enable OpenShift* button.
+
image::cloud-experts-getting-started-setup-enable.png[]
. After about a minute, a green *service enabled* bar should appear.
+
image::cloud-experts-getting-started-setup-enabled.png[]
== Installing the ROSA CLI
. Install the link:https://console.redhat.com/openshift/downloads[ROSA CLI] for your operating system.
. Download and extract the relevant file for your operating system by using the following command:
+
[source,terminal]
----
$ tar -xvf rosa-linux.tar.gz
----
. Save the file to a location within your `PATH` by using the following command:
+
[source,terminal]
----
$ sudo mv rosa /usr/local/bin/rosa
----
. Run `rosa version` to verify a successful installation.
== Installing the OpenShift CLI
There are a few ways to install the OpenShift CLI (`oc`):
* *Option 1: Using the ROSA CLI:*
.. Run `rosa download oc`.
.. Once downloaded, unzip the file and move the executables into a directory in your `PATH`.
* *Option 2: Using the OpenShift documentation:*
.. Follow the directions on the xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#installing-openshift-cli[documentation page].
* *Option 3: Using your OpenShift cluster:*
.. If you already have an OpenShift cluster, you can access the CLI tools page by clicking the *Question mark*, then *Command Line Tools*.
+
image::cloud_experts_getting_started_setup_cli_tools.png[]
.. Then, download the relevant tool for your operating system.
=== Using `oc` instead of `kubectl`
While `kubectl` can be used with an OpenShift cluster, `oc` is specific to OpenShift. It includes the standard set of features from `kubectl` as well as additional support for OpenShift functionality. For more information, see xref:../../cli_reference/openshift_cli/usage-oc-kubectl.adoc#usage-oc-kubectl[Usage of oc and kubectl commands].
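For example, the following commands illustrate the overlap; `my-project` is a placeholder name:

[source,terminal]
----
# Standard kubectl-style syntax works with oc
$ oc get pods --all-namespaces

# oc adds OpenShift-specific commands, such as creating a project
$ oc new-project my-project

# kubectl can still be used against the same cluster
$ kubectl get pods -n my-project
----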
== Configuring the AWS CLI
To configure the AWS CLI, follow these steps:
. Enter `aws configure` in the terminal.
. Enter your AWS access key ID and press Enter.
. Enter your AWS secret access key and press Enter.
. Enter the default region in which you want to deploy.
. Enter the desired output format, specifying either `table` or `json`.
.Example output
[source, terminal]
----
$ aws configure
AWS Access Key ID: AKIA0000000000000000
AWS Secret Access Key: NGvmP0000000000000000000000000
Default region name: us-east-1
Default output format: table
----
== Verifying the configuration
Verify that the configuration is correct by following these steps:
. Run the following command to query the AWS API:
+
[source,terminal]
----
$ aws sts get-caller-identity
----
. You should see table-formatted or JSON output. Verify that the account information is correct.
+
.Example output
+
[source, terminal]
----
$ aws sts get-caller-identity
------------------------------------------------------------------------------
| GetCallerIdentity |
+--------------+----------------------------------------+--------------------+
| Account | Arn | UserId |
+--------------+----------------------------------------+--------------------+
| 000000000000| arn:aws:iam::00000000000:user/myuser | AIDA00000000000000|
+--------------+----------------------------------------+--------------------+
----
== Ensuring the ELB service role exists
[TIP]
====
Make sure that the service role for Elastic Load Balancing (ELB) already exists; otherwise, cluster deployment can fail.
====
* Run the following command to check for the ELB service role and create it if it is missing:
+
[source,terminal]
----
$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
----
=== Fixing ELB service role errors
. The following error during cluster creation means that an ELB service role does not exist:
+
.Example output
+
[source,terminal]
----
Error: Error creating network Load Balancer: AccessDenied: User: arn:aws:sts::970xxxxxxxxx:assumed-role/ManagedOpenShift-Installer-Role/163xxxxxxxxxxxxxxxx is not authorized to perform: iam:CreateServiceLinkedRole on resource: arn:aws:iam::970xxxxxxxxx:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing"
----
. If you receive the above error during cluster creation, run the following command:
+
[source,terminal]
----
$ aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
----
== Logging in to your Red Hat account
. Enter `rosa login` in a terminal.
. It will prompt you to open a web browser and go to the link:https://console.redhat.com/openshift/token/rosa[Red Hat console].
. Log in, if necessary.
. Click *Load token*.
. Copy the token, paste it into the CLI prompt, and press Enter. Alternatively, you can copy the full `rosa login --token=abc...` command and paste it in the terminal.
+
image::cloud-experts-getting-started-setup-token.png[]
== Verifying credentials
Verify that all the credentials are correct.
. Run `rosa whoami` in the terminal.
+
.Example output
[source,terminal]
----
AWS Account ID: 000000000000
AWS Default Region: us-east-2
AWS ARN: arn:aws:iam::000000000000:user/myuser
OCM API: https://api.openshift.com
OCM Account ID: 1DzGIdIhqEWy000000000000000
OCM Account Name: Your Name
OCM Account Username: you@domain.com
OCM Account Email: you@domain.com
OCM Organization ID: 1HopHfA20000000000000000000
OCM Organization Name: Red Hat
OCM Organization External ID: 0000000
----
. Check the information for accuracy before proceeding.
== Verifying quota
Verify that your AWS account has ample quota in the region in which you will be deploying your cluster.
* Run the following command:
+
[source,terminal]
----
$ rosa verify quota
----
+
.Example output
+
[source,terminal]
----
I: Validating AWS quota...
I: AWS quota ok.
----
* If cluster installation fails, validate the actual AWS resource usage against the xref:../../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas].
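For example, one general-purpose way to inspect your current EC2 service quotas with the AWS CLI (the output is long and not ROSA-specific):

[source,terminal]
----
$ aws service-quotas list-service-quotas --service-code ec2 --output table
----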
== Verifying the `oc` CLI
Verify that the `oc` CLI is installed correctly:
[source,terminal]
----
$ rosa verify openshift-client
----
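You can also check the installed client directly, for example:

[source,terminal]
----
$ oc version --client
----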
You have now successfully set up your account and environment. You are ready to deploy your cluster.
//== Deploying a cluster
//In the next section you will deploy your cluster. There are two mechanisms to do so:
//- Using the ROSA CLI
//- Using the OCM Web User Interface
//Either way is perfectly fine for the purposes of this workshop. Though keep in mind that if you are using the OCM UI, there will be a few extra steps to set it up in order to deploy into your AWS account for the first time. This will not need to be repeated for subsequent deployments using the OCM UI for the same AWS account.
//Please select the desired mechanism in the left menu under "Deploy the cluster".
//*[ROSA]: Red Hat OpenShift Service on AWS
//*[STS]: AWS Security Token Service
//*[OCM]: OpenShift Cluster Manager

View File

@@ -1,272 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-mobb-cli-quickstart"]
= Create a {product-title} cluster using the CLI
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-mobb-cli-quickstart
toc::[]
//Mobb content metadata
//Brought into ROSA product docs 2023-09-12
//---
//date: '2021-06-10'
//title: ROSA Quickstart
//weight: 1
//aliases: [/docs/quickstart-rosa.md]
//Tags: ["AWS", "ROSA", "Quickstarts"]
//authors:
// - Steve Mirman
// - Paul Czarkowski
//---
A quickstart guide to deploying a Red Hat OpenShift cluster on AWS by using the CLI.
== Video Walkthrough
////
Introduction to ROSA by Charlotte Fung on [AWS YouTube channel](https://youtu.be/KRqXxek4GvQ)
<iframe width="560" height="315" src="https://www.youtube.com/embed/KRqXxek4GvQ" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
If you prefer a more visual medium, you can watch [Steve Mirman](https://twitter.com/stevemirman) walk through this quickstart on [YouTube](https://www.youtube.com/watch?v=IFNig_Z_p2Y).
<iframe width="560" height="315" src="https://www.youtube.com/embed/IFNig_Z_p2Y" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
////
== Prerequisites
=== AWS
You must have an AWS account that meets the link:https://console.aws.amazon.com/rosa/home?#/get-started[AWS ROSA Prerequisites].
image::rosa-aws-pre.png[AWS console rosa requisites]
.macOS
//See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html) for alternative install options.
* Install AWS CLI using the macOS command line:
+
[source,bash]
----
$ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
----
.Linux
// See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html) for alternative install options.
* Install AWS CLI using the Linux command line:
+
[source,bash]
----
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
----
.Windows
// See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html) for alternative install options.
* Install AWS CLI using the Windows command line:
+
[source,bash]
----
C:\> msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
----
////
**Docker**
> See [AWS Docs](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-docker.html) for alternative install options.
1. To run the AWS CLI version 2 Docker image, use the docker run command.
```bash
docker run --rm -it amazon/aws-cli command
```
////
=== Prepare AWS Account for OpenShift
. Configure the AWS CLI by running the following command:
+
[source,bash]
----
$ aws configure
----
+
. You will be required to enter an `AWS Access Key ID` and an `AWS Secret Access Key` along with a default region name and output format:
+
.Example output
[source,terminal]
----
$ aws configure
AWS Access Key ID []:
AWS Secret Access Key []:
Default region name [us-east-2]:
Default output format [json]:
----
+
The `AWS Access Key ID` and `AWS Secret Access Key` values can be obtained by logging in to the AWS console and creating an *Access Key* in the *Security Credentials* section of the IAM dashboard for your user.
+
. Validate your credentials:
+
[source,bash]
----
$ aws sts get-caller-identity
----
+
You should receive output similar to the following:
+
.Example output
[source,terminal]
----
{
"UserId": <your ID>,
"Account": <your account>,
"Arn": <your arn>
}
----
+
. If this is a brand new AWS account that has never had an AWS load balancer created in it, run the following command:
+
[source,bash]
----
$ aws iam create-service-linked-role --aws-service-name \
"elasticloadbalancing.amazonaws.com"
----
=== Get a Red Hat Offline Access Token
. Log in to link:https://cloud.redhat.com[cloud.redhat.com].
. Browse to link:https://cloud.redhat.com/openshift/token/rosa[https://cloud.redhat.com/openshift/token/rosa].
. Copy the *Offline Access Token* and save it for the next step.
=== Set up the OpenShift CLI (oc)
. Download the OS-specific OpenShift CLI from link:https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/[Red Hat].
. Unzip the downloaded file on your local machine.
. Place the extracted `oc` executable in your OS path or local directory.
=== Set up the {product-title} CLI
. Download the OS-specific ROSA CLI from link:https://www.openshift.com/products/amazon-openshift/download[Red Hat].
. Unzip the downloaded file on your local machine.
. Place the extracted `rosa` and `kubectl` executables in your OS path or local directory.
. Log in to {product-title}:
+
[source,bash]
----
$ rosa login
----
+
You will be prompted to enter the *Red Hat Offline Access Token* you retrieved earlier and should receive the following message:
+
[source,terminal]
----
Logged in as <email address> on 'https://api.openshift.com'
----
=== Verify {product-title} privileges
Verify that {product-title} has the minimal permissions:
[source,bash]
----
$ rosa verify permissions
----
Expected output: `AWS SCP policies ok`
Verify that {product-title} has the minimal quota:
[source,bash]
----
$ rosa verify quota
----
Expected output: `AWS quota ok`
=== Initialize {product-title}
Initialize the ROSA CLI to complete the remaining validation checks and configurations:
[source,bash]
----
$ rosa init
----
== Deploy {product-title}
=== Interactive Installation
{product-title} can be installed by using command-line parameters or in interactive mode. For an interactive installation, run the following command:
[source,bash]
----
$ rosa create cluster --interactive --mode auto
----
As part of the interactive install, you will be required to enter the following parameters or accept the default values (if applicable):
[source,terminal]
----
Cluster name:
Multiple availability zones (y/N):
AWS region: (select)
OpenShift version: (select)
Install into an existing VPC (y/N):
Compute nodes instance type (optional): (select)
Enable autoscaling (y/N):
Compute nodes [2]:
Additional Security Group IDs (optional): (select)
Machine CIDR [10.0.0.0/16]:
Service CIDR [172.30.0.0/16]:
Pod CIDR [10.128.0.0/14]:
Host prefix [23]:
Private cluster (y/N):
----
[NOTE]
====
The installation process should take between 30 and 45 minutes.
====
=== Get the web console link to the {product-title} cluster
To get the web console link, run the following command, substituting your actual cluster name for `<cluster-name>`:
[source,bash]
----
$ rosa describe cluster --cluster=<cluster-name>
----
=== Create cluster-admin user
By default, only the OpenShift SRE team will have access to the {product-title} cluster. To add a local admin user, run the following command to create the `cluster-admin` account in your cluster, substituting your actual cluster name for `<cluster-name>`:
[source,bash]
----
$ rosa create admin --cluster=<cluster-name>
----
Refresh your web browser and you should see the `cluster-admin` option to log in.
== Delete {product-title}
Deleting a {product-title} cluster consists of two parts:
. Delete the cluster instance, including the removal of AWS resources, substituting your actual cluster name for `<cluster-name>`:
+
[source,bash]
----
$ rosa delete cluster --cluster=<cluster-name>
----
+
Delete the cluster's operator-roles and oidc-provider as shown in the output of the `delete cluster` command. For example:
+
[source,bash]
----
$ rosa delete operator-roles -c <cluster-name>
$ rosa delete oidc-provider -c <cluster-name>
----
+
. Delete the CloudFormation stack, including the removal of the `osdCcsAdmin` user:
+
[source,bash]
----
$ rosa init --delete-stack
----

View File

@@ -1,231 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-mobb-prerequisites-tutorial"]
= Tutorial: {product-title} prerequisites
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-mobb-prerequisites-tutorial
toc::[]
//Mobb content metadata
//Brought into ROSA product docs 2023-09-18
//---
//date: '2021-06-10'
//title: ROSA Prerequisites
//weight: 1
//tags: ["AWS", "ROSA", "Quickstarts"]
//authors:
// - Steve Mirman
// - Paul Czarkowski
//---
//This file is not being built as of 2023-09-22 based on a conversation with Michael McNeill.
This document describes a set of prerequisite steps that you must complete once before you can create your first {product-title} cluster.
== AWS
An AWS account with the link:https://console.aws.amazon.com/rosa/home?#/get-started[AWS {product-title} prerequisites] met.
image::rosa-aws-pre.png[AWS console {product-title} prerequisites]
== AWS CLI
.macOS
* Install AWS CLI using the macOS command line:
+
[source,terminal]
----
$ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
$ sudo installer -pkg AWSCLIV2.pkg -target /
----
+
[NOTE]
====
See link:https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html[AWS Documentation] for alternative install options.
====
.Linux
* Install AWS CLI using the Linux command line:
+
[source,terminal]
----
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
----
+
[NOTE]
====
See link:https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html[AWS Documentation] for alternative install options.
====
.Windows
* Install AWS CLI using the Windows command line:
+
[source,terminal]
----
C:\> msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
----
+
[NOTE]
====
See link:https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html[AWS Documentation] for alternative install options.
====
////
.Docker
* To run the AWS CLI version 2 Docker image, use the docker run command:
+
[source,terminal]
----
$ docker run --rm -it amazon/aws-cli command
----
+
[NOTE]
====
See link:https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-docker.html[AWS Documentation] for alternative install options.
====
////
== Prepare AWS Account for OpenShift
. Configure the AWS CLI by running:
+
[source,terminal]
----
$ aws configure
----
+
. You will be required to enter an `AWS Access Key ID` and an `AWS Secret Access Key` along with a default region name and output format:
+
[source,terminal]
----
$ aws configure
----
+
.Example output
[source,terminal]
----
AWS Access Key ID []:
AWS Secret Access Key []:
Default region name [us-east-2]:
Default output format [json]:
----
+
The `AWS Access Key ID` and `AWS Secret Access Key` values can be obtained by logging in to the AWS console and creating an *Access Key* in the *Security Credentials* section of the IAM dashboard for your user.
+
. Validate your credentials:
+
[source,terminal]
----
$ aws sts get-caller-identity
----
+
You should receive output similar to the following:
+
.Example output
[source,terminal]
----
{
"UserId": <your ID>,
"Account": <your account>,
"Arn": <your arn>
}
----
+
. If this is a new AWS account that has never had an AWS Elastic Load Balancer (ELB) created in it, run the following:
+
[source,terminal]
----
$ aws iam create-service-linked-role --aws-service-name \
"elasticloadbalancing.amazonaws.com"
----
== Get a Red Hat Offline Access Token
. Log into {cluster-manager-url}.
. Navigate to link:https://cloud.redhat.com/openshift/token/rosa[OpenShift Cluster Manager API Token].
. Copy the *Offline Access Token* and save it for the next step.
== Set up the OpenShift CLI (oc)
. Download the operating system specific OpenShift CLI from link:https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/[Red Hat].
. Extract the downloaded file on your local machine.
. Place the extracted `oc` executable in your operating system path or local directory.
== Set up the ROSA CLI (rosa)
. Download the operating system specific ROSA CLI from link:https://www.openshift.com/products/amazon-openshift/download[Red Hat].
. Extract the downloaded file on your local machine.
. Place the extracted `rosa` and `kubectl` executables in your operating system path or local directory.
. Log in to {product-title}:
+
[source,terminal]
----
$ rosa login
----
+
You will be prompted to enter the *Red Hat Offline Access Token* you retrieved earlier and should receive the following message:
+
[source,terminal]
----
Logged in as <email address> on 'https://api.openshift.com'
----
+
. Verify that {product-title} has the minimal quota:
+
[source,terminal]
----
$ rosa verify quota
----
+
Expected output:
+
[source,terminal]
----
AWS quota ok
----
== Associate your AWS account with your Red Hat account
To perform {product-title} cluster provisioning tasks, you must create `ocm-role` and `user-role` IAM resources in your AWS account and link them to your Red Hat organization.
. Create the `ocm-role` that OpenShift Cluster Manager uses to administer and create {product-title} clusters. If this has already been done for your OpenShift Cluster Manager organization, you can skip to creating the `user-role`:
+
[TIP]
====
If you have multiple AWS accounts that you want to associate with your Red Hat Organization, you can use the `--profile` option to specify the AWS profile you want to associate.
====
+
[source,terminal]
----
$ rosa create ocm-role --mode auto --yes
----
+
. Create the User Role that allows OpenShift Cluster Manager to verify that users creating a cluster have access to the current AWS account:
+
[TIP]
====
If you have multiple AWS accounts that you want to associate with your Red Hat Organization, you can use the `--profile` option to specify the AWS profile you want to associate.
====
+
[source,terminal]
----
$ rosa create user-role --mode auto --yes
----
+
. Create the {product-title} Account Roles which give the {product-title} installer and machines permission to perform actions in your account:
+
[source,terminal]
----
$ rosa create account-roles --mode auto --yes
----
== Conclusion
You are now ready to create your first cluster.
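As a preview of that step, a minimal STS-based creation command looks like the following sketch; `<cluster-name>` is a placeholder, and the full set of options is covered in the cluster creation documentation:

[source,terminal]
----
$ rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes
----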

View File

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * observability/logging/sd-accessing-the-service-logs.adoc

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
:_mod-docs-content-type: PROCEDURE
[id="config-github-idp_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
:_mod-docs-content-type: PROCEDURE
[id="config-gitlab-idp_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
:_mod-docs-content-type: PROCEDURE
[id="config-google-idp_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
ifeval::["{context}" == "config-identity-providers"]
:osd-distro:

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
:_mod-docs-content-type: PROCEDURE
[id="config-ldap-idp_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
:_mod-docs-content-type: PROCEDURE
[id="config-openid-idp_{context}"]

View File

@@ -3,7 +3,6 @@
// * scalability_and_performance/using-node-tuning-operator.adoc
// * post_installation_configuration/node-tasks.adoc
// * nodes/nodes/nodes-node-tuning-operator.adoc
// * nodes/nodes/rosa-tuning-config.adoc
ifeval::["{context}" == "rosa-tuning-config"]
:rosa-hcp-tuning:

View File

@@ -4,7 +4,6 @@
// * operators/operator-reference.adoc
// * post_installation_configuration/node-tasks.adoc
// * nodes/nodes/nodes-node-tuning-operator.adoc
// * nodes/nodes/rosa-tuning-config.adoc
ifeval::["{context}" == "operator-reference"]
:operators:

View File

@@ -2,7 +2,6 @@
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-csi-aws-efs.adoc
// * storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc
// * storage/container_storage_interface/persistent-storage-csi-smb-cifs.adoc
:_mod-docs-content-type: PROCEDURE
@@ -11,7 +10,7 @@
The {FeatureName} CSI Driver Operator (a Red{nbsp}Hat Operator) is not installed in {product-title} by default. Use the following procedure to install and configure the {FeatureName} CSI Driver Operator in your cluster.
// The following ifeval and restricted ifdef statements exclude STS and a note about avoiding
// The following ifeval and restricted ifdef statements exclude STS and a note about avoiding
// installing community operator content for CSI drivers other than EWS
ifeval::["{context}" == "persistent-storage-csi-aws-efs"]
@@ -66,7 +65,7 @@ endif::restricted[]
+
After the installation finishes, the {FeatureName} CSI Operator is listed in the *Installed Operators* section of the web console.
// The following ifeval statements exclude STS and a note about avoiding
// The following ifeval statements exclude STS and a note about avoiding
// installing community operator content for CSI drivers other than EWS
ifeval::["{context}" == "persistent-storage-csi-aws-efs"]

View File

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-accessing-cluster.adoc
// * rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc

View File

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-accessing-cluster.adoc
// * rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc

View File

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-accessing-cluster.adoc
// * rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc
// * using-rbac.adoc

View File

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-accessing-cluster.adoc
// * rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc
// * using-rbac.adoc

View File

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * upgrading/rosa_upgrading/rosa-upgrading-sts.adoc
// * upgrading/rosa-upgrading-sts.adoc
:_mod-docs-content-type: PROCEDURE
[id="rosa-deleting-cluster-upgrade-cli_{context}"]
= Deleting a ROSA cluster upgrade with the ROSA CLI

View File

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * upgrading/rosa_upgrading/rosa-upgrading-sts.adoc
// * upgrading/rosa-upgrading-sts.adoc
:_mod-docs-content-type: PROCEDURE
[id="rosa-deleting-cluster-upgrade-ocm_{context}"]
= Deleting an upgrade with the {cluster-manager} console

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * rosa_architecture/rosa-sts-about-iam-resources.adoc
// * rosa_planning/rosa-hcp-iam-resources.adoc
// * rosa_planning/rosa-sts-ocm-role.adoc
:_mod-docs-content-type: PROCEDURE

View File

@@ -10,7 +10,6 @@
// * rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc
// * rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc
// * rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc
// * rosa_planning/rosa-hcp-iam-resources.adoc
// * rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc
ifeval::["{context}" == "rosa-hcp-cluster-no-cni"]

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * rosa_upgrading/rosa-upgrading.adoc
// * rosa_upgrading/rosa-upgrading-sts.adoc
// * upgrading/rosa-upgrading-sts.adoc
ifeval::["{context}" == "rosa-upgrading-sts"]
:sts:

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * rosa_upgrading/rosa-upgrading.adoc
// * rosa_upgrading/rosa-upgrading-sts.adoc
// * upgrading/rosa-upgrading-sts.adoc
// adding this ifeval hcp-in-rosa when a hcp procedure appears in the rosa distro as well as the hcp distro

View File

@@ -7,7 +7,6 @@
//
// * storage/persistent_storage/persistent-storage-aws.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE

View File

@@ -2,7 +2,6 @@
//
// * osd_install_access_delete_cluster/config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc
// * rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-config-identity-providers.adoc
:_mod-docs-content-type: CONCEPT
[id="understanding-idp_{context}"]

View File

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * observability/logging/sd-accessing-the-service-logs.adoc

View File

@@ -1,19 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-tuning-config"]
= Using the Node Tuning Operator on {hcp-title} clusters
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-tuning-config
toc::[]
{product-title} supports the Node Tuning Operator to improve performance of your nodes on your clusters. Prior to creating a node tuning configuration, you must create a custom tuning specification.
include::modules/node-tuning-operator.adoc[leveloffset=+1]
include::modules/custom-tuning-specification.adoc[leveloffset=+1]
include::modules/rosa-creating-node-tuning.adoc[leveloffset=+1]
include::modules/rosa-modifying-node-tuning.adoc[leveloffset=+1]
include::modules/rosa-deleting-node-tuning.adoc[leveloffset=+1]

View File

@@ -1,10 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-viewing-logs"]
= Viewing cluster logs in the AWS Console
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-viewing-logs
toc::[]
You can view forwarded cluster logs in the AWS console.
include::modules/rosa-view-cloudwatch-logs.adoc[leveloffset=+1]

View File

@@ -1,126 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="learn_more_about_openshift"]
= Learn more about {product-title}
include::_attributes/common-attributes.adoc[]
:context: welcome-personas
toc::[]
Use the following sections to find content to help you learn about and use {product-title}.
[id="architect"]
== Architect
[options="header",cols="3*"]
|===
| Learn about {product-title} |Plan an {product-title} deployment |Additional resources
| link:https://www.openshift.com/blog/enterprise-kubernetes-with-openshift-part-one?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[Enterprise Kubernetes with OpenShift]
| link:https://access.redhat.com/articles/4128421[Tested platforms]
| link:https://www.openshift.com/blog?hsLang=en-us[OpenShift blog]
| xref:../architecture/architecture.adoc#architecture[Architecture]
| xref:../security/container_security/security-understanding.adoc#understanding-security[Security and compliance]
| link:https://www.openshift.com/learn/whats-new[What's new in {product-title}]
|
| xref:../networking/understanding-networking.adoc#understanding-networking[Networking]
| link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} life cycle]
|
| xref:../backup_and_restore/index.adoc#backup-restore-overview[Backup and restore]
|
|===
[id="cluster-administrator"]
== Cluster Administrator
[options="header",cols="4*"]
|===
|Learn about {product-title} |Deploy {product-title} |Manage {product-title} |Additional resources
| link:https://www.openshift.com/blog/enterprise-kubernetes-with-openshift-part-one?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[Enterprise Kubernetes with OpenShift]
| xref:../installing/overview/installing-preparing.adoc#installing-preparing[Installing {product-title}]
| xref:../support/remote_health_monitoring/using-insights-to-identify-issues-with-your-cluster.adoc#using-insights-to-identify-issues-with-your-cluster[Using Insights to identify issues with your cluster]
| xref:../support/getting-support.adoc#getting-support[Getting Support]
| xref:../architecture/architecture.adoc#architecture[Architecture]
| xref:../machine_configuration/index.adoc#machine-config-overview[Machine configuration overview]
| link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles]
| link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal]
| xref:../networking/understanding-networking.adoc#understanding-networking[Networking]
| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring]
| link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} Life Cycle]
|
| xref:../storage/understanding-ephemeral-storage.adoc#understanding-ephemeral-storage[Storage]
|
|
|
| xref:../backup_and_restore/index.adoc#backup-restore-overview[Backup and restore]
|
|
|
| xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[Updating a cluster]
|
|
|===
[id="application_site_reliability_engineer"]
== Application Site Reliability Engineer (App SRE)
[options="header",cols="3*"]
|===
|Learn about {product-title} |Deploy and manage applications |Additional resources
| link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal]
| xref:../applications/projects/working-with-projects.adoc#working-with-projects[Projects]
| xref:../support/getting-support.adoc#getting-support[Getting Support]
| xref:../architecture/architecture.adoc#architecture[Architecture]
| xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Operators]
| link:https://access.redhat.com/articles/4217411[OpenShift Knowledgebase articles]
|
| link:https://access.redhat.com/support/policy/updates/openshift#ocp4_phases[{product-title} Life Cycle]
|
| link:https://www.openshift.com/blog/tag/logging[Blogs about logging]
|
|
| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[Monitoring]
|
|
|===
[id="Developer"]
== Developer
[options="header",cols="2*"]
|===
|Learn about application development in {product-title} |Deploy applications
| link:https://developers.redhat.com/products/openshift/getting-started#assembly-field-sections-13455[Getting started with OpenShift for developers (interactive tutorial)]
| xref:../applications/creating_applications/odc-creating-applications-using-developer-perspective.adoc#odc-creating-applications-using-developer-perspective[Creating applications]
| link:https://developers.redhat.com/[Red Hat Developers site]
| xref:../cicd/builds/understanding-image-builds.adoc#understanding-image-builds[Builds]
| link:https://developers.redhat.com/products/openshift-dev-spaces/overview[{openshift-dev-spaces-productname} (formerly Red Hat CodeReady Workspaces)]
| xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Operators]
|
| xref:../openshift_images/index.adoc#overview-of-images[Images]
|
| xref:../cli_reference/odo-important-update.adoc#odo-important_update[Developer-focused CLI]
|
|===

View File

@@ -1,38 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-admission-plug-ins"]
= Admission plugins
include::_attributes/common-attributes.adoc[]
:context: admission-plug-ins
toc::[]
Admission plugins are used to help regulate how {product-title} functions.
// Concept modules
include::modules/admission-plug-ins-about.adoc[leveloffset=+1]
include::modules/admission-plug-ins-default.adoc[leveloffset=+1]
include::modules/admission-webhooks-about.adoc[leveloffset=+1]
include::modules/admission-webhook-types.adoc[leveloffset=+1]
// user (groups=["dedicated-admins" "system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held, clusterroles.rbac.authorization.k8s.io "system:openshift:online:my-webhook-server" not found, cannot get resource "rolebindings", cannot create resource "apiservices", cannot create resource "validatingwebhookconfigurations"
ifndef::openshift-rosa,openshift-dedicated[]
// Procedure module
include::modules/configuring-dynamic-admission.adoc[leveloffset=+1]
endif::openshift-rosa,openshift-dedicated[]
[role="_additional-resources"]
[id="admission-plug-ins-additional-resources"]
== Additional resources
ifndef::openshift-rosa,openshift-dedicated[]
* xref:../networking/hardware_networks/configuring-sriov-operator.adoc#configuring-sriov-operator[Configuring the SR-IOV Network Operator]
endif::openshift-rosa,openshift-dedicated[]
* xref:../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations_dedicating_nodes-scheduler-taints-tolerations[Controlling pod placement using node taints]
* xref:../nodes/pods/nodes-pods-priority.adoc#admin-guide-priority-preemption-names_nodes-pods-priority[Pod priority names]

View File

@@ -1,8 +1,4 @@
:_mod-docs-content-type: ASSEMBLY
// This assembly is the target of a symbolic link.
// The symbolic link at openshift-docs/rosa_planning/rosa-hcp-iam-resources.adoc
// points to this file's real location, which is
// openshift-docs/rosa_architecture/rosa-sts-about-iam-resources.adoc
ifndef::openshift-rosa-hcp[]
[id="rosa-sts-about-iam-resources"]
= About IAM resources for STS clusters

View File

@@ -1,76 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="dedicated-aws-access"]
= Accessing AWS infrastructure
include::_attributes/common-attributes.adoc[]
:context: dedicated-aws-access
toc::[]
Amazon Web Services (AWS) infrastructure access allows
link:https://access.redhat.com/node/3610411[Customer Portal Organization Administrators]
and cluster owners to enable AWS Identity and Access Management (IAM) users to
have federated access to the AWS Management Console for their {product-title}
cluster. Administrators can select between Network Management or Read-only
access options.
[id="dedicated-configuring-aws-access"]
== Configuring AWS infrastructure access
== Prerequisites
* An AWS account with IAM permissions.
[id="dedicated-aws-account-creation"]
=== Creating an AWS account with IAM permissions
Before you can configure access to AWS infrastructure, you will need to set up IAM permissions in your AWS account.
.Procedure
. Log in to your AWS account. If necessary, you can create a new AWS account by following link:https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/[AWS documentation].
. Create an IAM user with `STS:AllowAssumeRole` permissions within the AWS account.
.. Open the IAM dashboard of the AWS Management Console.
.. In the *Policies* section, click *Create Policy*.
.. Select the `JSON` tab and replace the existing text with the following:
+
[source,json]
----
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
}
]
}
----
.. Click *Review Policy*.
.. Provide an appropriate name and description, then click *Create Policy*.
.. In the *Users* section, click *Add user*.
.. Provide an appropriate user name.
.. Select *AWS Management Console access* and other roles as needed.
.. Adjust the password requirements as necessary for your organization, then click *Next: Policy*.
.. Click the *Attach existing policies directly* option.
.. Search for and check the policy created in previous steps.
+
[NOTE]
====
It is not recommended to set a permissions boundary.
====
.. Click *Next: Tags*, then click *Next: Review*. Confirm the configuration is correct.
.. Click *Create user*, then click *Close* on the success page.
. Gather the IAM user's Amazon Resource Name (ARN). The ARN will have the following format: `arn:aws:iam::000111222333:user/username`.
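If you prefer the command line, you can also retrieve the ARN with the AWS CLI, for example (the user name is a placeholder):

[source,terminal]
----
$ aws iam get-user --user-name <username> --query User.Arn --output text
----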
[id="dedicated-aws-ocm-iam-role"]
=== Granting the IAM role from {cluster-manager-first}
.Procedure
. Open the {product-title} Cluster Manager in your browser and select the cluster you want to allow AWS infrastructure access.
. Select the *Access control* tab, and scroll to the *AWS Infrastructure Access* section.
. Paste the AWS IAM ARN and select `Network Management` or `Read-only` permissions, then click *Grant role*.
. Copy the AWS OSD Console URL to your clipboard.
. Sign in to your AWS account with your Account ID or alias, IAM user name, and password.
. In a new browser tab, paste the AWS OSD Console URL. This URL routes you to the AWS Switch Role page.
. Your account number and role will be filled in already. Choose a display name if necessary, then click *Switch Role*. You will now see *VPC* under *Recently visited services*.

View File

@@ -16,13 +16,6 @@ guide.
include::modules/dedicated-aws-vpc-peering-terms.adoc[leveloffset=+1]
include::modules/dedicated-aws-vpc-initiating-peering.adoc[leveloffset=+1]
ifdef::openshift-dedicated[]
[role="_additional-resources"]
.Additional resources
* xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-access.adoc#dedicated-aws-ocm-iam-role[Logging into the Web Console for the AWS Account]
endif::[]
include::modules/dedicated-aws-vpc-accepting-peering.adoc[leveloffset=+1]
include::modules/dedicated-aws-vpc-configuring-routing-tables.adoc[leveloffset=+1]
include::modules/dedicated-aws-vpc-verifying-troubleshooting.adoc[leveloffset=+1]

View File

@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="dedicated-aws-private-cluster"]
= Configuring a private cluster
include::_attributes/common-attributes.adoc[]
:context: dedicated-private-cluster
toc::[]
An {product-title} cluster can be made private so that internal applications can be hosted inside a corporate network. In addition, private clusters can be configured to have only internal API endpoints for increased security.
{product-title} administrators can choose between public and private cluster configuration from within *{cluster-manager}*. Privacy settings can be configured during cluster creation or after a cluster is established.
include::modules/dedicated-enable-private-cluster-new.adoc[leveloffset=+1]
include::modules/dedicated-enable-private-cluster-existing.adoc[leveloffset=+1]
include::modules/dedicated-enable-public-cluster.adoc[leveloffset=+1]
[NOTE]
====
Red Hat Service Reliability Engineers (SREs) can access a public or private cluster through the `cloud-ingress-operator` and existing ElasticSearch Load Balancer or Amazon S3 framework. SREs can access clusters through a secure endpoint to perform maintenance and service tasks.
====

View File

@@ -1,27 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: dedicated-understanding-aws
[id="welcome-index"]
= Understanding cloud infrastructure access
include::_attributes/common-attributes.adoc[]
Amazon Web Services (AWS) infrastructure access permits
link:https://access.redhat.com/node/3610411[Customer Portal Organization Administrators]
and cluster owners to enable AWS Identity and Access Management (IAM) users to
have federated access to the AWS Management Console for their {product-title}
cluster.
[id="enabling-aws-access"]
== Enabling AWS access
AWS access can be granted for customer AWS users, and private cluster access can be implemented to suit the needs of your {product-title} environment.
Get started with xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-access.adoc#dedicated-aws-access[Accessing AWS infrastructure] for your {product-title} cluster by creating an AWS user and account and providing that user with access to the {product-title} AWS account.
After you have access to the {product-title} AWS account, use one or more of the following methods to establish a private connection to your cluster:
- xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-peering.adoc#dedicated-aws-peering[Configuring AWS VPC peering]: Enable VPC peering to route network traffic between two private IP addresses.
- xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-vpn.adoc#dedicated-aws-vpn[Configuring AWS VPN]: Establish a Virtual Private Network to securely connect your private network to your Amazon Virtual Private Cloud.
- xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-dc.adoc#dedicated-aws-dc[Configuring AWS Direct Connect]: Configure AWS Direct Connect to establish a dedicated network connection between your private network and an AWS Direct Connect location.
After configuring your cloud infrastructure access, learn more about xref:../../rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-private-cluster.adoc#dedicated-aws-private-cluster[Configuring a private cluster].

View File

@@ -1 +0,0 @@
../../_attributes/

View File

@@ -1 +0,0 @@
../images

View File

@@ -1 +0,0 @@
../modules

View File

@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-accessing-the-service-logs"]
= Accessing the service logs for ROSA clusters
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-accessing-the-service-logs
toc::[]
[role="_abstract"]
You can view the service logs for your {product-title} (ROSA) clusters by using {cluster-manager-first}. The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//You can view the service logs for your {product-title} (ROSA) clusters by using {cluster-manager-first} or the {cluster-manager} CLI (`ocm`). The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
Additionally, you can add notification contacts for a ROSA cluster. Subscribed users receive emails about cluster events that require customer action, known cluster incidents, upgrade maintenance, and other topics.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//include::modules/viewing-the-service-logs.adoc[leveloffset=+1]
//include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+2]
//include::modules/viewing-the-service-logs-cli.adoc[leveloffset=+2]
include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+1]
include::modules/adding-cluster-notification-contacts.adoc[leveloffset=+1]

View File

@@ -1,10 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="rosa-viewing-logs"]
= Viewing cluster logs in the AWS Console
:context: rosa-viewing-logs
toc::[]
View forwarded cluster logs in the AWS console.
include::modules/rosa-view-cloudwatch-logs.adoc[leveloffset=+1]
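As an illustrative example, if cluster logs are forwarded to Amazon CloudWatch, you can list and follow the forwarded log groups directly with the AWS CLI. The log group names depend on the prefix that was configured when log forwarding was set up, so the names below are placeholders; the `aws logs tail` command requires AWS CLI version 2:

[source,terminal]
----
$ aws logs describe-log-groups --log-group-name-prefix <cluster_name>

$ aws logs tail <cluster_name>.audit --follow
----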

View File

@@ -1 +0,0 @@
../snippets

View File

@@ -1,28 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="rosa-accessing-cluster"]
= Accessing a ROSA cluster
:context: rosa-accessing-cluster
toc::[]
As a best practice, access your {product-title} (ROSA) cluster using an identity provider (IDP) account. However, the cluster administrator who created the cluster can access it using the quick access procedure.
This document describes how to access a cluster and set up an IDP using the `rosa` CLI. Alternatively, you can set up an IDP account by using the {cluster-manager} console.
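As a rough sketch of the quick access path, the cluster creator can add a `cluster-admin` user with the ROSA CLI and then log in with the credentials that the command prints. The cluster name and API URL are placeholders:

[source,terminal]
----
$ rosa create admin --cluster=<cluster_name>

$ oc login <api_url> --username cluster-admin --password <password>
----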
include::snippets/rosa-sts.adoc[]
include::modules/rosa-accessing-your-cluster-quick.adoc[leveloffset=+1]
include::modules/rosa-accessing-your-cluster.adoc[leveloffset=+1]
include::modules/rosa-create-cluster-admins.adoc[leveloffset=+1]
include::modules/rosa-create-dedicated-cluster-admins.adoc[leveloffset=+1]
[id="additional-resources-cluster-access"]
[role="_additional-resources"]
== Additional resources
* xref:../../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers]
* xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow]

View File

@@ -1,28 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="rosa-config-identity-providers"]
= Configuring identity providers
:context: rosa-config-identity-providers
toc::[]
After your {product-title} (ROSA) cluster is created, you must configure identity providers to determine how users log in to access the cluster.
The following topics describe how to configure an identity provider by using the {cluster-manager} console. Alternatively, you can use the `rosa` CLI to create an identity provider and access the cluster.
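For example, a GitHub identity provider could be added from the command line instead of the console. The values shown are placeholders for your own OAuth application and GitHub organization:

[source,terminal]
----
$ rosa create idp --cluster=<cluster_name> \
    --type github \
    --name github-idp \
    --client-id <client_id> \
    --client-secret <client_secret> \
    --organizations <github_organization>
----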
include::snippets/rosa-sts.adoc[]
include::modules/understanding-idp.adoc[leveloffset=+1]
include::modules/identity-provider-parameters.adoc[leveloffset=+2]
include::modules/config-github-idp.adoc[leveloffset=+1]
include::modules/config-gitlab-idp.adoc[leveloffset=+1]
include::modules/config-google-idp.adoc[leveloffset=+1]
include::modules/config-ldap-idp.adoc[leveloffset=+1]
include::modules/config-openid-idp.adoc[leveloffset=+1]
include::modules/config-htpasswd-idp.adoc[leveloffset=+1]
[id="additional-resources-idps"]
[role="_additional-resources"]
== Additional resources
* xref:../../rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc#rosa-sts-accessing-cluster[Accessing a cluster]
* xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow]

View File

@@ -1 +0,0 @@
../rosa_architecture/rosa-sts-about-iam-resources.adoc

View File

@@ -7,7 +7,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
This document details the tested cluster maximums for {hcp-title-first} clusters, along with information about the test environment and configuration used to test the maximums. For {hcp-title} clusters, the control plane is fully managed in the service AWS account and will automatically scale with the cluster.
This document details the tested cluster maximums for {hcp-title-first} clusters, along with information about the test environment and configuration used to test the maximums. For {hcp-title} clusters, the control plane is fully managed in the service AWS account and will automatically scale with the cluster.
include::modules/sd-hcp-planning-cluster-maximums.adoc[leveloffset=+1]

View File

@@ -1,83 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-persistent-storage-aws-efs-csi"]
= Setting up the AWS Elastic File System CSI Driver Operator
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-persistent-storage-aws-efs-csi
toc::[]
//Content similar to persistent-storage-csi-aws-efs.adoc and osd-persistent-storage-aws-efs-csi.adoc. Modules are reused.
[IMPORTANT]
====
This procedure is specific to the Amazon Web Services Elastic File System (AWS EFS) CSI Driver Operator, which is only applicable for {product-title} 4.10 and later versions.
====
== Overview
{product-title} is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File System (EFS).
Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver.
After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.
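A quick way to confirm that the Operator and driver pods are running after installation is to list the pods in that namespace, for example:

[source,terminal]
----
$ oc get pods -n openshift-cluster-csi-drivers | grep efs
----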
* The _AWS EFS CSI Driver Operator_, after being installed, does not create a storage class by default that you can use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS `StorageClass`.
The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand.
This eliminates the need for cluster administrators to pre-provision storage.
* The _AWS EFS CSI driver_ enables you to create and mount AWS EFS PVs.
[NOTE]
====
AWS EFS only supports regional volumes, not zonal volumes.
====
include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
:FeatureName: AWS EFS
include::modules/persistent-storage-csi-olm-operator-install.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc#efs-sts_rosa-persistent-storage-aws-efs-csi[Configuring AWS EFS CSI Driver with STS]
include::modules/persistent-storage-csi-efs-sts.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-olm-operator-install_rosa-persistent-storage-aws-efs-csi[Installing the AWS EFS CSI Driver Operator]
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/authentication_and_authorization/index#cco-ccoctl-configuring_cco-mode-sts[Configuring the Cloud Credential Operator utility]
:StorageClass: AWS EFS
:Provisioner: efs.csi.aws.com
include::modules/storage-create-storage-class.adoc[leveloffset=+1]
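As a minimal sketch, an AWS EFS storage class that uses dynamic provisioning through EFS access points might look like the following. The file system ID and parameter values are placeholders that depend on your own EFS file system:

[source,terminal]
----
$ cat <<EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0
  directoryPerms: "700"
EOF
----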
include::modules/persistent-storage-csi-efs-create-volume.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-dynamic-provisioning-aws-efs.adoc[leveloffset=+1]
If you have problems setting up dynamic provisioning, see xref:../../storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc#efs-troubleshooting_rosa-persistent-storage-aws-efs-csi[AWS EFS troubleshooting].
include::modules/persistent-storage-csi-efs-static-pv.adoc[leveloffset=+1]
If you have problems setting up static PVs, see xref:../../storage/persistent_storage/rosa-persistent-storage-aws-efs-csi.adoc#efs-troubleshooting_rosa-persistent-storage-aws-efs-csi[AWS EFS troubleshooting].
include::modules/persistent-storage-csi-efs-security.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-efs-troubleshooting.adoc[leveloffset=+1]
:FeatureName: AWS EFS
include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=+1]
[role="_additional-resources"]
== Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes]

View File

@@ -1,37 +0,0 @@
[id="rosa-updating-cluster-prepare"]
= Preparing to upgrade ROSA to 4.9
include::_attributes/common-attributes.adoc[]
ifdef::openshift-dedicated,openshift-rosa[]
include::_attributes/attributes-openshift-dedicated.adoc[]
endif::[]
:context: rosa-updating-cluster-prepare
toc::[]
Upgrading your {product-title} clusters to OpenShift 4.9 requires you to evaluate and migrate your APIs, because Kubernetes 1.22, on which OpenShift 4.9 is based, removed a significant number of deprecated APIs.
Before you can upgrade your {product-title} clusters, you must update the required tools to the appropriate version.
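For example, you can check the installed versions of the command-line tools before you upgrade; the minimum required versions depend on the target release:

[source,terminal]
----
$ rosa version

$ oc version --client
----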
include::modules/rosa-upgrading-preparing-4-8-to-4-9.adoc[leveloffset=+1]
include::modules/upgrade-49-acknowledgement.adoc[leveloffset=+2]
// Removed Kubernetes APIs
include::modules/osd-update-preparing-list.adoc[leveloffset=+2]
[id="rosa-evaluating-cluster-removed-apis"]
== Evaluating your cluster for removed APIs
Several methods are available to help administrators identify where APIs that will be removed are still in use. However, {product-title} cannot identify every instance, especially idle workloads or external tools. It is the responsibility of the administrator to properly evaluate all workloads and other integrations for instances of removed APIs.
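One common starting point is the `APIRequestCount` API, which records recent requests for each API version in the cluster. The following query is an illustrative way to list APIs that are flagged for removal and have received requests:

[source,terminal]
----
$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
----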
// Reviewing alerts to identify uses of removed APIs
include::modules/osd-update-preparing-evaluate-alerts.adoc[leveloffset=+2]
// Using APIRequestCount to identify uses of removed APIs
include::modules/osd-update-preparing-evaluate-apirequestcount.adoc[leveloffset=+2]
// Using APIRequestCount to identify which workloads are using the removed APIs
include::modules/osd-update-preparing-evaluate-apirequestcount-workloads.adoc[leveloffset=+2]
// Migrating instances of removed APIs
include::modules/osd-update-preparing-migrate.adoc[leveloffset=+1]

View File

@@ -1,36 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="rosa-upgrading"]
= Upgrading ROSA Classic clusters
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-upgrading
toc::[]
[id="rosa-lifecycle-policy_{context}"]
== Life cycle policies and planning
To plan an upgrade, review the xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle]. The life cycle page includes release definitions, support and upgrade requirements, installation policy information, and life cycle dates.
Upgrades are manually initiated or automatically scheduled. Red Hat Site Reliability Engineers (SREs) monitor upgrade progress and remedy any issues encountered.
[id="rosa-sts-upgrading-a-cluster"]
== Upgrading a ROSA cluster
There are three methods to upgrade {product-title} (ROSA) clusters:
* Individual upgrades through the ROSA CLI (`rosa`)
* Individual upgrades through the {cluster-manager-url}
* Recurring upgrades through the {cluster-manager-url}
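For the first of these methods, a manual upgrade from the ROSA CLI is typically a two-step sketch: list the available upgrade versions, then schedule one. The cluster name and version are placeholders:

[source,terminal]
----
$ rosa list upgrades --cluster=<cluster_name>

$ rosa upgrade cluster --cluster=<cluster_name> --version <version>
----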
[NOTE]
====
For steps to upgrade a ROSA cluster that uses the AWS Security Token Service (STS), see xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading ROSA clusters with STS].
====
[NOTE]
====
When following a scheduled upgrade policy, there might be a delay of an hour or more before the upgrade process begins, even if the upgrade is scheduled to start immediately. Additionally, the duration of the upgrade might vary based on your workload configuration.
====
include::modules/rosa-upgrading-cli-tutorial.adoc[leveloffset=+2]
include::modules/rosa-upgrading-manual-ocm.adoc[leveloffset=+2]
include::modules/rosa-upgrading-automatic-ocm.adoc[leveloffset=+2]