mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 03:47:04 +01:00

OSDOCS-15926 - Removing unused OSD files

(cherry picked from commit d9774fb40a)
Olga Tikhomirova
2025-10-02 00:16:24 -07:00
parent 5c9b62397e
commit 3903bf2d75
69 changed files with 8 additions and 2690 deletions


@@ -61,10 +61,6 @@ Topics:
File: policy-understand-availability
- Name: Update life cycle
File: osd-life-cycle
# Created a new assembly in ROSA/OSD. In OCP, the assembly is in a book that is not in ROSA/OSD
# - Name: About admission plugins
# File: osd-admission-plug-ins
# Distros: openshift-dedicated
---
Name: Architecture
Dir: architecture
@@ -135,9 +131,6 @@ Topics:
File: creating-a-gcp-cluster-sa
- Name: Creating a cluster on Google Cloud with a Red Hat cloud account
File: creating-a-gcp-cluster-redhat-account
#- Name: Configuring your identity providers
# File: config-identity-providers
#- Name: Revoking privileges and access to an OpenShift Dedicated cluster
# File: osd-revoking-cluster-privileges
- Name: Deleting an OpenShift Dedicated cluster on Google Cloud


@@ -1,26 +0,0 @@
// Module included in the following assemblies:
//
// * assemblies/quickstart-osd.adoc
:_mod-docs-content-type: PROCEDURE
[id="add-user_{context}"]
= Adding a user
Administrator roles are managed by using a `dedicated-admins` group on the cluster. You can add and remove users by using {cluster-manager-first}.
.Procedure
. Navigate to the *Cluster List* page and select the cluster you want to add users to.
. Click the *Access control* tab.
. Under the *Cluster administrative users* heading, click *Add User*.
. Enter the user ID you want to add.
. Click *Add user*.
.Verification
* You now see the user listed under the *Cluster administrative users* heading.
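+
If you also have CLI access to the cluster, you can optionally confirm the change from a terminal. This is a minimal check and assumes the `dedicated-admins` group described above:
+
[source,terminal]
----
$ oc get group dedicated-admins
----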


@@ -1,36 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * observability/logging/sd-accessing-the-service-logs.adoc
:_mod-docs-content-type: PROCEDURE
[id="adding-cluster-notification-contacts_{context}"]
= Adding cluster notification contacts
You can add notification contacts for your
ifdef::openshift-dedicated[]
{product-title}
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
{product-title} (ROSA)
endif::openshift-rosa[]
cluster. When an event occurs that triggers a cluster notification email, subscribed users are notified.
.Procedure
. Navigate to {cluster-manager-url} and select your cluster.
. On the *Support* tab, under the *Notification contacts* heading, click *Add notification contact*.
. Enter the Red Hat username or email of the contact you want to add.
+
[NOTE]
====
The username or email address must relate to a user account in the Red Hat organization where the cluster is deployed.
====
. Click *Add contact*.
.Verification
* You see a confirmation message when you have successfully added the contact. The user appears under the *Notification contacts* heading on the *Support* tab.


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * networking/configuring-cluster-wide-proxy.adoc
// * networking/ovn_kubernetes_network_provider/configuring-cluster-wide-proxy.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-a-proxy-after-installation-cli_{context}"]


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * networking/configuring-cluster-wide-proxy.adoc
// * networking/ovn_kubernetes_network_provider/configuring-cluster-wide-proxy.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-a-proxy-after-installation-ocm_{context}"]


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * networking/configuring-cluster-wide-proxy.adoc
// * networking/ovn_kubernetes_network_provider/configuring-cluster-wide-proxy.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-a-proxy-during-installation-cli_{context}"]


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * networking/configuring-cluster-wide-proxy.adoc
// * networking/ovn_kubernetes_network_provider/configuring-cluster-wide-proxy.adoc
:_mod-docs-content-type: CONCEPT
[id="configuring-a-proxy-during-installation-ocm_{context}"]


@@ -1,27 +0,0 @@
// Module included in the following assemblies:
//
// * osd_architecture/osd-architecture.adoc
:_mod-docs-content-type: CONCEPT
[id="container-benefits_{context}"]
= The benefits of containerized applications
Applications were once expected to be installed on operating systems that included all of the dependencies for the application. However, containers provide a standard way to package your application code, configurations, and dependencies into a single unit that can run as a resource-isolated process on a compute server. To run your application in Kubernetes on {product-title}, you must first containerize it by creating a container image that you store in a container registry.
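For example, a minimal way to build and publish such an image with Podman might look like the following; the registry, organization, and image names are placeholders, not values required by {product-title}:
[source,terminal]
----
$ podman build -t quay.io/<organization>/<application>:latest .
$ podman push quay.io/<organization>/<application>:latest
----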
[id="operating-system-benefits_{context}"]
== Operating system benefits
Containers use small, dedicated Linux operating systems without a kernel. The file system, networking, cgroups, process tables, and namespaces are separate from the host Linux system, but the containers can integrate with the
hosts seamlessly when necessary. Being based on Linux allows containers to use all the advantages that come with the open source development model of rapid innovation.
Because each container uses a dedicated operating system, you can deploy applications that require conflicting software dependencies on the same host. Each container carries its own dependent software and manages its own interfaces, such as networking and file systems, so applications never need to compete for those assets.
[id="deployment-scaling-benefits_{context}"]
== Deployment benefits
If you employ rolling upgrades between major releases of your application, you can continuously improve your applications without downtime and still maintain compatibility with the current release.
You can also deploy and test a new version of an application alongside the existing version. Deploy the new application version in addition to the current version. If the container passes your tests, simply deploy more new containers and remove the old ones. 
Since all the software dependencies for an application are resolved within the container itself, you can use a generic operating system on each host in your data center. You do not need to configure a specific operating system for each application host. When your data center needs more capacity, you can deploy another generic host system.


@@ -1,15 +0,0 @@
// Module included in the following assemblies:
//
// * getting_started/accessing-your-services.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-accessing-your-cluster_{context}"]
= Accessing your cluster
Use the following steps to access your {product-title} cluster.
.Procedure
. From {cluster-manager-url}, click on the cluster you want to access.
. Click *Launch Console*.


@@ -1,19 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/dedicated-admin-role.adoc
[id="dedicated-admin-granting-permissions_{context}"]
= Granting permissions to users or groups
To grant permissions to other users or groups, you can add, or _bind_, a role to
them using the following commands:
[source,terminal]
----
$ oc adm policy add-role-to-user <role> <user_name>
----
[source,terminal]
----
$ oc adm policy add-role-to-group <role> <group_name>
----
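For example, to bind the default `admin` cluster role to a user within a single project, you might run the following; the user and project names are placeholders:
[source,terminal]
----
$ oc adm policy add-role-to-user admin alice -n my-project
----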


@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/cluster-admin-role.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-cluster-admin-enable_{context}"]
= Enabling the cluster-admin role for your cluster
The cluster-admin role must be enabled at the cluster level before it can be assigned to a user.
.Prerequisites
* You have opened a technical support case with Red Hat to request that `cluster-admin` be enabled for your cluster.
.Procedure
. In {cluster-manager}, select the cluster to which you want to assign cluster-admin privileges.
. From the *Actions* drop-down menu, select *Allow cluster-admin access*.


@@ -1,27 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/cluster-admin-role.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-cluster-admin-grant_{context}"]
= Granting the cluster-admin role to users
After enabling cluster-admin rights on your cluster, you can assign the role to users.
.Prerequisites
* You have cluster access with cluster owner permissions.
.Procedure
. In {cluster-manager}, select the cluster to which you want to assign cluster-admin privileges.
. Under the *Access Control* tab, locate the *Cluster Administrative Users* section and click *Add user*.
. Enter the appropriate user ID, select *cluster-admin* from the *Group* selection, and then click *Add user*.
+
[NOTE]
====
Cluster-admin user creation can take several minutes to complete.
====
+
[NOTE]
====
Existing dedicated-admin users cannot elevate their role to cluster-admin. A new user must be created with the cluster-admin role assigned.
====


@@ -1,145 +0,0 @@
// Module included in the following assemblies:
//
// * logging/dedicated-cluster-deploying.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-cluster-install-deploy_{context}"]
= Installing OpenShift Logging and OpenShift Elasticsearch Operators
You can use the {product-title} console to install OpenShift Logging by deploying instances of
the OpenShift Logging and OpenShift Elasticsearch Operators. The Red Hat OpenShift Logging Operator
creates and manages the components of the logging stack. The OpenShift Elasticsearch Operator
creates and manages the Elasticsearch cluster used by OpenShift Logging.
[NOTE]
====
The OpenShift Logging solution requires that you install both the
Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator. When you deploy an instance
of the Red Hat OpenShift Logging Operator, it also deploys an instance of the OpenShift Elasticsearch
Operator.
====
Your OpenShift Dedicated cluster includes 600 GiB of persistent storage that is
exclusively available for deploying Elasticsearch for OpenShift Logging.
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs
16Gi of memory for both memory requests and limits. Each Elasticsearch node can
operate with a lower memory setting, though this is not recommended for
production deployments.
.Procedure
. Install the OpenShift Elasticsearch Operator from the software catalog:
.. In the {product-title} web console, click *Ecosystem* -> *Software Catalog*.
.. Choose *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*.
.. On the *Install Operator* page, under *A specific namespace on the cluster* select *openshift-logging*.
Then, click *Install*.
. Install the Red Hat OpenShift Logging Operator from the software catalog:
.. In the {product-title} web console, click *Ecosystem* -> *Software Catalog*.
.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.
.. On the *Install Operator* page, under *A specific namespace on the cluster* select *openshift-logging*.
Then, click *Install*.
. Verify the operator installations:
.. Switch to the *Ecosystem* -> *Installed Operators* page.
.. Ensure that *Red Hat OpenShift Logging* and *OpenShift Elasticsearch* Operators are listed in the
*openshift-logging* project with a *Status* of *InstallSucceeded*.
+
[NOTE]
====
During installation an operator might display a *Failed* status. If the operator then installs with an *InstallSucceeded* message,
you can safely ignore the *Failed* message.
====
+
If either operator does not appear as installed, troubleshoot further:
+
* Switch to the *Ecosystem* -> *Installed Operators* page and inspect
the *Status* column for any errors or failures.
* Switch to the *Workloads* -> *Pods* page and check the logs in each pod in the
`openshift-logging` project that is reporting issues.
. Create and deploy an OpenShift Logging instance:
.. Switch to the *Ecosystem* -> *Installed Operators* page.
.. Click the installed *Red Hat OpenShift Logging* Operator.
.. Under the *Details* tab, in the *Provided APIs* section, in the
*Cluster Logging* box, click *Create Instance*.
. Select *YAML View* and paste the following YAML definition into the window
that displays.
+
.Cluster Logging custom resource (CR)
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
managementState: "Managed"
logStore:
type: "elasticsearch"
elasticsearch:
nodeCount: 3
storage:
storageClassName: gp2
size: "200Gi"
redundancyPolicy: "SingleRedundancy"
nodeSelector:
node-role.kubernetes.io/worker: ""
resources:
limits:
memory: "16Gi"
requests:
memory: "16Gi"
visualization:
type: "kibana"
kibana:
replicas: 1
nodeSelector:
node-role.kubernetes.io/worker: ""
collection:
logs:
type: "fluentd"
fluentd: {}
nodeSelector:
node-role.kubernetes.io/worker: ""
----
.. Click *Create* to deploy the logging instance, which creates the Cluster
Logging and Elasticsearch custom resources.
. Verify that the pods for the OpenShift Logging instance deployed:
.. Switch to the *Workloads* -> *Pods* page.
.. Select the *openshift-logging* project.
+
You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+
* cluster-logging-operator-cb795f8dc-xkckc
* elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz
* elasticsearch-cdm-b3nqzchd-2-6657f4-wtprv
* elasticsearch-cdm-b3nqzchd-3-588c65-clg7g
* fluentd-2c7dg
* fluentd-9z7kk
* fluentd-br7r2
* fluentd-fn2sb
* fluentd-pb2f8
* fluentd-zqgqx
* kibana-7fb4fd4cc9-bvt4p
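+
If you prefer to verify from the command line, an equivalent check (assuming the default `openshift-logging` namespace used in this procedure) is:
+
[source,terminal]
----
$ oc get pods -n openshift-logging
----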
. Access the OpenShift Logging interface, *Kibana*, from the *Observe* ->
*Logging* page of the {product-title} web console.


@@ -1,51 +0,0 @@
// Module included in the following assemblies:
//
// * welcome/accessing-your-services.adoc
[id="dedicated-configuring-your-application-routes_{context}"]
= Configuring your application routes
When your cluster is provisioned, an Elastic Load Balancing (ELB) load balancer is created
to route application traffic into the cluster. The domain for your ELB is configured to route application traffic via
`http(s)://*.<cluster-id>.<shard-id>.p1.openshiftapps.com`. The `<shard-id>` is a
random four-character string assigned to your cluster at creation time.
If you want to use custom domain names for your application routes, {product-title} supports
CNAME records in your DNS configuration that point to
`elb.apps.<cluster-id>.<shard-id>.p1.openshiftapps.com`. While `elb` is recommended as a
reminder for where this record is pointing, you can use any string for this
value. You can create these CNAME records for each custom route you have, or you
can create a wildcard CNAME record. For example:
[source,text]
----
*.openshift.example.com CNAME elb.apps.my-example.a1b2.p1.openshiftapps.com
----
This allows you to create routes like *_app1.openshift.example.com_* and
*_app2.openshift.example.com_* without having to update your DNS every time.
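As an illustration, after the wildcard CNAME record is in place you could expose a service on one of those hostnames by creating an edge-terminated route; the route, service, and hostname shown here are placeholders:
[source,terminal]
----
$ oc create route edge app1 --service=app1 --hostname=app1.openshift.example.com
----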
////
Customers with configured VPC peering or VPN connections have the option of
requesting a second ELB, so that application routes can be configured as
internal-only or externally available. The domain for this ELB will be identical
to the first, with a different `<shard-id>` value. By default, application
routes are handled by the internal-only router. To expose an application or
service externally, you must create a new route with a specific label,
`route=external`.
To expose a new route for an existing service, apply the label `route=external`
and define a hostname that contains the secondary, public router shard ID:
----
$ oc expose service <service-name> -l route=external --name=<custom-route-name> --hostname=<custom-hostname>.<shard-id>.<cluster-id>.openshiftapps.com
----
Alternatively, you can use a custom domain:
----
$ oc expose service <service-name> -l route=external --name=<custom-route-name> --hostname=<custom-domain>
----
////


@@ -1,38 +0,0 @@
// Module included in the following assemblies:
//
// * getting_started/accessing-your-services.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-creating-your-cluster_{context}"]
= Creating your cluster
Use the following steps to create your {product-title} cluster.
.Procedure
. Log in to {cluster-manager-url}.
. Select *Create Cluster* -> *Red Hat OpenShift Dedicated*.
. Enter your *Cluster name* and the number of *Compute nodes*, and select an *AWS Region*.
. Select your *Node Type*. The number and types of nodes available to you depend
upon your {product-title} subscription.
. If you want to configure your networking IP ranges under *Advanced Options*, the
following are the default ranges available to use:
.. Node CIDR: 10.0.0.0/16
.. Service CIDR: 172.30.0.0/16
.. Pod CIDR: 10.128.0.0/14
. Add your Identity provider by clicking the *Add OAuth Configuration* link or using the *Access Control* tab.
. Add a _Dedicated Admin_ user by clicking the *Access Control* tab, then *Add User*.
. Input the user's name, then click *Add*.
The *Overview* tab has a *Status* indicator under the *Details* heading that shows when your cluster is *Ready* for use.


@@ -1,24 +0,0 @@
// Module included in the following assemblies:
//
// * rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-private-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-enable-private-cluster-existing"]
= Enabling private cluster on an existing cluster
You can enable the private cluster setting after a cluster has been created:
.Prerequisites
* AWS VPC peering, a VPN, AWS Direct Connect, or AWS Transit Gateway has been configured to allow private access.
.Procedure
. Access your cluster in {cluster-manager}.
. Navigate to the *Networking* tab.
. Select *Make API private* under *Master API endpoint* and click *Change settings*.
+
[NOTE]
====
Transitioning your cluster between private and public can take several minutes to complete.
====


@@ -1,25 +0,0 @@
// Module included in the following assemblies:
//
// * rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-private-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-enable-private-cluster-new"]
= Enabling private cluster on a new cluster
You can enable private cluster settings when creating a new cluster:
.Prerequisites
* AWS VPC peering, a VPN, AWS Direct Connect, or AWS Transit Gateway has been configured to allow private access.
.Procedure
. In {cluster-manager-first}, click *Create cluster* and select *{product-title}*.
. Configure your cluster details, then select *Advanced* in the Networking section.
. Determine the CIDR requirements for your network and complete the required fields.
+
[IMPORTANT]
====
CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
====
. Under *Cluster Privacy*, select *Private*.


@@ -1,20 +0,0 @@
// Module included in the following assemblies:
//
// * rosa_cluster_admin/cloud_infrastructure_access/dedicated-aws-private-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-enable-public-cluster"]
= Enabling public cluster on a private cluster
You can change a private cluster to be publicly accessible:
.Procedure
. Access your cluster in {cluster-manager-first}.
. Navigate to the *Networking* tab.
. Deselect *Make API private* under *Master API endpoint* and click *Change settings*.
+
[NOTE]
====
Transitioning your cluster between private and public can take several minutes to complete.
====


@@ -1,148 +0,0 @@
// Module included in the following assemblies:
//
// * welcome/accessing-your-services.adoc
[id="dedicated-exposing-TCP-services_{context}"]
= Exposing TCP services
{product-title} routes expose applications by proxying traffic through
HTTP/HTTPS(SNI)/TLS(SNI) to pods and services. A
link:https://kubernetes.io/docs/concepts/services-networking/#loadbalancer[LoadBalancer]
service creates an Elastic Load Balancing (ELB) load balancer for your {product-title}
cluster, enabling direct TCP access to applications exposed by your LoadBalancer
service.
[NOTE]
====
LoadBalancer services require an additional purchase. Contact your sales team if
you are interested in using LoadBalancer services for your {product-title}
cluster.
====
== Checking your LoadBalancer quota
By purchasing LoadBalancer services, you are provided with a quota of LoadBalancers available for your {product-title} cluster. You can check this quota by running the following command:
[source,terminal]
----
$ oc describe clusterresourcequota loadbalancer-quota
----
.Example output
[source,text]
----
Name:       loadbalancer-quota
Labels:     <none>
...
Resource                Used  Hard
--------                ----  ----
services.loadbalancers  0     4
----
== Exposing a TCP service
You can expose your applications over an external LoadBalancer service, enabling
access over the public internet.
[source,terminal]
----
$ oc expose dc httpd-example --type=LoadBalancer --name=lb-service
----
.Example output
[source,text]
----
service/lb-service created
----
== Creating an internal-only TCP service
You can alternatively expose your applications internally only, enabling access
only through AWS VPC Peering or a VPN connection.
[source,terminal]
----
$ oc expose dc httpd-example --type=LoadBalancer --name=internal-lb --dry-run -o yaml | awk '1;/metadata:/{ print " annotations:\n service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"" }' | oc create -f -
----
.Example output
[source,terminal]
----
service/internal-lb created
----
== Enabling LoadBalancer access logs
You can optionally create an S3 bucket in your own AWS account and configure the LoadBalancer service to send access logs to this S3 bucket at predefined intervals.
=== Prerequisites
You must first create the S3 bucket in your own AWS account, in the same AWS region where your {product-title} cluster is deployed. This S3 bucket can be configured with all public access blocked, including system permissions. After your S3 bucket is created, you must attach a policy to your bucket as https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy[outlined by AWS].
=== Configuring the LoadBalancer service
Add the following annotations to your service YAML definition before creating the object in your cluster.
[source,yaml]
----
metadata:
  name: my-service
  annotations:
    # Specifies whether access logs are enabled for the load balancer
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    # The interval for publishing the access logs. You can specify an interval of either 5 or 60 (minutes).
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
    # The name of the Amazon S3 bucket where the access logs are stored
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-bucket"
    # The logical hierarchy you created for your Amazon S3 bucket, for example `my-bucket-prefix/prod`
    # This must match the prefix specified in the S3 policy
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "my-bucket-prefix/prod"
----
=== Creating the LoadBalancer service
After the annotations are saved in a YAML file, you can create the service from the command line:
[source,terminal]
----
$ oc create -f loadbalancer.yaml
----
.Example output
[source,text]
----
service/my-service created
----
== Using your TCP service
After your LoadBalancer service is created, you can access your service by using
the URL provided to you by {product-title}. The `LoadBalancer Ingress` value is
a URL unique to your service that remains static as long as the service is not
deleted. If you prefer to use a custom domain, you can create a CNAME DNS record
for this URL.
[source,terminal]
----
$ oc describe svc lb-service
----
.Example output
[source,text]
----
Name: lb-service
Namespace: default
Labels: app=httpd-example
Annotations: <none>
Selector: name=httpd-example
Type: LoadBalancer
IP: 10.120.182.252
LoadBalancer Ingress: a5387ba36201e11e9ba901267fd7abb0-1406434805.us-east-1.elb.amazonaws.com
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31409/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
----
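If you prefer a custom domain for this endpoint, a CNAME record similar to the following is sufficient; `tcp.example.com` is a placeholder for your own hostname and the target is the `LoadBalancer Ingress` value from the example output above:
[source,text]
----
tcp.example.com CNAME a5387ba36201e11e9ba901267fd7abb0-1406434805.us-east-1.elb.amazonaws.com
----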


@@ -1,88 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/dedicated-admin-role.adoc
[id="dedicated-admin-logging-in-verifying-permissions_{context}"]
= Logging in and verifying permissions
You can log in as an {product-title} cluster administrator by using the web console
or CLI, just as you would if you were an application developer.
When you log in to the web console, all user-created projects across the cluster
are visible from the main *Projects* page.
Use the standard `oc login` command to log in with the CLI:
[source,terminal]
----
$ oc login <your_instance_url>
----
All projects are visible by running the following command:
[source,terminal]
----
$ oc get projects
----
When your account has the `dedicated-admins-cluster` cluster role bound to it,
you are automatically bound to the `dedicated-admins-project` role for any new
projects that are created by users in the cluster.
To verify if your account has administrator privileges, run the following
command against a user-created project to view its default role bindings. If you
are a cluster administrator, you will see your account listed under subjects for
the `dedicated-admins-project-0` and `dedicated-admins-project-1` role bindings
for the project:
[source,terminal]
----
$ oc describe rolebinding.rbac -n <project_name>
----
.Example output
[source,text]
----
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User fred@example.com <1>
Name: dedicated-admins-project
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: dedicated-admins-project
Subjects:
Kind Name Namespace
---- ---- ---------
User alice@example.com <2>
User bob@example.com <2>
...
----
<1> The `fred@example.com` user is a normal, project-scoped administrator for
this project.
<2> The `alice@example.com` and `bob@example.com` users are cluster
administrators.
To view details on your increased permissions, and the sets of
verbs and resources associated with the `dedicated-admins-cluster` and
`dedicated-admins-project` roles, run the following:
[source,terminal]
----
$ oc describe clusterrole.rbac dedicated-admins-cluster
----
[source,terminal]
----
$ oc describe clusterrole.rbac dedicated-admins-project
----


@@ -1,20 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/dedicated-admin-role.adoc
[id="dedicated-managing-dedicated-administrators_{context}"]
= Managing {product-title} administrators
Administrator roles are managed using a `dedicated-admins` group on the cluster. Existing members of this group can edit membership via {cluster-manager-url}.
[id="dedicated-administrators-adding-user_{context}"]
== Adding a user
. Navigate to the *Cluster Details* page and select the *Access Control* tab.
. Click *Add user* (first user only).
. Enter the user name and select the group (*dedicated-admins*).
. Click *Add*.
[id="dedicated-administrators-removing-user_{context}"]
== Removing a user
. Navigate to the *Cluster Details* page and select the *Users* tab.
. Click the *X* to the right of the user and group combination that you want to delete.


@@ -1,10 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/dedicated-admin-role.adoc
[id="dedicated-managing-quotas-and-limit-ranges_{context}"]
= Managing quotas and limit ranges
As an administrator, you can view, create, and modify quotas and limit
ranges in other projects. This allows you to better constrain how compute
resources and objects are consumed by users across the cluster.
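For example, you can create a quota in a user's project directly from the CLI and then review the quotas and limit ranges that apply there; the quota name, limits, and project name are illustrative only:
[source,terminal]
----
$ oc create quota example-quota --hard=pods=10,requests.cpu=4,requests.memory=8Gi -n <project_name>
$ oc get resourcequota,limitrange -n <project_name>
----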


@@ -1,71 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/dedicated-admin-role.adoc
[id="dedicated-managing-service-accounts_{context}"]
= Managing service accounts
Service accounts are API objects that exist within each project. To manage
service accounts, you can use the `oc` command with the `sa` or `serviceaccount`
object type or use the web console.
The *dedicated-admin* service creates the *dedicated-admins* group. This group is
granted the roles at the cluster or individual project level. Users can be
assigned to this group and group membership defines who has OpenShift Dedicated
administrator access. However, by design, service accounts cannot be added to
regular groups.
Instead, the dedicated-admin service creates a special project for this purpose
named *dedicated-admin*. The service account group for this project is granted
OpenShift Dedicated *admin* roles, granting OpenShift Dedicated administrator
access to all service accounts within the *dedicated-admin* project. These service
accounts can then be used to perform any actions that require OpenShift
Dedicated administrator access.
Users that are members of the *dedicated-admins* group, and thus have been granted
the *dedicated-admin* role, have `edit` access to the *dedicated-admin* project. This
allows these users to manage the service accounts in this project and create new
ones as needed.
To get a list of existing service accounts in the current project, run:
[source,terminal]
----
$ oc get sa
----
.Example output
[source,text]
----
NAME       SECRETS   AGE
builder    2         2d
default    2         2d
deployer   2         2d
----
To create a new service account, run:
[source,terminal]
----
$ oc create sa <service-account-name>
----
As soon as a service account is created, two secrets are automatically added to
it:
* an API token
* credentials for the OpenShift Container Registry
These can be seen by describing the service account:
[source,terminal]
----
$ oc describe sa <service-account-name>
----
The system ensures that service accounts always have an API token and registry
credentials.
The generated API token and registry credentials do not expire, but they can be
revoked by deleting the secret. When the secret is deleted, a new one is
automatically generated to take its place.
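For example, to revoke a token you can list the secrets for the project and delete the token secret; the secret name shown here is a placeholder and differs in your cluster:
[source,terminal]
----
$ oc get secrets
$ oc delete secret <service_account_token_secret>
----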


@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * getting_started/scaling_your_cluster.adoc
[id="dedicated-scaling-your-cluster_{context}"]
= Scaling your cluster
To scale your {product-title} cluster:
. From {cluster-manager-url}, click the cluster you want to resize.
. Click *Actions*, then *Scale Cluster*.
. Select how many compute nodes are required, then click *Apply*.
Scaling occurs automatically. On the *Overview* tab, under the *Details* heading, the *Status* indicator shows that your cluster is *Ready* for use.


@@ -1,90 +0,0 @@
// Module included in the following assemblies:
//
// * storage/expanding-persistent-volume.adoc
:_mod-docs-content-type: PROCEDURE
[id="dedicated-storage-expanding-filesystem-pvc_{context}"]
= Expanding {product-title} Persistent Volume Claims (PVCs)
Expanding PVCs based on volume types that need file system resizing, such as AWS EBS, is a two-step process.
This process involves expanding volume objects in the cloud provider and
then expanding the file system on the actual node. These steps occur automatically
after the PVC object is edited and might require a pod restart to take effect.
Expanding the file system on the node only happens when a new pod is started
with the volume.
.Prerequisites
* The controlling StorageClass must have `allowVolumeExpansion` set
to `true`. This is the default configuration in {product-title}.
+
[IMPORTANT]
====
Decreasing the size of an Amazon Elastic Block Store (EBS) volume is not supported. However, you
can create a smaller volume and then migrate your data to it by using a
tool such as `oc rsync`. After modifying a volume, you must wait at least six hours before
making additional modifications to the same volume.
====
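+
If you want to confirm the setting, a minimal check from the CLI looks like the following; `gp2` is only an example storage class name:
+
[source,terminal]
----
$ oc get storageclass gp2 -o jsonpath='{.allowVolumeExpansion}'
----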
.Procedure
. Edit the PVC and request a new size by editing the `spec.resources.requests.storage` value. The following `oc patch` command changes the PVC's size:
+
[source,terminal]
----
$ oc patch pvc <pvc_name> -p '{"spec": {"resources": {"requests": {"storage": "8Gi"}}}}'
----
. After the cloud provider object has finished resizing, the PVC condition might be set to `FileSystemResizePending`. Use the following command to check the condition:
+
[source,terminal]
----
$ oc describe pvc mysql
Name: mysql
Namespace: my-project
StorageClass: gp2
Status: Bound
Volume: pvc-5fa3feb4-7115-4735-8652-8ebcfec91bb9
Labels: app=cakephp-mysql-persistent
template=cakephp-mysql-persistent
template.openshift.io/template-instance-owner=6c7f7c56-1037-4105-8c08-55a6261c39ca
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
volume.kubernetes.io/selected-node: ip-10-0-128-221.us-east-2.compute.internal
volume.kubernetes.io/storage-resizer: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi <1>
Access Modes: RWO
VolumeMode: Filesystem
Conditions: <2>
Type Status LastProbeTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
FileSystemResizePending True <Timestamp> <Timestamp> Waiting for user to (re-)start a Pod to
finish file system resize of volume on node.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 36m persistentvolume-controller waiting for first consumer to be created before binding
Normal ProvisioningSucceeded 36m persistentvolume-controller Successfully provisioned volume
pvc-5fa3feb4-7115-4735-8652-8ebcfec91bb9 using
kubernetes.io/aws-ebs
Mounted By: mysql-1-q4nz7 <3>
----
<1> The current capacity of the PVC.
<2> Any relevant conditions are displayed here.
<3> The pod that is currently mounting this volume.
. If the output of the previous command included a message to restart the pod, delete the mounting pod that it specified:
+
[source,terminal]
----
$ oc delete pod mysql-1-q4nz7
----
. After the pod is running, the newly requested size is available and the
`FileSystemResizePending` condition is removed from the PVC.


@@ -1,268 +0,0 @@
// Module included in the following assemblies:
//
// * osd_install_access_delete_cluster/creating-a-gcp-cluster.adoc
// * I do not believe this is in use, confirm with Mark Letalien.
:_mod-docs-content-type: PROCEDURE
[id="osd-create-gcp-cluster-ccs1_{context}"]
= Creating a cluster on {gcp-short} with CCS
.Procedure
. Log in to {cluster-manager-url} and click *Create cluster*.
. On the *Create an OpenShift cluster* page, select *Create cluster* in the *Red Hat OpenShift Dedicated* row.
. Under *Billing model*, configure the subscription type and infrastructure type:
.. Select a subscription type. For information about {product-title} subscription options, see link:https://access.redhat.com/documentation/en-us/openshift_cluster_manager/1-latest/html-single/managing_clusters/index#assembly-cluster-subscriptions[Cluster subscriptions and registration] in the {cluster-manager} documentation.
+
[NOTE]
====
The subscription types that are available to you depend on your {product-title} subscriptions and resource quotas.
Red Hat recommends deploying your cluster with the On-Demand subscription type purchased through the {GCP} Marketplace. This option provides flexible, consumption-based billing, makes consuming additional capacity frictionless, and requires no Red Hat intervention.
For more information, contact your sales representative or Red Hat support.
====
+
.. Select the *Customer Cloud Subscription* infrastructure type to deploy {product-title} in an existing cloud provider account that you own.
.. Click *Next*.
. Select *Run on {gcp-full}*.
. Select *Service Account* as the Authentication type.
+
[NOTE]
====
Red Hat recommends using Workload Identity Federation as the Authentication type. For more information, see xref:../osd_gcp_clusters/creating-a-gcp-cluster-with-workload-identity-federation.adoc#osd-creating-a-cluster-on-gcp-with-workload-identity-federation[Creating a cluster on {gcp-short} with Workload Identity Federation authentication].
====
+
. Review and complete the listed *Prerequisites*.
. Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
. Provide your {gcp-short} service account private key in JSON format. You can either click *Browse* to locate and attach a JSON file or add the details in the *Service account JSON* field.
. Click *Next* to validate your cloud provider account and go to the *Cluster details* page.
. On the *Cluster details* page, provide a name for your cluster and specify the cluster details:
.. Add a *Cluster name*.
.. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on `openshiftapps.com`. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
+
To customize the subdomain, select the *Create custom domain prefix* checkbox, and enter your domain prefix name in the *Domain prefix* field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
.. Select a cluster version from the *Version* drop-down menu.
+
[IMPORTANT]
====
Clusters configured with Private Service Connect (PSC) are supported only on OpenShift Dedicated version 4.17 and later. For more information regarding PSC, see _Private Service Overview_ in the _Additional resources_ section.
====
+
.. Select a cloud provider region from the *Region* drop-down menu.
.. Select a *Single zone* or *Multi-zone* configuration.
+
.. Optional: Select *Enable Secure Boot for Shielded VMs* to use Shielded VMs when installing your cluster. For more information, see link:https://cloud.google.com/security/products/shielded-vm[Shielded VMs].
+
[IMPORTANT]
====
To successfully create a cluster, you must select *Enable Secure Boot support for Shielded VMs* if your organization has the policy constraint `constraints/compute.requireShieldedVm` enabled. For more information regarding {gcp-short} organizational policy constraints, see link:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints[Organization policy constraints].
====
+
.. Leave *Enable user workload monitoring* selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
.. Optional: Expand *Advanced Encryption* to make changes to encryption settings.
... Accept the default setting *Use default KMS Keys* to use your default AWS KMS key, or select *Use Custom KMS keys* to use a custom KMS key.
.... With *Use Custom KMS keys* selected, enter the AWS Key Management Service (KMS) custom key Amazon Resource Name (ARN) in the *Key ARN* field.
The key is used for encrypting all control plane, infrastructure, worker node root volumes, and persistent volumes in your cluster.
+
... Select *Use custom KMS keys* to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting *Use default KMS Keys*.
+
[IMPORTANT]
====
To use custom KMS keys, the IAM service account `osd-ccs-admin` must be granted the *Cloud KMS CryptoKey Encrypter/Decrypter* role. For more information about granting roles on a resource, see link:https://cloud.google.com/kms/docs/iam#granting_roles_on_a_resource[Granting roles on a resource].
====
+
With *Use Custom KMS keys* selected:
.... Select a key ring location from the *Key ring location* drop-down menu.
.... Select a key ring from the *Key ring* drop-down menu.
.... Select a key name from the *Key name* drop-down menu.
.... Provide the *KMS Service Account*.
+
... Optional: Select *Enable FIPS cryptography* if you require your cluster to be FIPS validated.
+
[NOTE]
====
If *Enable FIPS cryptography* is selected, *Enable additional etcd encryption* is enabled by default and cannot be disabled. You can select *Enable additional etcd encryption* without selecting *Enable FIPS cryptography*.
====
+
... Optional: Select *Enable additional etcd encryption* if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in {product-title} clusters by default.
+
[NOTE]
====
By enabling additional etcd encryption, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
====
+
.. Click *Next*.
. On the *Default machine pool* page, select a *Compute node instance type* from the drop-down menu.
. Optional: Select the *Enable autoscaling* checkbox to enable autoscaling.
.. Click *Edit cluster autoscaling settings* to make changes to the autoscaling settings.
.. Once you have made your desired changes, click *Close*.
.. Select a minimum and maximum node count. You can select the node counts by using the plus and minus signs or by entering the desired node count in the number input field.
. Select a *Compute node count* from the drop-down menu.
+
[NOTE]
====
If you are using multiple availability zones, the compute node count is per zone. After your cluster is created, you can change the number of compute nodes in your cluster, but you cannot change the compute node instance type in a machine pool. The number and types of nodes available to you depend on your {product-title} subscription.
====
+
. Optional: Expand *Add node labels* to add labels to your nodes. Click *Add additional label* to add an additional node label and select *Next*.
+
[IMPORTANT]
====
This step refers to labels within Kubernetes, not {gcp-full}. For more information regarding Kubernetes labels, see link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and Selectors].
====
+
. On the *Network configuration* page, select *Public* or *Private* to use either public or private API endpoints and application routes for your cluster.
If you select *Private* and have selected {product-title} version 4.17 or later as your cluster version, *Use Private Service Connect* is selected by default. Private Service Connect (PSC) is {gcp-full}'s security-enhanced networking feature. You can disable PSC by clicking the *Use Private Service Connect* checkbox.
+
[NOTE]
====
Red Hat recommends using Private Service Connect when deploying a private {product-title} cluster on {gcp-full}. Private Service Connect ensures secured, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE), and private {product-title} clusters.
====
[IMPORTANT]
====
If you are using private API endpoints, you cannot access your cluster until you update the network settings in your cloud provider account.
====
+
. Optional: To install the cluster in an existing {gcp-short} Virtual Private Cloud (VPC):
.. Select *Install into an existing VPC*.
+
[IMPORTANT]
====
Private Service Connect is supported only with *Install into an existing VPC*.
====
+
.. If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select *Configure a cluster-wide proxy*.
+
[IMPORTANT]
====
To configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the _Additional resources_ section for more information.
====
. Accept the default application ingress settings, or to create your own custom settings, select *Custom Settings*.
.. Optional: Provide a route selector.
.. Optional: Provide excluded namespaces.
.. Select a namespace ownership policy.
.. Select a wildcard policy.
+
For more information about custom application ingress settings, click on the information icon provided for each setting.
. Click *Next*.
. Optional: To install the cluster into a {gcp-short} Shared VPC:
+
[IMPORTANT]
====
To install a cluster into a Shared VPC, you must use {product-title} version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their {gcp-full} console. For more information, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#set-up-shared-vpc[Enable a host project].
====
.. Select *Install into {gcp-short} Shared VPC*.
.. Specify the *Host project ID*. If the specified host project ID is incorrect, cluster creation fails.
+
[IMPORTANT]
====
After you complete the steps within the cluster configuration wizard and click *Create Cluster*, the cluster goes into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically-generated service account the following roles: *Compute Network Administrator*, *Compute Security Administrator*, *Project IAM Admin*, and *DNS Administrator*.
The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails.
For information about Shared VPC permissions, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#migs-service-accounts[Provision Shared VPC].
====
+
. If you opted to install the cluster in an existing {gcp-short} VPC, provide your *Virtual Private Cloud (VPC) subnet settings* and select *Next*.
You must have created the Cloud network address translation (NAT) and a Cloud router. See the "Additional resources" section for information about Cloud NATs and Google VPCs.
+
[NOTE]
====
If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project.
====
. If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the *Cluster-wide proxy* page:
+
.. Enter a value in at least one of the following fields:
** Specify a valid *HTTP proxy URL*.
** Specify a valid *HTTPS proxy URL*.
** In the *Additional trust bundle* field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the {op-system-first} trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the `http-proxy` and `https-proxy` arguments.
+
.. Click *Next*.
+
For more information about configuring a proxy with {product-title}, see _Configuring a cluster-wide proxy_.
. In the *CIDR ranges* dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
+
[NOTE]
====
If you are installing into a VPC, the *Machine CIDR* range must match the VPC subnets.
====
+
[IMPORTANT]
====
CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
====
. On the *Cluster update strategy* page, configure your update preferences:
.. Choose a cluster update method:
** Select *Individual updates* if you want to schedule each update individually. This is the default option.
** Select *Recurring updates* to update your cluster on your preferred day and start time, when updates are available.
+
[NOTE]
====
You can review the end-of-life dates in the update lifecycle documentation for {product-title}. For more information, see link:https://access.redhat.com/documentation/en-us/openshift_dedicated/4/html/introduction_to_openshift_dedicated/policies-and-service-definition#osd-life-cycle[OpenShift Dedicated update life cycle].
====
+
.. Provide administrator approval based on your cluster update method:
** Individual updates: If you select an update version that requires approval, provide an administrator's acknowledgment and click *Approve and continue*.
** Recurring updates: If you selected recurring updates for your cluster, provide an administrator's acknowledgment and click *Approve and continue*. {cluster-manager} does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment.
+
.. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
.. Optional: You can set a grace period for *Node draining* during cluster upgrades. A *1 hour* grace period is set by default.
.. Click *Next*.
+
[NOTE]
====
In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see link:https://access.redhat.com/security/updates/classification[Understanding Red Hat security ratings].
====
. Review the summary of your selections and click *Create cluster* to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
+
. Optional: On the *Overview* tab, you can enable the delete protection feature by selecting *Enable*, which is located directly under *Delete Protection: Disabled*. This prevents your cluster from being deleted. To disable delete protection, select *Disable*.
By default, clusters are created with the delete protection feature disabled.
+
[NOTE]
====
If you delete a cluster that was installed into a {gcp-short} Shared VPC, inform the VPC owner of the host project to remove the IAM policy roles granted to the service account that was referenced during cluster creation.
====
.Verification
* You can monitor the progress of the installation in the *Overview* page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the *Status* in the *Details* section of the page is listed as *Ready*.


@@ -1,238 +0,0 @@
// Module included in the following assemblies:
//
// * osd_install_access_delete_cluster/creating-a-gcp-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="osd-create-cluster-gcp-account_{context}"]
= Creating a cluster on {gcp-short} with {gcp-full} Marketplace
When creating an {product-title} (OSD) cluster on {gcp-full} through the {cluster-manager-first} {hybrid-console-second}, customers can select {gcp-full} Marketplace as their preferred billing model. This billing model allows Red Hat customers to take advantage of their link:https://cloud.google.com/docs/cuds[Google Committed Use Discounts (CUD)] towards {product-title} purchased through the {gcp-full} Marketplace. Additionally, OSD pricing is consumption-based and customers are billed directly through their {gcp-full} account.
.Procedure
. Log in to {cluster-manager-url} and click *Create cluster*.
. In the *Cloud* tab, click *Create cluster* in the *Red Hat OpenShift Dedicated* row.
. Under *Billing model*, configure the subscription type and infrastructure type:
.. Select the *On-Demand* subscription type.
.. From the drop-down menu, select *{gcp-full} Marketplace*.
.. Select the *Customer Cloud Subscription* infrastructure type.
.. Click *Next*.
. On the *Cloud provider* page, select *Run on {gcp-full}*.
. Select either *Service account* or *Workload Identity Federation* as the Authentication type.
+
[NOTE]
====
For more information about authentication types, click the question icon located next to *Authentication type*.
====
+
. Review and complete the listed *Prerequisites*.
. Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
. If you selected *Service account* as the Authentication type, provide your {gcp-short} service account private key in JSON format. You can either click *Browse* to locate and attach a JSON file or add the details in the *Service account JSON* field.
. If you selected *Workload Identity Federation* as the Authentication type, you must first create a new WIF configuration.
Open a terminal window and run the following `ocm` CLI command.
+
[source,terminal]
----
$ ocm gcp create wif-config --name <wif_name> \ <1>
--project <gcp_project_id> <2>
----
<1> Replace `<wif_name>` with the name of your WIF configuration.
<2> Replace `<gcp_project_id>` with the ID of the {GCP} project where the WIF configuration will be implemented.
+
. Select a configured WIF configuration from the *WIF configuration* drop-down list. If you want to select the WIF configuration you created in the last step, click *Refresh* first.
. Click *Next* to validate your cloud provider account and go to the *Cluster details* page.
. On the *Cluster details* page, provide a name for your cluster and specify the cluster details:
.. Add a *Cluster name*.
.. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on `openshiftapps.com`. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
+
To customize the subdomain, select the *Create custom domain prefix* checkbox, and enter your domain prefix name in the *Domain prefix* field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
.. Select a cluster version from the *Version* drop-down menu.
+
[NOTE]
====
Workload Identity Federation (WIF) is only supported on {product-title} version 4.17 and later.
====
+
.. Select a cloud provider region from the *Region* drop-down menu.
.. Select a *Single zone* or *Multi-zone* configuration.
+
.. Optional: Select *Enable Secure Boot for Shielded VMs* to use Shielded VMs when installing your cluster. For more information, see link:https://cloud.google.com/security/products/shielded-vm[Shielded VMs].
+
[IMPORTANT]
====
To successfully create a cluster, you must select *Enable Secure Boot support for Shielded VMs* if your organization has the policy constraint `constraints/compute.requireShieldedVm` enabled. For more information regarding {gcp-short} organizational policy constraints, see link:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints[Organization policy constraints].
====
+
.. Leave *Enable user workload monitoring* selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
. Optional: Expand *Advanced Encryption* to make changes to encryption settings.
.. Select *Use Custom KMS keys* to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting *Use default KMS Keys*.
+
[IMPORTANT]
====
To use custom KMS keys, the IAM service account `osd-ccs-admin` must be granted the *Cloud KMS CryptoKey Encrypter/Decrypter* role. For more information about granting roles on a resource, see link:https://cloud.google.com/kms/docs/iam#granting_roles_on_a_resource[Granting roles on a resource].
====
+
With *Use Custom KMS keys* selected:
... Select a key ring location from the *Key ring location* drop-down menu.
... Select a key ring from the *Key ring* drop-down menu.
... Select a key name from the *Key name* drop-down menu.
... Provide the *KMS Service Account*.
+
.. Optional: Select *Enable FIPS cryptography* if you require your cluster to be FIPS validated.
+
[NOTE]
====
If *Enable FIPS cryptography* is selected, *Enable additional etcd encryption* is enabled by default and cannot be disabled. You can select *Enable additional etcd encryption* without selecting *Enable FIPS cryptography*.
====
.. Optional: Select *Enable additional etcd encryption* if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in {product-title} clusters by default.
+
[NOTE]
====
By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
====
+
. Click *Next*.
. On the *Default machine pool* page, select a *Compute node instance type* and a *Compute node count*. The number and types of nodes that are available depend on your {product-title} subscription. If you are using multiple availability zones, the compute node count is per zone.
+
[NOTE]
====
After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a created machine pool. You can add machine pools after installation that use a customized instance type. The number and types of nodes available to you depend on your {product-title} subscription.
====
. Optional: Expand *Add node labels* to add labels to your nodes. Click *Add additional label* to add more node labels.
+
[IMPORTANT]
====
This step refers to labels within Kubernetes, not {gcp-full}. For more information regarding Kubernetes labels, see link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and Selectors].
====
+
. Click *Next*.
. In the *Cluster privacy* dialog, select *Public* or *Private* to use either public or private API endpoints and application routes for your cluster. If you select *Private*, *Use Private Service Connect* is selected by default. Private Service Connect (PSC) is {gcp-full}'s security-enhanced networking feature. You can disable PSC by clearing the *Use Private Service Connect* checkbox.
+
[NOTE]
====
Red Hat recommends using Private Service Connect when deploying a private {product-title} cluster on {gcp-full}. Private Service Connect ensures secure, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE), and private {product-title} clusters.
====
//Once PSC docs are live add link from note above.
+
. Optional: To install the cluster in an existing {gcp-short} Virtual Private Cloud (VPC):
.. Select *Install into an existing VPC*.
+
[IMPORTANT]
====
Private Service Connect is supported only with *Install into an existing VPC*.
====
+
.. If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select *Configure a cluster-wide proxy*.
+
[IMPORTANT]
====
To configure a cluster-wide proxy for your cluster, you must first create a Cloud network address translation (NAT) gateway and a Cloud Router, as illustrated in the example that follows this admonition. See the _Additional resources_ section for more information.
====
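+
As an illustration only, you can create these resources with `gcloud` commands similar to the following before returning to the wizard. The router name, NAT gateway name, VPC name, and region are placeholder assumptions:
+
[source,terminal]
----
# Illustrative example only; substitute your own names, VPC, and region
$ gcloud compute routers create <router_name> \
    --network <vpc_name> \
    --region <region>

$ gcloud compute routers nats create <nat_name> \
    --router <router_name> \
    --region <region> \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
----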
+
. Accept the default application ingress settings, or to create your own custom settings, select *Custom Settings*.
.. Optional: Provide a route selector.
.. Optional: Provide any excluded namespaces.
.. Select a namespace ownership policy.
.. Select a wildcard policy.
+
For more information about custom application ingress settings, click the information icon provided for each setting.
. Click *Next*.
. Optional: To install the cluster into a {gcp-short} Shared VPC:
+
[IMPORTANT]
====
To install a cluster into a Shared VPC, you must use {product-title} version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their {gcp-full} console. For more information, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#set-up-shared-vpc[Enable a host project].
====
.. Select *Install into {gcp-short} Shared VPC*.
.. Specify the *Host project ID*. If the specified host project ID is incorrect, cluster creation fails.
+
[IMPORTANT]
====
After you complete the steps in the cluster configuration wizard and click *Create Cluster*, the cluster goes into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically generated service account the following roles: *Compute Network Administrator*, *Compute Security Administrator*, *Project IAM Admin*, and *DNS Administrator*.
The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails.
For information about Shared VPC permissions, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#migs-service-accounts[Provision Shared VPC].
====
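+
For reference, the host project VPC owner can grant each of these roles to the generated service account with commands similar to the following sketch, which shows the *Compute Network Administrator* role (`roles/compute.networkAdmin`) as one example. The host project ID and service account email are placeholders; repeat the command for each required role:
+
[source,terminal]
----
# Illustrative example only; repeat for each required role
$ gcloud projects add-iam-policy-binding <host_project_id> \
    --member "serviceAccount:<generated_service_account_email>" \
    --role "roles/compute.networkAdmin"
----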
+
. If you opted to install the cluster in an existing {gcp-short} VPC, provide your *Virtual Private Cloud (VPC) subnet settings* and select *Next*.
+
[NOTE]
====
If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project.
====
+
. Click *Next*.
. If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the *Cluster-wide proxy* page:
+
.. Enter a value in at least one of the following fields:
** Specify a valid *HTTP proxy URL*.
** Specify a valid *HTTPS proxy URL*.
** In the *Additional trust bundle* field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the {op-system-first} trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the `http-proxy` and `https-proxy` arguments.
+
.. Click *Next*.
+
For more information about configuring a proxy with {product-title}, see _Configuring a cluster-wide proxy_.
+
. In the *CIDR ranges* dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
+
[IMPORTANT]
====
CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
If the cluster privacy is set to *Private*, you cannot access your cluster until you configure private connections in your cloud provider.
====
. On the *Cluster update strategy* page, configure your update preferences:
.. Choose a cluster update method:
** Select *Individual updates* if you want to schedule each update individually. This is the default option.
** Select *Recurring updates* to update your cluster on your preferred day and start time, when updates are available.
+
[NOTE]
====
You can review the end-of-life dates in the update lifecycle documentation for {product-title}. For more information, see link:https://access.redhat.com/documentation/en-us/openshift_dedicated/4/html/introduction_to_openshift_dedicated/policies-and-service-definition#osd-life-cycle[OpenShift Dedicated update life cycle].
====
+
.. Provide administrator approval based on your cluster update method:
** Individual updates: If you select an update version that requires approval, provide an administrator's acknowledgment and click *Approve and continue*.
** Recurring updates: If you selected recurring updates for your cluster, provide an administrator's acknowledgment and click *Approve and continue*. {cluster-manager} does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment.
+
.. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
.. Optional: You can set a grace period for *Node draining* during cluster upgrades. A *1 hour* grace period is set by default.
.. Click *Next*.
+
[NOTE]
====
In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see link:https://access.redhat.com/security/updates/classification[Understanding Red Hat security ratings].
====
. Review the summary of your selections and click *Create cluster* to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
+
. Optional: On the *Overview* tab, you can enable the delete protection feature by selecting *Enable*, which is located directly under *Delete Protection: Disabled*. This prevents your cluster from being deleted. To disable delete protection, select *Disable*.
By default, clusters are created with the delete protection feature disabled.
+
.Verification
* You can monitor the progress of the installation in the *Overview* page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the *Status* in the *Details* section of the page is listed as *Ready*.
ifeval::["{context}" == "osd-creating-a-cluster-on-aws"]
:!osd-on-aws:
endif::[]
ifeval::["{context}" == "osd-creating-a-cluster-on-gcp"]
:!osd-on-gcp:
endif::[]

View File

@@ -1,239 +0,0 @@
// Module included in the following assemblies:
//
// * osd_install_access_delete_cluster/creating-a-gcp-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="osd-create-cluster-rhm-gcp-account_{context}"]
= Creating a cluster on {gcp-short} with Red Hat Marketplace
When creating an {product-title} (OSD) cluster on {gcp-full} through the {cluster-manager-first} {hybrid-console-second}, customers can select Red Hat Marketplace as their preferred billing model.
OSD pricing is consumption-based and customers are billed directly through their Red Hat Marketplace account.
.Procedure
. Log in to {cluster-manager-url} and click *Create cluster*.
. In the *Cloud* tab, click *Create cluster* in the *Red Hat OpenShift Dedicated* row.
. Under *Billing model*, configure the subscription type and infrastructure type:
.. Select the *On-Demand* subscription type.
.. From the drop-down menu, select *Red Hat Marketplace*.
.. Click *Next*.
. On the *Cloud provider* page, select *Run on {gcp-full}*.
. Select either *Service account* or *Workload Identity Federation* as the Authentication type.
+
[NOTE]
====
For more information about authentication types, click the question icon located next to *Authentication type*.
====
+
. Review and complete the listed *Prerequisites*.
. Select the checkbox to acknowledge that you have read and completed all of the prerequisites.
. If you selected *Service account* as the Authentication type, provide your {gcp-short} service account private key in JSON format. You can either click *Browse* to locate and attach a JSON file or add the details in the *Service account JSON* field.
. If you selected *Workload Identity Federation* as the Authentication type, you must first create a new WIF configuration.
Open a terminal window and run the following `ocm` CLI command.
+
[source,terminal]
----
$ ocm gcp create wif-config --name <wif_name> \ <1>
--project <gcp_project_id> <2>
----
<1> Replace `<wif_name>` with the name of your WIF configuration.
<2> Replace `<gcp_project_id>` with the ID of the {GCP} project where the WIF configuration will be implemented.
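+
Optionally, you can confirm that the configuration was created before returning to the wizard. The following command is a sketch that assumes your version of the `ocm` CLI provides the `gcp list wif-configs` subcommand:
+
[source,terminal]
----
# Illustrative verification step; requires an ocm CLI version with WIF support
$ ocm gcp list wif-configs
----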
+
. Select a configured WIF configuration from the *WIF configuration* drop-down list. If you want to select the WIF configuration you created in the last step, click *Refresh* first.
.. Click *Next* to validate your cloud provider account and go to the *Cluster details* page.
. On the *Cluster details* page, provide a name for your cluster and specify the cluster details:
.. Add a *Cluster name*.
.. Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on `openshiftapps.com`. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
+
To customize the subdomain, select the *Create custom domain prefix* checkbox, and enter your domain prefix name in the *Domain prefix* field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
.. Select a cluster version from the *Version* drop-down menu.
+
[NOTE]
====
Workload Identity Federation (WIF) is only supported on {product-title} version 4.17 and later.
====
+
.. Select a cloud provider region from the *Region* drop-down menu.
.. Select a *Single zone* or *Multi-zone* configuration.
+
.. Optional: Select *Enable Secure Boot for Shielded VMs* to use Shielded VMs when installing your cluster. For more information, see link:https://cloud.google.com/security/products/shielded-vm[Shielded VMs].
+
[IMPORTANT]
====
To successfully create a cluster, you must select *Enable Secure Boot support for Shielded VMs* if your organization has the policy constraint `constraints/compute.requireShieldedVm` enabled. For more information regarding {gcp-short} organizational policy constraints, see link:https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints[Organization policy constraints].
====
+
.. Leave *Enable user workload monitoring* selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
. Optional: Expand *Advanced Encryption* to make changes to encryption settings.
.. Select *Use Custom KMS keys* to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting *Use default KMS Keys*.
+
[IMPORTANT]
====
To use custom KMS keys, the IAM service account `osd-ccs-admin` must be granted the *Cloud KMS CryptoKey Encrypter/Decrypter* role. For more information about granting roles on a resource, see link:https://cloud.google.com/kms/docs/iam#granting_roles_on_a_resource[Granting roles on a resource].
====
+
With *Use Custom KMS keys* selected:
... Select a key ring location from the *Key ring location* drop-down menu.
... Select a key ring from the *Key ring* drop-down menu.
... Select a key name from the *Key name* drop-down menu.
... Provide the *KMS Service Account*.
+
.. Optional: Select *Enable FIPS cryptography* if you require your cluster to be FIPS validated.
+
[NOTE]
====
If *Enable FIPS cryptography* is selected, *Enable additional etcd encryption* is enabled by default and cannot be disabled. You can select *Enable additional etcd encryption* without selecting *Enable FIPS cryptography*.
====
.. Optional: Select *Enable additional etcd encryption* if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in {product-title} clusters by default.
+
[NOTE]
====
By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case.
====
+
. Click *Next*.
. On the *Default machine pool* page, select a *Compute node instance type* and a *Compute node count*. The number and types of nodes that are available depend on your {product-title} subscription. If you are using multiple availability zones, the compute node count is per zone.
+
[NOTE]
====
After your cluster is created, you can change the number of compute nodes, but you cannot change the compute node instance type in a created machine pool. You can add machine pools after installation that use a customized instance type. The number and types of nodes available to you depend on your {product-title} subscription.
====
. Optional: Expand *Add node labels* to add labels to your nodes. Click *Add additional label* to add more node labels.
+
[IMPORTANT]
====
This step refers to labels within Kubernetes, not {gcp-full}. For more information regarding Kubernetes labels, see link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and Selectors].
====
+
. Click *Next*.
. In the *Cluster privacy* dialog, select *Public* or *Private* to use either public or private API endpoints and application routes for your cluster. If you select *Private*, *Use Private Service Connect* is selected by default. Private Service Connect (PSC) is {gcp-full}'s security-enhanced networking feature. You can disable PSC by clearing the *Use Private Service Connect* checkbox.
+
[NOTE]
====
Red Hat recommends using Private Service Connect when deploying a private {product-title} cluster on {gcp-full}. Private Service Connect ensures secure, private connectivity between Red Hat infrastructure, Site Reliability Engineering (SRE), and private {product-title} clusters.
====
//Once PSC docs are live add link from note above.
+
. Optional: To install the cluster in an existing {gcp-short} Virtual Private Cloud (VPC):
.. Select *Install into an existing VPC*.
+
[IMPORTANT]
====
Private Service Connect is supported only with *Install into an existing VPC*.
====
+
.. If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select *Configure a cluster-wide proxy*.
+
[IMPORTANT]
====
To configure a cluster-wide proxy for your cluster, you must first create a Cloud network address translation (NAT) gateway and a Cloud Router. See the _Additional resources_ section for more information.
====
+
. Accept the default application ingress settings, or to create your own custom settings, select *Custom Settings*.
.. Optional: Provide a route selector.
.. Optional: Provide any excluded namespaces.
.. Select a namespace ownership policy.
.. Select a wildcard policy.
+
For more information about custom application ingress settings, click the information icon provided for each setting.
. Click *Next*.
. Optional: To install the cluster into a {gcp-short} Shared VPC:
+
[IMPORTANT]
====
To install a cluster into a {gcp-short} Shared VPC, you must use {product-title} version 4.13.15 or later. Additionally, the VPC owner of the host project must enable a project as a host project in their {gcp-full} console. For more information, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#set-up-shared-vpc[Enable a host project].
====
.. Select *Install into {gcp-short} Shared VPC*.
.. Specify the *Host project ID*. If the specified host project ID is incorrect, cluster creation fails.
+
[IMPORTANT]
====
After you complete the steps in the cluster configuration wizard and click *Create Cluster*, the cluster goes into the "Installation Waiting" state. At this point, you must contact the VPC owner of the host project, who must assign the dynamically generated service account the following roles: *Compute Network Administrator*, *Compute Security Administrator*, *Project IAM Admin*, and *DNS Administrator*.
The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails.
For information about Shared VPC permissions, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#migs-service-accounts[Provision Shared VPC].
====
+
. If you opted to install the cluster into an existing VPC, provide your *Virtual Private Cloud (VPC) subnet settings* and select *Next*.
+
[NOTE]
====
If you are installing a cluster into a {gcp-short} Shared VPC, the VPC name and subnets are shared from the host project.
====
+
. Click *Next*.
. If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the *Cluster-wide proxy* page:
.. Enter a value in at least one of the following fields:
** Specify a valid *HTTP proxy URL*.
** Specify a valid *HTTPS proxy URL*.
** In the *Additional trust bundle* field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the {op-system-first} trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the `http-proxy` and `https-proxy` arguments.
+
.. Click *Next*.
+
For more information about configuring a proxy with {product-title}, see _Configuring a cluster-wide proxy_.
+
. In the *CIDR ranges* dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
+
[IMPORTANT]
====
CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding.
If the cluster privacy is set to *Private*, you cannot access your cluster until you configure private connections in your cloud provider.
====
. On the *Cluster update strategy* page, configure your update preferences:
.. Choose a cluster update method:
** Select *Individual updates* if you want to schedule each update individually. This is the default option.
** Select *Recurring updates* to update your cluster on your preferred day and start time, when updates are available.
+
[NOTE]
====
You can review the end-of-life dates in the update lifecycle documentation for {product-title}. For more information, see link:https://access.redhat.com/documentation/en-us/openshift_dedicated/4/html/introduction_to_openshift_dedicated/policies-and-service-definition#osd-life-cycle[OpenShift Dedicated update life cycle].
====
+
.. Provide administrator approval based on your cluster update method:
** Individual updates: If you select an update version that requires approval, provide an administrator's acknowledgment and click *Approve and continue*.
** Recurring updates: If you selected recurring updates for your cluster, provide an administrator's acknowledgment and click *Approve and continue*. {cluster-manager} does not start scheduled y-stream updates for minor versions without receiving an administrator's acknowledgment.
+
.. If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
.. Optional: You can set a grace period for *Node draining* during cluster upgrades. A *1 hour* grace period is set by default.
.. Click *Next*.
+
[NOTE]
====
In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see link:https://access.redhat.com/security/updates/classification[Understanding Red Hat security ratings].
====
. Review the summary of your selections and click *Create cluster* to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
+
. Optional: On the *Overview* tab, you can enable the delete protection feature by selecting *Enable*, which is located directly under *Delete Protection: Disabled*. This prevents your cluster from being deleted. To disable delete protection, select *Disable*.
By default, clusters are created with the delete protection feature disabled.
+
.Verification
* You can monitor the progress of the installation in the *Overview* page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the *Status* in the *Details* section of the page is listed as *Ready*.
ifeval::["{context}" == "osd-creating-a-cluster-on-aws"]
:!osd-on-aws:
endif::[]
ifeval::["{context}" == "osd-creating-a-cluster-on-gcp"]
:!osd-on-gcp:
endif::[]

View File

@@ -1,170 +0,0 @@
// Module included in the following assemblies:
//
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
// The OCP version of this procedure is persistent-storage-csi-efs-sts.
:_mod-docs-content-type: PROCEDURE
[id="efs-sts_{context}"]
= Configuring AWS EFS CSI Driver Operator with Security Token Service
This procedure explains how to configure the link:https://github.com/openshift/aws-efs-csi-driver-operator[AWS EFS CSI Driver Operator] (a Red Hat operator) with {product-title} on AWS Security Token Service (STS).
Perform this procedure after you have installed the AWS EFS CSI Operator, but before you install the link:https://github.com/openshift/aws-efs-csi-driver[AWS EFS CSI driver], as part of the _Installing the AWS EFS CSI Driver Operator_ procedure.
[IMPORTANT]
====
If you perform this procedure after installing the driver and creating volumes, your volumes will fail to mount into pods.
====
.Prerequisites
* You have access to the cluster as a user with the `cluster-admin` role.
* You have AWS account credentials.
* You have installed the AWS EFS CSI Operator.
.Procedure
. Prepare the AWS account:
.. Create an IAM policy JSON file with the following content:
+
[source,json]
----
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"elasticfilesystem:DescribeAccessPoints",
"elasticfilesystem:DescribeFileSystems",
"elasticfilesystem:DescribeMountTargets",
"ec2:DescribeAvailabilityZones",
"elasticfilesystem:TagResource"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"elasticfilesystem:CreateAccessPoint"
],
"Resource": "*",
"Condition": {
"StringLike": {
"aws:RequestTag/efs.csi.aws.com/cluster": "true"
}
}
},
{
"Effect": "Allow",
"Action": "elasticfilesystem:DeleteAccessPoint",
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:ResourceTag/efs.csi.aws.com/cluster": "true"
}
}
}
]
}
----
.. Create an IAM trust JSON file with the following content:
+
--
[source,json]
----
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>" <1>
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"<openshift_oidc_provider>:sub": [ <2>
"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator",
"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa"
]
}
}
}
]
}
----
<1> Specify your AWS account ID and the OpenShift OIDC provider endpoint. Obtain the endpoint by running the following command:
+
[source,terminal]
----
$ rosa describe cluster \
-c $(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}') \
-o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4
----
+
<2> Specify the OpenShift OIDC endpoint again.
--
.. Create the IAM role:
+
[source,terminal]
----
ROLE_ARN=$(aws iam create-role \
--role-name "<your_cluster_name>-aws-efs-csi-operator" \
--assume-role-policy-document file://<your_trust_file_name>.json \
--query "Role.Arn" --output text); echo $ROLE_ARN
----
+
Save the output. You will use it in the next steps.
.. Create the IAM policy:
+
[source,terminal]
----
POLICY_ARN=$(aws iam create-policy \
--policy-name "<your_rosa_cluster_name>-rosa-efs-csi" \
--policy-document file://<your_policy_file_name>.json \
--query 'Policy.Arn' --output text); echo $POLICY_ARN
----
+
.. Attach the IAM policy to the IAM role:
+
[source,terminal]
----
$ aws iam attach-role-policy \
--role-name "<your_rosa_cluster_name>-aws-efs-csi-operator" \
--policy-arn $POLICY_ARN
----
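+
Optionally, you can verify that the policy is attached to the role. This is an illustrative check; the role name placeholder must match the role you created earlier:
+
[source,terminal]
----
# Illustrative verification step
$ aws iam list-attached-role-policies \
    --role-name "<your_cluster_name>-aws-efs-csi-operator"
----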
+
. Create a `Secret` YAML file for the driver operator:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
name: aws-efs-cloud-credentials
namespace: openshift-cluster-csi-drivers
stringData:
credentials: |-
[default]
sts_regional_endpoints = regional
role_arn = <role_ARN> <1>
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
----
<1> Replace `role_ARN` with the output you saved while creating the role.
. Create the secret:
+
[source,terminal]
----
$ oc apply -f aws-efs-cloud-credentials.yaml
----
+
You are now ready to install the AWS EFS CSI driver.

View File

@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * osd-architecture-models-gcp.adoc
// * osd_install_access_delete_cluster/creating-a-gcp-psc-enabled-private-cluster.adoc
// * osd_gcp_clusters/creating-a-gcp-psc-enabled-private-cluster.adoc
:_mod-docs-content-type: CONCEPT
[id="osd-understanding-private-service-connect_{context}"]

View File

@@ -1,10 +0,0 @@
// Module included in the following assemblies:
//
// * upgrading/rosa-updating-cluster-prepare.adoc
// * upgrading/osd-updating-cluster-prepare.adoc
:_mod-docs-content-type: CONCEPT
[id="update-preparing-evaluate-alerts_{context}"]
= Reviewing alerts to identify uses of removed APIs
The `APIRemovedInNextReleaseInUse` alert tells you that there are removed APIs in use on your cluster. If this alert is firing in your cluster, review the alert and then take action to clear it by migrating manifests and API clients to use the new API version. You can use the `APIRequestCount` API to get more information about which APIs are in use and which workloads are using removed APIs.

View File

@@ -1,45 +0,0 @@
// Module included in the following assemblies:
//
// * upgrading/rosa-updating-cluster-prepare.adoc
// * upgrading/osd-updating-cluster-prepare.adoc
:_mod-docs-content-type: PROCEDURE
[id="update-preparing-evaluate-apirequestcount-workloads_{context}"]
= Using APIRequestCount to identify which workloads are using the removed APIs
You can examine the `APIRequestCount` resource for a given API version to help identify which workloads are using the API.
.Prerequisites
* You must have access to the cluster as a user with the `cluster-admin` role.
.Procedure
* Run the following command and examine the `username` and `userAgent` fields to help identify the workloads that are using the API:
+
[source,terminal]
----
$ oc get apirequestcounts <resource>.<version>.<group> -o yaml
----
+
For example:
+
[source,terminal]
----
$ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o yaml
----
+
You can also use `-o jsonpath` to extract the `username` values from an `APIRequestCount` resource:
+
[source,terminal]
----
$ oc get apirequestcounts ingresses.v1beta1.networking.k8s.io -o jsonpath='{range ..username}{$}{"\n"}{end}' | sort | uniq
----
+
.Example output
[source,terminal]
----
user1
user2
app:serviceaccount:delta
----

View File

@@ -1,58 +0,0 @@
// Module included in the following assemblies:
//
// * upgrading/rosa-updating-cluster-prepare.adoc
// * upgrading/osd-updating-cluster-prepare.adoc
:_mod-docs-content-type: PROCEDURE
[id="update-preparing-evaluate-apirequestcount_{context}"]
= Using APIRequestCount to identify uses of removed APIs
You can use the `APIRequestCount` API to track API requests and review if any of them are using one of the removed APIs.
.Prerequisites
* You must have access to the cluster as a user with the `cluster-admin` role.
.Procedure
* Run the following command and examine the `REMOVEDINRELEASE` column of the output to identify the removed APIs that are currently in use:
+
[source,terminal]
----
$ oc get apirequestcounts
----
+
.Example output
[source,terminal]
----
NAME REMOVEDINRELEASE REQUESTSINCURRENTHOUR REQUESTSINLAST24H
cloudcredentials.v1.operator.openshift.io 32 111
ingresses.v1.networking.k8s.io 28 110
ingresses.v1beta1.extensions 1.22 16 66
ingresses.v1beta1.networking.k8s.io 1.22 0 1
installplans.v1alpha1.operators.coreos.com 93 167
...
----
+
[NOTE]
====
You can safely ignore the following entries that appear in the results:
* `system:serviceaccount:kube-system:generic-garbage-collector` appears in the results because it walks through all registered APIs searching for resources to remove.
* `system:kube-controller-manager` appears in the results because it walks through all resources to count them while enforcing quotas.
====
+
You can also use `-o jsonpath` to filter the results:
+
[source,terminal]
----
$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
----
+
.Example output
[source,terminal]
----
1.22 certificatesigningrequests.v1beta1.certificates.k8s.io
1.22 ingresses.v1beta1.extensions
1.22 ingresses.v1beta1.networking.k8s.io
----

View File

@@ -1,106 +0,0 @@
// Module included in the following assemblies:
//
// * upgrading/rosa-updating-cluster-prepare.adoc
// * upgrading/osd-updating-cluster-prepare.adoc
:_mod-docs-content-type: REFERENCE
[id="update-preparing-list_{context}"]
= Removed Kubernetes APIs
// TODO: Keep michael's section in the release notes (which this duplicates), or link to this from his RN section?
{product-title} 4.9 uses Kubernetes 1.22, which removed the following deprecated `v1beta1` APIs. You must migrate manifests and API clients to use the `v1` API version. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22[Kubernetes documentation].
.`v1beta1` APIs removed from Kubernetes 1.22
[cols="2,2,1",options="header",]
|===
|Resource |API |Notable changes
|APIService
|apiregistration.k8s.io/v1beta1
|No
|CertificateSigningRequest
|certificates.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#certificatesigningrequest-v122[Yes]
|ClusterRole
|rbac.authorization.k8s.io/v1beta1
|No
|ClusterRoleBinding
|rbac.authorization.k8s.io/v1beta1
|No
|CSIDriver
|storage.k8s.io/v1beta1
|No
|CSINode
|storage.k8s.io/v1beta1
|No
|CustomResourceDefinition
|apiextensions.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#customresourcedefinition-v122[Yes]
|Ingress
|extensions/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122[Yes]
|Ingress
|networking.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122[Yes]
|IngressClass
|networking.k8s.io/v1beta1
|No
|Lease
|coordination.k8s.io/v1beta1
|No
|LocalSubjectAccessReview
|authorization.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#subjectaccessreview-resources-v122[Yes]
|MutatingWebhookConfiguration
|admissionregistration.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes]
|PriorityClass
|scheduling.k8s.io/v1beta1
|No
|Role
|rbac.authorization.k8s.io/v1beta1
|No
|RoleBinding
|rbac.authorization.k8s.io/v1beta1
|No
|SelfSubjectAccessReview
|authorization.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#subjectaccessreview-resources-v122[Yes]
|StorageClass
|storage.k8s.io/v1beta1
|No
|SubjectAccessReview
|authorization.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#subjectaccessreview-resources-v122[Yes]
|TokenReview
|authentication.k8s.io/v1beta1
|No
|ValidatingWebhookConfiguration
|admissionregistration.k8s.io/v1beta1
|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes]
|VolumeAttachment
|storage.k8s.io/v1beta1
|No
|===

View File

@@ -1,10 +0,0 @@
// Module included in the following assemblies:
//
// * upgrading/rosa-updating-cluster-prepare.adoc
// * upgrading/osd-updating-cluster-prepare.adoc
:_mod-docs-content-type: CONCEPT
[id="update-preparing-migrate_{context}"]
= Migrating instances of removed APIs
For information on how to migrate removed Kubernetes APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22[Deprecated API Migration Guide] in the Kubernetes documentation.

View File

@@ -13,7 +13,7 @@ To create CSI-provisioned persistent volumes that mount to these supported stora
ifndef::openshift-rosa,openshift-rosa-hcp[]
[IMPORTANT]
====
The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see link:https://access.redhat.com/documentation/en-us/openshift_dedicated/4/html/storage/using-container-storage-interface-csi#osd-persistent-storage-aws-efs-csi[Setting up AWS Elastic File Service CSI Driver Operator]. For instructions on installing the GCP Filestore CSI driver, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.13/html/storage/using-container-storage-interface-csi#persistent-storage-csi-google-cloud-file-overview[Google Compute Platform Filestore CSI Driver Operator].
The AWS EFS and GCP Filestore CSI drivers are not installed by default, and must be installed manually. For instructions on installing the AWS EFS CSI driver, see link:https://docs.redhat.com/documentation/openshift_dedicated/4/html/storage/using-container-storage-interface-csi#persistent-storage-efs-csi-driver-operator-setup_persistent-storage-csi-aws-efs[Setting up AWS Elastic File Service CSI Driver Operator]. For instructions on installing the GCP Filestore CSI driver, see link:https://docs.redhat.com/documentation/openshift_container_platform/{product-version}/html/storage/using-container-storage-interface-csi#persistent-storage-csi-google-cloud-file-overview[Google Cloud Filestore CSI Driver Operator].
====
endif::openshift-rosa,openshift-rosa-hcp[]

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE
[id="csi-dynamic-provisioning-aws-efs_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: REFERENCE
[id="efs-security_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE
[id="efs-create-static-pv_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: REFERENCE
[id="efs-troubleshooting_{context}"]

View File

@@ -2,7 +2,6 @@
//
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE
[id="persistent-storage-csi-olm-operator-uninstall_{context}"]

View File

@@ -1,10 +0,0 @@
// Module included in the following assemblies:
//
// * osd_install_access_delete_cluster/creating-a-gcp-psc-enabled-private-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="private-service-connect-create"]
= Creating a private cluster with Private Service Connect
Private Service Connect is supported with the Customer Cloud Subscription (CCS) infrastructure type only. To create an {product-title} cluster on {GCP} using PSC, see
xref:../osd_gcp_clusters/creating-a-gcp-cluster.adoc#osd-create-gcp-cluster-ccs_osd-creating-a-cluster-on-gcp[Creating a cluster on {gcp-short} with CCS].

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * osd_install_access_delete_cluster/creating-a-gcp-psc-enabled-private-cluster.adoc
// * osd_gcp_clusters/creating-a-gcp-psc-enabled-private-cluster.adoc
:_mod-docs-content-type: PROCEDURE
[id="private-service-connect-prereqs"]

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * osd_install_access_delete_cluster/creating-a-gcp-psc-enabled-private-cluster.adoc
// * osd_gcp_clusters/creating-a-gcp-psc-enabled-private-cluster.adoc
// * architecture/osd-architecture-models-gcp.adoc
:_mod-docs-content-type: CONCEPT

View File

@@ -1,64 +0,0 @@
// Module included in the following assemblies:
//
// * security/security_profiles_operator/spo-troubleshooting.adoc
:_mod-docs-content-type: PROCEDURE
[id="spo-memory-profiling_{context}"]
= Enable CPU and memory profiling
You can enable the CPU and memory profiling endpoints for debugging purposes.
.Procedure
. To use the profiling support, patch the `spod` configuration and set the `enableProfiling` value by running the following command:
+
[source,terminal]
----
$ oc -n openshift-security-profiles patch spod \
spod --type=merge -p '{"spec":{"enableProfiling":true}}'
----
+
.Example output
[source,terminal]
----
securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched
----
. Verify the `openshift-security-profiles` container is serving the profile endpoint by running the following command:
+
[source,terminal]
----
$ oc logs --selector name=spod -c openshift-security-profiles | grep "Starting profiling"
----
+
.Example output
[source,terminal]
----
I1202 15:14:40.276363 2185724 main.go:226] "msg"="Starting profiling server" "endpoint"="localhost:6060"
----
. Verify the `log-enricher` container is serving the profile endpoint by running the following command:
+
[source,terminal]
----
$ oc logs --selector name=spod -c log-enricher | grep "Starting profiling"
----
+
.Example output
[source,terminal]
----
I1202 15:14:40.364046 2185814 main.go:226] "msg"="Starting profiling server" "endpoint"="localhost:6061"
----
. Verify the `bpf-recorder` container is serving the profile endpoint by running the following command:
+
[source,terminal]
----
$ oc logs --selector name=spod -c bpf-recorder | grep "Starting profiling"
----
+
.Example output
[source,terminal]
----
I1202 15:14:40.457506 2185914 main.go:226] "msg"="Starting profiling server" "endpoint"="localhost:6062"
----
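After profiling is enabled, you can collect profiles from the endpoints. The following is a minimal sketch that assumes the profiling server exposes the standard Go `net/http/pprof` paths and that `curl` is available in the container image; adjust the container name and port (`6060`, `6061`, or `6062`) for the component that you want to profile:
[source,terminal]
----
# Illustrative example; assumes standard pprof paths and curl in the container
$ POD=$(oc -n openshift-security-profiles get pods -l name=spod \
    -o jsonpath='{.items[0].metadata.name}')
$ oc -n openshift-security-profiles exec $POD -c openshift-security-profiles -- \
    curl -s http://localhost:6060/debug/pprof/heap > heap.pprof
----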

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * storage/persistent_storage/persistent-storage-aws-efs-csi.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE
[id="storage-create-storage-class-cli_{context}"]

View File

@@ -1,7 +1,6 @@
// Module included in the following assemblies:
//
// * storage/persistent_storage/persistent-storage-aws-efs-csi.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE
[id="storage-create-storage-class-console_{context}"]

View File

@@ -7,7 +7,6 @@
//
// * storage/persistent_storage/persistent-storage-aws.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc
:_mod-docs-content-type: PROCEDURE
[id="storage-create-storage-class_{context}"]

View File

@@ -1,143 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * observability/logging/sd-accessing-the-service-logs.adoc
:_mod-docs-content-type: PROCEDURE
[id="viewing-the-service-logs-cli_{context}"]
= Viewing the service logs by using the CLI
You can view the service logs for
ifdef::openshift-dedicated[]
{product-title}
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
{product-title} (ROSA)
endif::openshift-rosa[]
clusters by using the {cluster-manager} CLI (`ocm`).
You can view the logs for a specific cluster or for all available clusters in your Red Hat organization. You can also filter the service logs, for example by severity or by log ID.
.Prerequisites
* You have installed an {product-title} cluster.
* You are the cluster owner or you have the cluster editor role.
* You have installed and configured the latest {cluster-manager} CLI (`ocm`) on your installation host.
+
[NOTE]
====
You can download the latest version of the {cluster-manager} CLI (`ocm`) on the link:https://console.redhat.com/openshift/downloads[{cluster-manager} downloads] page.
====
.Procedure
. View the service logs for a cluster:
.. List the clusters in your Red Hat organization:
+
[source,terminal]
----
$ ocm list clusters
----
+
.Example output
[source,terminal]
----
ID NAME API URL OPENSHIFT_VERSION PRODUCT ID CLOUD_PROVIDER REGION ID STATE
ifdef::openshift-dedicated[]
1t1398ndq653vjf317a32cfjvee771dc mycluster https://api.mycluster.cdrj.p1.openshiftapps.com:6443 4.10.18 osd aws us-east-1 ready
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
1t1398ndq653vjf317a32cfjvee771dc mycluster https://api.mycluster.cdrj.p1.openshiftapps.com:6443 4.10.18 rosa aws us-east-1 ready
endif::openshift-rosa[]
----
+
.. Obtain the external cluster ID for your cluster:
+
[source,terminal]
----
$ ocm describe cluster <cluster_name> <1>
----
<1> Replace `<cluster_name>` with the name of your cluster.
+
.Example output
[source,terminal]
----
ID: 1t1298nhq824vjf347q12cpjvee771hc
External ID: f3f1a6c1-2b2b-4a55-854c-fd65e26b737b
...
----
+
.. View the service logs for your cluster:
+
[source,terminal]
----
$ ocm get /api/service_logs/v1/cluster_logs --parameter search="cluster_uuid = '<external_cluster_id>'" <1>
----
<1> Replace `<external_cluster_id>` with the external cluster ID that you obtained in the preceding step.
+
.Example output
[source,terminal]
----
{
"kind": "ClusterLogList",
"page": 1,
"size": 1,
"total": 1,
"items": [
{
"id": "1AyuZCfRwUEwkUEbyKJqjUsdRdj",
"kind": "ClusterLog",
"href": "/api/service_logs/v1/cluster_logs/1AyuZCfRwUEwkUEbyKJqjUsdRdj",
"timestamp": "2022-06-23T14:23:19.078551Z",
"severity": "Info",
"service_name": "AccountManager",
"cluster_uuid": "f3f1a6c1-2b2b-4a55-854c-fd65e26b737b",
"summary": "Cluster registered successfully",
"description": "Cluster installation completed and the cluster registered successfully.",
"event_stream_id": "3ByuXECLcWsfFvVMIOhiH8YCxEk",
"created_by": "service-account-ocm-ams-service",
"created_at": "2022-06-23T14:23:19.10425Z",
"username": "service-account-telemeter-service"
}
]
}
----
. View the service logs for all available clusters in your Red Hat organization:
+
[source,terminal]
----
$ ocm get /api/service_logs/v1/cluster_logs
----
. View the service logs for all available clusters in your Red Hat organization and sort the results by cluster ID:
+
[source,terminal]
----
$ ocm get /api/service_logs/v1/cluster_logs --parameter orderBy="cluster_uuid"
----
. Filter the service logs by severity:
+
[source,terminal]
----
$ ocm get /api/service_logs/v1/cluster_logs --parameter search="severity = '<severity>'" <1>
----
<1> Replace `<severity>` with the severity type. The available values are `Debug`, `Info`, `Warning`, `Error`, and `Fatal`.
+
[NOTE]
====
You can include multiple search filters in your parameter specification. For example, you can filter the service logs for a specific cluster by severity by using `--parameter search="cluster_uuid = '<external_cluster_id>' and severity = '<severity>'"`.
====
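+
For example, a combined filter for error-level entries from a single cluster looks like the following. Replace the placeholder with the external cluster ID that you obtained earlier:
+
[source,terminal]
----
$ ocm get /api/service_logs/v1/cluster_logs --parameter search="cluster_uuid = '<external_cluster_id>' and severity = 'Error'"
----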
. View a specific service log entry by specifying the log ID:
+
[source,terminal]
----
$ ocm get /api/service_logs/v1/cluster_logs/<log_id> <1>
----
<1> Replace `<log_id>` with the ID of the log entry.

View File

@@ -1,38 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * observability/logging/sd-accessing-the-service-logs.adoc
:_mod-docs-content-type: PROCEDURE
[id="viewing-the-service-logs-ocm_{context}"]
= Viewing the service logs by using {cluster-manager}
You can view the service logs for
ifdef::openshift-dedicated[]
an {product-title}
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
a {product-title} (ROSA)
endif::openshift-rosa[]
cluster by using {cluster-manager-first}.
.Prerequisites
* You have installed
ifdef::openshift-dedicated[]
an {product-title}
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
a ROSA
endif::openshift-rosa[]
cluster.
.Procedure
. Navigate to {cluster-manager-url} and select your cluster.
. In the *Overview* page for your cluster, view the service logs in the *Cluster history* section.
. Optional: Filter the cluster service logs by *Description* or *Severity* from the drop-down menu. You can filter further by entering a specific item in the search bar.
. Optional: Click *Download history* to download the service logs for your cluster in JSON or CSV format.

View File

@@ -1,10 +0,0 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * observability/logging/sd-accessing-the-service-logs.adoc
:_mod-docs-content-type: PROCEDURE
[id="viewing-the-service-logs_{context}"]
= Viewing the service logs
You can view the service logs for your clusters by using {cluster-manager-first} or the {cluster-manager} CLI (`ocm`).

View File

@@ -1,9 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: dedicated-cluster-deploying
[id="dedicated-cluster-deploying"]
= Installing the Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator
include::_attributes/common-attributes.adoc[]
toc::[]
include::modules/dedicated-cluster-install-deploy.adoc[leveloffset=+1]

View File

@@ -1,63 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: dedicated-cluster-logging
[id="dedicated-cluster-logging"]
= Configuring {logging}
include::_attributes/common-attributes.adoc[]
toc::[]
As a cluster administrator, you can deploy the {logging} to aggregate logs for a range of services.
{product-title} clusters can perform logging tasks using the OpenShift Elasticsearch Operator.
The {logging} is configurable using a `ClusterLogging` custom resource (CR)
deployed in the `openshift-logging` project namespace.
The Red Hat OpenShift Logging Operator watches for changes to the `ClusterLogging` CR, creates
any missing logging components, and adjusts the logging environment accordingly.
The `ClusterLogging` CR is based on the `ClusterLogging` custom resource
definition (CRD), which defines a complete OpenShift Logging environment and
includes all the components of the logging stack to collect, store, and visualize
logs.
The `retentionPolicy` parameter in the `ClusterLogging` custom resource (CR) defines how long the internal Elasticsearch log store retains logs.
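As a sketch of one way to set a retention period after the CR is deployed, you can patch the resource from the command line. This example assumes the default `instance` name in the `openshift-logging` namespace and the `retentionPolicy` fields (`application`, `infra`, and `audit`, each with a `maxAge` value) supported by the Elasticsearch log store:
[source,terminal]
----
# Illustrative example; adjust the retention period for your use case
$ oc -n openshift-logging patch clusterlogging instance --type merge \
    -p '{"spec":{"logStore":{"retentionPolicy":{"application":{"maxAge":"7d"}}}}}'
----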
.Sample `ClusterLogging` custom resource (CR)
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
managementState: "Managed"
logStore:
type: "elasticsearch"
elasticsearch:
nodeCount: 3
storage:
storageClassName: "gp2"
size: "200Gi"
redundancyPolicy: "SingleRedundancy"
nodeSelector:
node-role.kubernetes.io/worker: ""
resources:
limits:
memory: 16G
request:
memory: 16G
visualization:
type: "kibana"
kibana:
replicas: 1
nodeSelector:
node-role.kubernetes.io/worker: ""
collection:
logs:
type: "fluentd"
fluentd: {}
nodeSelector:
node-role.kubernetes.io/worker: ""
----

View File

@@ -1,33 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="sd-accessing-the-service-logs"]
= Accessing the service logs for {product-title} clusters
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: sd-accessing-the-service-logs
toc::[]
[role="_abstract"]
You can view the service logs for your {product-title}
ifdef::openshift-rosa[]
(ROSA)
endif::[]
clusters by using the {cluster-manager-first}. The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//You can view the service logs for your {product-title} (ROSA) clusters by using {cluster-manager-first} or the {cluster-manager} CLI (`ocm`). The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
Additionally, you can add notification contacts for
ifdef::openshift-rosa[]
a ROSA
endif::[]
ifdef::openshift-dedicated[]
an {product-title}
endif::[]
cluster. Subscribed users receive emails about cluster events that require customer action, known cluster incidents, upgrade maintenance, and other topics.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//include::modules/viewing-the-service-logs.adoc[leveloffset=+1]
//include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+2]
//include::modules/viewing-the-service-logs-cli.adoc[leveloffset=+2]
include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+1]
include::modules/adding-cluster-notification-contacts.adoc[leveloffset=+1]

View File

@@ -1,38 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="osd-admission-plug-ins"]
= Admission plugins
include::_attributes/common-attributes.adoc[]
:context: admission-plug-ins
toc::[]
Admission plugins are used to help regulate how {product-title} functions.
// Concept modules
include::modules/admission-plug-ins-about.adoc[leveloffset=+1]
include::modules/admission-plug-ins-default.adoc[leveloffset=+1]
include::modules/admission-webhooks-about.adoc[leveloffset=+1]
include::modules/admission-webhook-types.adoc[leveloffset=+1]
// user (groups=["dedicated-admins" "system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held, clusterroles.rbac.authorization.k8s.io "system:openshift:online:my-webhook-server" not found, cannot get resource "rolebindings", cannot create resource "apiservices", cannot create resource "validatingwebhookconfigurations"
ifndef::openshift-rosa,openshift-dedicated[]
// Procedure module
include::modules/configuring-dynamic-admission.adoc[leveloffset=+1]
endif::openshift-rosa,openshift-dedicated[]
[role="_additional-resources"]
[id="admission-plug-ins-additional-resources"]
== Additional resources
ifndef::openshift-rosa,openshift-dedicated[]
* xref:../networking/hardware_networks/configuring-sriov-operator.adoc#configuring-sriov-operator[Limiting custom network resources managed by the SR-IOV network device plugin]
endif::openshift-rosa,openshift-dedicated[]
* xref:../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations_dedicating_nodes-scheduler-taints-tolerations[Defining tolerations that enable taints to qualify which pods should be scheduled on a node]
* xref:../nodes/pods/nodes-pods-priority.adoc#admin-guide-priority-preemption-names_nodes-pods-priority[Pod priority class validation]

View File

@@ -1,13 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="cluster-administrator-role"]
= The cluster-admin role
include::_attributes/common-attributes.adoc[]
:context: cluster-administrator
toc::[]
As an administrator of {product-title} with Customer Cloud Subscriptions (link:https://www.openshift.com/dedicated/ccs[CCS]), you can request additional permissions and access to the *cluster-admin* role within your organization's cluster. While logged in to an account with the *cluster-admin* role, users have increased permissions to run privileged security contexts and install additional Operators for their environment.
include::modules/dedicated-cluster-admin-enable.adoc[leveloffset=+1]
include::modules/dedicated-cluster-admin-grant.adoc[leveloffset=+1]

View File

@@ -1,54 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="dedicated-administrator-role"]
= The dedicated-admin role
include::_attributes/common-attributes.adoc[]
:context: dedicated-administrator
toc::[]
As an administrator of an {product-title} cluster, your account has additional
permissions and access to all user-created projects in your organization's
cluster. While logged in to an account with this role, the basic developer CLI
(the `oc` command) gives you increased visibility and management capabilities
over objects across projects, while the administrator CLI (commands under the
`oc adm` command) allows you to complete additional operations.
[NOTE]
====
While your account does have these increased permissions, the actual cluster
maintenance and host configuration is still performed by the OpenShift
Operations Team. If you would like to request a change to your cluster that you
cannot perform using the administrator CLI, open a support case on the
link:https://access.redhat.com/support/[Red Hat Customer Portal].
====
include::modules/dedicated-logging-in-and-verifying-permissions.adoc[leveloffset=+1]
include::modules/dedicated-managing-dedicated-administrators.adoc[leveloffset=+1]
include::modules/dedicated-admin-granting-permissions.adoc[leveloffset=+1]
include::modules/dedicated-managing-service-accounts.adoc[leveloffset=+1]
include::modules/dedicated-managing-quotas-and-limit-ranges.adoc[leveloffset=+1]
[id="osd-installing-operators-from-software-catalog_{context}"]
== Installing Operators from the software catalog
{product-title} administrators can install Operators from a curated list
provided by the software catalog. This makes the Operator available to all developers
on your cluster, who can then create custom resources and applications that use the Operator.
[NOTE]
====
Privileged and custom Operators cannot be installed.
====
Administrators can only install Operators to the default `openshift-operators`
namespace, except for the Red Hat OpenShift Logging Operator, which requires the
`openshift-logging` namespace.
[role="_additional-resources"]
.Additional resources
* xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[Adding Operators to a cluster]

View File

@@ -1 +0,0 @@
../../_attributes

View File

@@ -1 +0,0 @@
../../images

View File

@@ -1 +0,0 @@
../../modules

View File

@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="osd-accessing-the-service-logs"]
= Accessing the service logs for OpenShift Dedicated clusters
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: osd-accessing-the-service-logs
toc::[]
[role="_abstract"]
You can view the service logs for your {product-title} clusters by using {cluster-manager-first}. The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//You can view the service logs for your {product-title} clusters by using {cluster-manager-first} or the {cluster-manager} CLI (`ocm`). The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
Additionally, you can add notification contacts for an {product-title} cluster. Subscribed users receive emails about cluster events that require customer action, known cluster incidents, upgrade maintenance, and other topics.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//include::modules/viewing-the-service-logs.adoc[leveloffset=+1]
//include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+2]
//include::modules/viewing-the-service-logs-cli.adoc[leveloffset=+2]
include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+1]
include::modules/adding-cluster-notification-contacts.adoc[leveloffset=+1]

View File

@@ -1 +0,0 @@
../../snippets

View File

@@ -1,25 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="config-identity-providers"]
= Configuring identity providers
:context: config-identity-providers
toc::[]
After your {product-title} cluster is created, you must configure identity providers to determine how users log in to access the cluster.
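For example, if you plan to use the htpasswd identity provider, you need a credentials file to upload through {cluster-manager-first}. The following sketch uses the Apache `htpasswd` utility; the file name, user name, and password are placeholders:

[source,terminal]
----
# Create a new htpasswd file with one bcrypt-hashed user entry
$ htpasswd -c -B -b users.htpasswd <user_name> <password>
----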
include::modules/understanding-idp.adoc[leveloffset=+1]
include::modules/identity-provider-parameters.adoc[leveloffset=+2]
include::modules/config-github-idp.adoc[leveloffset=+1]
include::modules/config-gitlab-idp.adoc[leveloffset=+1]
include::modules/config-google-idp.adoc[leveloffset=+1]
include::modules/config-ldap-idp.adoc[leveloffset=+1]
include::modules/config-openid-idp.adoc[leveloffset=+1]
include::modules/config-htpasswd-idp.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../osd_architecture/osd_policy/osd-service-definition.adoc#cluster-admin-user_osd-service-definition[Customer administrator user]
include::modules/access-cluster.adoc[leveloffset=+1]

View File

@@ -20,9 +20,7 @@ include::modules/service-account-auth-overview.adoc[leveloffset=+1]
include::modules/osd-create-cluster-ccs.adoc[leveloffset=+1]
//include::modules/osd-create-cluster-gcp-account.adoc[leveloffset=+1]
// include::modules/osd-create-cluster-red-hat-account.adoc[leveloffset=+1]
//include::modules/osd-create-cluster-rhm-gcp-account.adoc[leveloffset=+1]
[id="additional-resources_{context}"]
== Additional resources

View File

@@ -1,31 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="osd-creating-a-cluster-on-aws"]
= Creating a cluster on AWS
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: osd-creating-a-cluster-on-aws
toc::[]
[role="_abstract"]
You can deploy {product-title} on {AWS} by using your own AWS account through the Customer Cloud Subscription (CCS) model or by using an AWS infrastructure account that is owned by Red Hat.
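When you deploy through the CCS model, the cluster is installed into an AWS account that you own. As an illustrative check that is not part of this assembly, you can confirm which AWS account your local credentials resolve to before entering the account details in {cluster-manager-first}:

[source,terminal]
----
# Print the 12-digit AWS account ID for the active credentials
$ aws sts get-caller-identity --query Account --output text
----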
[id="osd-creating-a-cluster-on-aws-prerequisites_{context}"]
== Prerequisites
* You reviewed the xref:../osd_architecture/osd-understanding.adoc#osd-understanding[introduction to {product-title}] and the documentation on xref:../architecture/index.adoc#architecture-overview[architecture concepts].
* You reviewed the xref:../osd_getting_started/osd-understanding-your-cloud-deployment-options.adoc#osd-understanding-your-cloud-deployment-options[{product-title} cloud deployment options].
include::modules/osd-create-cluster-ccs-aws.adoc[leveloffset=+1]
[id="additional-resources_{context}"]
== Additional resources
* For information about configuring a proxy with {product-title}, see xref:../networking/ovn_kubernetes_network_provider/configuring-cluster-wide-proxy.adoc#configuring-a-cluster-wide-proxy[Configuring a cluster-wide proxy].
* For details about the AWS service control policies required for CCS deployments, see xref:../osd_planning/aws-ccs.adoc#ccs-aws-scp_aws-ccs[Minimum required service control policy (SCP)].
* For information about persistent storage for {product-title}, see the xref:../osd_architecture/osd_policy/osd-service-definition.adoc#sdpolicy-storage_osd-service-definition[Storage] section in the {product-title} service definition.
* For information about load balancers for {product-title}, see the xref:../osd_architecture/osd_policy/osd-service-definition.adoc#load-balancers_osd-service-definition[Load balancers] section in the {product-title} service definition.
* For more information about etcd encryption, see the xref:../osd_architecture/osd_policy/osd-service-definition.adoc#etcd-encryption_osd-service-definition[etcd encryption service definition].
* For information about the end-of-life dates for {product-title} versions, see the xref:../osd_architecture/osd_policy/osd-life-cycle.adoc#osd-life-cycle[{product-title} update life cycle].
* For information about the requirements for custom additional security groups, see xref:../osd_planning/aws-ccs.adoc#osd-security-groups-custom_aws-ccs[Additional custom security groups].
* For information about configuring identity providers, see xref:../authentication/sd-configuring-identity-providers.adoc#sd-configuring-identity-providers[Configuring identity providers].
* For information about revoking cluster privileges, see xref:../authentication/osd-revoking-cluster-privileges.adoc#osd-revoking-cluster-privileges[Revoking privileges and access to an {product-title} cluster].

View File

@@ -1,12 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="osd-deleting-a-cluster"]
= Deleting an {product-title} cluster
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: osd-deleting-a-cluster
toc::[]
[role="_abstract"]
As cluster owner, you can delete your {product-title} clusters.
include::modules/deleting-cluster.adoc[leveloffset=+1]

View File

@@ -9,5 +9,3 @@ toc::[]
Troubleshoot the Security Profiles Operator to diagnose a problem or provide information in a bug report.
include::modules/spo-inspecting-seccomp-profiles.adoc[leveloffset=+1]
// include::modules/spo-memory-profiling.adoc[leveloffset=+2]

View File

@@ -1,7 +1,6 @@
// Text snippet included in the following modules:
// * OSD files
// * modules/create-wif-cluster-ocm.adoc
// * modules/osd-create-cluster-ccs-gcp.adoc
// * modules/osd-create-cluster-ccs-aws.adoc
// * modules/ccs-gcp-provisioned.adoc
// * modules/ccs-aws-provisioned.adoc

View File

@@ -1,37 +0,0 @@
[id="osd-updating-cluster-prepare"]
= Preparing to upgrade {product-title} to 4.9
include::_attributes/common-attributes.adoc[]
ifdef::openshift-dedicated,openshift-rosa[]
include::_attributes/attributes-openshift-dedicated.adoc[]
endif::[]
:context: osd-updating-cluster-prepare
toc::[]
Upgrading your {product-title} clusters to OpenShift 4.9 requires you to evaluate and migrate your APIs, because the version of Kubernetes that OpenShift 4.9 is based on removes a significant number of deprecated APIs.
Before you can upgrade your {product-title} clusters, you must also update the required tools to the appropriate version.
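After all removed APIs have been migrated, the upgrade is gated on an explicit administrator acknowledgement, which the acknowledgement module included below describes. As a sketch of what that acknowledgement typically looks like for the 4.8 to 4.9 path, assuming the standard `admin-acks` config map gate:

[source,terminal]
----
$ oc -n openshift-config patch cm admin-acks \
    --patch '{"data":{"ack-4.8-kube-1.22-api-removals-in-4.9":"true"}}' \
    --type=merge
----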
include::modules/upgrade-49-acknowledgement.adoc[leveloffset=+1]
// Removed Kubernetes APIs
include::modules/osd-update-preparing-list.adoc[leveloffset=+1]
[id="osd-evaluating-cluster-removed-apis"]
== Evaluating your cluster for removed APIs
There are several methods to help administrators identify where APIs that will be removed are still in use. However, {product-title} cannot identify every instance, especially workloads that are idle or external tools that access the cluster. It is the administrator's responsibility to properly evaluate all workloads and other integrations for instances of removed APIs.
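As one concrete starting point, and only a sketch of the approach that the modules included below walk through in detail, the `APIRequestCount` API reports recent requests for each API version; entries with a value in the `REMOVEDINRELEASE` column identify removed APIs that are still being used:

[source,terminal]
----
# List request counts for every API; check the REMOVEDINRELEASE column
$ oc get apirequestcounts
----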
// Reviewing alerts to identify uses of removed APIs
include::modules/osd-update-preparing-evaluate-alerts.adoc[leveloffset=+2]
// Using APIRequestCount to identify uses of removed APIs
include::modules/osd-update-preparing-evaluate-apirequestcount.adoc[leveloffset=+2]
// Using APIRequestCount to identify which workloads are using the removed APIs
include::modules/osd-update-preparing-evaluate-apirequestcount-workloads.adoc[leveloffset=+2]
// Migrating instances of removed APIs
include::modules/osd-update-preparing-migrate.adoc[leveloffset=+1]