diff --git a/cloud_experts_tutorials/cloud-experts-rosa-hcp-activation-and-account-linking-tutorial.adoc b/cloud_experts_tutorials/cloud-experts-rosa-hcp-activation-and-account-linking-tutorial.adoc
index 87a2f52f5a..a365020dac 100644
--- a/cloud_experts_tutorials/cloud-experts-rosa-hcp-activation-and-account-linking-tutorial.adoc
+++ b/cloud_experts_tutorials/cloud-experts-rosa-hcp-activation-and-account-linking-tutorial.adoc
@@ -68,7 +68,7 @@ image::rosa-continue-rh-6.png[]
 +
 image::rosa-login-rh-account-7.png[]
 +
-* You can also register for a new Red{nbsp}Hat account or reset your password on this page. 
+* You can also register for a new Red{nbsp}Hat account or reset your password on this page.
 * Make sure to log in to the Red{nbsp}Hat account that you plan to associate with the AWS account where you have activated {hcp-title} in previous steps.
 * Only a single AWS account that will be used for service billing can be associated with a Red{nbsp}Hat account. Typically an organizational AWS account that has other AWS accounts, such as developer accounts, linked would be the one that is to be billed, rather than individual AWS end user accounts.
 * Red{nbsp}Hat accounts belonging to the same Red{nbsp}Hat organization will be linked with the same AWS account. Therefore, you can manage who has access to creating {hcp-title} clusters on the Red{nbsp}Hat organization account level.
@@ -133,7 +133,7 @@ Do not share your unique token.
 
 . The final prerequisite before your first cluster deployment is making sure the necessary account-wide roles and policies are created. The `rosa` CLI can help with that by using the command shown under step _2.2. To create the necessary account-wide roles and policies quickly…_ on the web console. The alternative to that is manual creation of these roles and policies.
 
-. After logging in, creating the account roles, and verifying your identity using the `rosa whoami` command, your terminal will look similar to this: 
+. After logging in, creating the account roles, and verifying your identity using the `rosa whoami` command, your terminal will look similar to this:
 +
 image::rosa-whoami-14.png[]
 
@@ -174,7 +174,7 @@ image::rosa-deploy-ui-19.png[]
 +
 image::rosa-deploy-ui-hcp-20.png[]
 
-. The next step *Accounts and roles* allows you specifying the infrastructure AWS account, into which the the ROSA cluster will be deployed and where the resources will be consumed and managed:
+. The next step *Accounts and roles* allows you to specify the infrastructure AWS account into which the ROSA cluster will be deployed and where the resources will be consumed and managed:
 +
 image::rosa-ui-account-21.png[]
 +
@@ -191,12 +191,9 @@ image::rosa-ui-billing-22.png[]
 * You can see an indicator whether the ROSA contract is enabled for a given AWS billing account or not.
 * In case you would like to use an AWS account that does not have a contract enabled yet, you can either use the _Connect ROSA to a new AWS billing account_ to reach the ROSA AWS console page, where you can activate it after logging in using the respective AWS account by following steps described earlier in this tutorial, or ask the administrator of the AWS account to do that for you.
 
-The following steps past the billing AWS account selection are beyond the scope of this tutorial. 
+The following steps past the billing AWS account selection are beyond the scope of this tutorial.
 
 .Additional resources
 * For information on using the CLI to create a cluster, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-cli_rosa-hcp-sts-creating-a-cluster-quickly[Creating a ROSA with HCP cluster using the CLI].
 * See link:https://cloud.redhat.com/learning/learn:getting-started-red-hat-openshift-service-aws-rosa/resource/resources:how-deploy-cluster-red-hat-openshift-service-aws-using-console-ui[this learning path] for more details on how to complete ROSA cluster deployment using the web console.
 
-
-
-
diff --git a/modules/agent-installer-architectures.adoc b/modules/agent-installer-architectures.adoc
index ad6635367d..68afb0c3cd 100644
--- a/modules/agent-installer-architectures.adoc
+++ b/modules/agent-installer-architectures.adoc
@@ -8,23 +8,23 @@
 
 Before installing an {product-title} cluster using the Agent-based Installer, you can verify the supported architecture on which you can install the cluster. This procedure is optional.
 
-.Prerequisites:
+.Prerequisites
 
 * You installed the {oc-first}.
 * You have downloaded the installation program.
 
-.Procedure:
+.Procedure
 
 . Log in to the {oc-first}.
 
-. Check your release payload by running the following command: 
+. Check your release payload by running the following command:
 [source,terminal]
 +
 ----
 $ ./openshift-install version
 ----
 +
-.Example output 
+.Example output
 [source,terminal]
 ----
 ./openshift-install 4.16.0
@@ -49,4 +49,4 @@ $ oc adm release info -o jsonpath="{ .metadata.metadata}" <1>
 {"release.openshift.io architecture":"multi"}
 ----
 +
-If you are using the release image with the `multi` payload, you can install the cluster on different architectures such as `arm64`, `amd64`, `s390x`, and `ppc64le`. Otherwise, you can install the cluster only on the `release architecture` displayed in the output of the `openshift-install version` command.
\ No newline at end of file
+If you are using the release image with the `multi` payload, you can install the cluster on different architectures such as `arm64`, `amd64`, `s390x`, and `ppc64le`. Otherwise, you can install the cluster only on the `release architecture` displayed in the output of the `openshift-install version` command.
diff --git a/modules/cleaning-crio-storage.adoc b/modules/cleaning-crio-storage.adoc
index 7c948efa44..f2aa0b6bd1 100644
--- a/modules/cleaning-crio-storage.adoc
+++ b/modules/cleaning-crio-storage.adoc
@@ -31,7 +31,7 @@ can't stat lower layer ... because it does not exist. Going through storage to
 
 Follow this process to completely wipe the CRI-O storage and resolve the errors.
 
-.Prerequisites:
+.Prerequisites
 
 * You have access to the cluster as a user with the `cluster-admin` role.
 * You have installed the OpenShift CLI (`oc`).
diff --git a/modules/cnf-configuring-kubelet-nro.adoc b/modules/cnf-configuring-kubelet-nro.adoc
index 5c89b2462a..805d1c5f94 100644
--- a/modules/cnf-configuring-kubelet-nro.adoc
+++ b/modules/cnf-configuring-kubelet-nro.adoc
@@ -8,7 +8,7 @@
 
 The recommended way to configure a single NUMA node policy is to apply a performance profile. Another way is by creating and applying a `KubeletConfig` custom resource (CR), as shown in the following procedure.
 
-.Procedure 
+.Procedure
 
 . Create the `KubeletConfig` custom resource (CR) that configures the pod admittance policy for the machine profile:
 
@@ -41,7 +41,7 @@ spec:
       memory: "512Mi"
     topologyManagerPolicy: "single-numa-node" <5>
 ----
-<1> Adjust this label to match the the `machineConfigPoolSelector` in the `NUMAResourcesOperator` CR.
+<1> Adjust this label to match the `machineConfigPoolSelector` in the `NUMAResourcesOperator` CR.
 <2> For `cpuManagerPolicy`, `static` must use a lowercase `s`.
 <3> Adjust this based on the CPU on your nodes.
 <4> For `memoryManagerPolicy`, `Static` must use an uppercase `S`.
@@ -56,5 +56,5 @@ $ oc create -f nro-kubeletconfig.yaml
 +
 [NOTE]
 ====
-Applying performance profile or `KubeletConfig` automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in `KubeletConfig` that address the node group. 
+Applying a performance profile or `KubeletConfig` automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in `KubeletConfig` that address the node group.
 ====
diff --git a/modules/cnf-image-based-upgrade-rollback.adoc b/modules/cnf-image-based-upgrade-rollback.adoc
index cdc361319d..142186b157 100644
--- a/modules/cnf-image-based-upgrade-rollback.adoc
+++ b/modules/cnf-image-based-upgrade-rollback.adoc
@@ -44,7 +44,7 @@ $ oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "R
 The {lcao} reboots the cluster with the previously installed version of {product-title} and restores the applications.
 --
 
-. If you are satisfied with the changes, finalize the the rollback by patching the value of the `stage` field to `Idle` in the `ImageBasedUpgrade` CR by running the following command:
+. If you are satisfied with the changes, finalize the rollback by patching the value of the `stage` field to `Idle` in the `ImageBasedUpgrade` CR by running the following command:
 +
 --
 [source,terminal]
@@ -56,4 +56,4 @@ $ oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{"spec": {"stage": "I
 ====
 If you move to the `Idle` stage after a rollback, the {lcao} cleans up resources that can be used to troubleshoot a failed upgrade.
 ====
---
\ No newline at end of file
+--
diff --git a/modules/customize-certificates-add-service-serving.adoc b/modules/customize-certificates-add-service-serving.adoc
index 3cb4d86fb9..a4991bda43 100644
--- a/modules/customize-certificates-add-service-serving.adoc
+++ b/modules/customize-certificates-add-service-serving.adoc
@@ -18,7 +18,7 @@ Because the generated certificates contain wildcard subjects for headless servic
 * Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates.
 ====
 
-.Prerequisites:
+.Prerequisites
 
 * You must have a service defined.
 
diff --git a/modules/ipi-install-troubleshooting-investigating-an-unavailable-kubernetes-api.adoc b/modules/ipi-install-troubleshooting-investigating-an-unavailable-kubernetes-api.adoc
index da84830e2a..cb3e39e1a9 100644
--- a/modules/ipi-install-troubleshooting-investigating-an-unavailable-kubernetes-api.adoc
+++ b/modules/ipi-install-troubleshooting-investigating-an-unavailable-kubernetes-api.adoc
@@ -1,4 +1,4 @@
-// This module is included in the following assemblies: 
+// This module is included in the following assemblies:
 //
 // installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc
 
@@ -17,7 +17,7 @@ When the Kubernetes API is unavailable, check the control plane nodes to ensure
 $ sudo crictl logs $(sudo crictl ps --pod=$(sudo crictl pods --name=etcd-member --quiet) --quiet)
 ----
 
-. If the previous command fails, ensure that Kublet created the `etcd` pods by running the following command:
+. If the previous command fails, ensure that Kubelet created the `etcd` pods by running the following command:
 +
 [source,terminal]
 ----
diff --git a/modules/kmm-validation-status.adoc b/modules/kmm-validation-status.adoc
index 970b8f5aab..34a3c66c28 100644
--- a/modules/kmm-validation-status.adoc
+++ b/modules/kmm-validation-status.adoc
@@ -25,4 +25,4 @@ A `PreflightValidationOCP` resource reports the status and progress of each modu
 * `true`: Verified
 * `false`: Verification failed
 * `error`: Error during the verification process
-* `unknown`: Verfication has not started
+* `unknown`: Verification has not started
diff --git a/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc b/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc
index 507a590d7c..a65cc48969 100644
--- a/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc
+++ b/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc
@@ -16,7 +16,7 @@ The `Policy` CR that is created to install {lvms} is also applied to the cluster
 .Prerequisites
 * You have access to the {rh-rhacm} cluster using an account with `cluster-admin` and Operator installation permissions.
 * You have dedicated disks that {lvms} can use on each cluster.
-* The cluster must be be managed by {rh-rhacm}.
+* The cluster must be managed by {rh-rhacm}.
 
 .Procedure
 
@@ -129,8 +129,8 @@ spec:
 $ oc create -f -n
 ----
 +
-Upon creating the `Policy` CR, the following custom resources are created on the clusters that match the selection criteria configured in the `PlacementRule` CR: 
+Upon creating the `Policy` CR, the following custom resources are created on the clusters that match the selection criteria configured in the `PlacementRule` CR:
 
 * `Namespace`
 * `OperatorGroup`
-* `Subscription`
\ No newline at end of file
+* `Subscription`
diff --git a/modules/lvms-restoring-volume-snapshots.adoc b/modules/lvms-restoring-volume-snapshots.adoc
index 5aab9a9fb6..716426644a 100644
--- a/modules/lvms-restoring-volume-snapshots.adoc
+++ b/modules/lvms-restoring-volume-snapshots.adoc
@@ -6,7 +6,7 @@
 
 [id="lvms-restoring-volume-snapshots_{context}"]
 = Restoring volume snapshots
 
-To restore a volume snapshot, you must create a persistent volume claim (PVC) with the `dataSource.name` field set to the name of the volume snapshot. 
+To restore a volume snapshot, you must create a persistent volume claim (PVC) with the `dataSource.name` field set to the name of the volume snapshot.
 
 The restored PVC is independent of the volume snapshot and the source PVC.
 
@@ -43,9 +43,9 @@ spec:
 ----
 <1> Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the stoage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
 <2> Set this field to the value of the `storageClassName` field in the source PVC of the volume snapshot that you want to restore.
-<3> Set this field to the name of the volume snapshot that you want to restore. 
+<3> Set this field to the name of the volume snapshot that you want to restore.
 
-. Create the PVC in the namespace where you created the the volume snapshot by running the following command:
+. Create the PVC in the namespace where you created the volume snapshot by running the following command:
 +
 [source,terminal]
 ----
diff --git a/modules/microshift-audit-logs-config-intro.adoc b/modules/microshift-audit-logs-config-intro.adoc
index a83b7609dc..c5a7051289 100644
--- a/modules/microshift-audit-logs-config-intro.adoc
+++ b/modules/microshift-audit-logs-config-intro.adoc
@@ -27,7 +27,7 @@ You can set fields in combination to define a maximum storage limit for retained
 |Audit log parameter|Default setting|Definition
 |`maxFileAge`:|`0`|How long log files are retained before automatic deletion. The default value means that a log file is never deleted based on age. This value can be configured.
 |`maxFiles`:|`10`|The total number of log files retained. By default, {microshift-short} retains 10 log files. The oldest is deleted when an excess file is created. This value can be configured.
-|`maxFileSize`:|`200`|By default, when the `audit.log` file reaches the `maxFileSize` limit, the `audit.log` file is rotated and {microshift-short} begins writing to a new `audit.log` file. This value in in megabytes and can be configured.
+|`maxFileSize`:|`200`|By default, when the `audit.log` file reaches the `maxFileSize` limit, the `audit.log` file is rotated and {microshift-short} begins writing to a new `audit.log` file. This value is in megabytes and can be configured.
 |`profile`:|`Default`|The `Default` profile setting only logs metadata for read and write requests; request bodies are not logged except for OAuth access token requests. If you do not specify this field, the `Default` profile is used.
 |===
 
diff --git a/modules/multi-architecture-enabling-64k-pages.adoc b/modules/multi-architecture-enabling-64k-pages.adoc
index 43a1443947..81e295e723 100644
--- a/modules/multi-architecture-enabling-64k-pages.adoc
+++ b/modules/multi-architecture-enabling-64k-pages.adoc
@@ -5,7 +5,7 @@
 
 :_mod-docs-content-type: PROCEDURE
 [id="multi-architecture-enabling-64k-pages_{context}"]
-= Enabling 64k pages on the {op-system-first} kernel 
+= Enabling 64k pages on the {op-system-first} kernel
 
 You can enable the 64k memory page in the {op-system-first} kernel on the 64-bit ARM compute machines in your cluster. The 64k page size kernel specification can be used for large GPU or high memory workloads. This is done using the Machine Config Operator (MCO) which uses a machine config pool to update the kernel. To enable 64k page sizes, you must dedicate a machine config pool for ARM64 to enable on the kernel.
 
@@ -14,11 +14,11 @@ You can enable the 64k memory page in the {op-system-first} kernel on the 64-bit
 Using 64k pages is exclusive to 64-bit ARM architecture compute nodes or clusters installed on 64-bit ARM machines. If you configure the 64k pages kernel on a machine config pool using 64-bit x86 machines, the machine config pool and MCO will degrade.
 ====
 
-.Prerequisites:
+.Prerequisites
 * You installed the OpenShift CLI (`oc`).
 * You created a cluster with compute nodes of different architecture on one of the supported platforms.
 
-.Procedure:
+.Procedure
 
 . Label the nodes where you want to run the 64k page size kernel:
 [source,terminal]
@@ -27,7 +27,7 @@ Using 64k pages is exclusive to 64-bit ARM architecture compute nodes or cluster
 $ oc label node