diff --git a/installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc b/installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc
index c2141aa5c4..0c2d1c99f2 100644
--- a/installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc
+++ b/installing/installing_bare_metal_ipi/ipi-install-troubleshooting.adoc
@@ -16,7 +16,7 @@ _Workflow 1 of 4_ illustrates a troubleshooting workflow when the `install-confi
 
 image:flow2.png[Flow-Diagram-2]
 
-_Workflow 2 of 4_ illustrates a troubleshooting workflow for xref:ipi-install-troubleshooting-bootstrap-vm_{context}[ bootstrap VM issues], xref:ipi-install-troubleshooting-bootstrap-vm-cannot-boot_{context}[ bootstrap VMs that cannot boot up the cluster nodes], and xref:ipi-install-troubleshooting-bootstrap-vm-inspecting-logs_{context}[ inspecting logs]. When installing a {product-title} cluster without the `provisioning` network, this workflow does not apply.
+_Workflow 2 of 4_ illustrates a troubleshooting workflow for xref:ipi-install-troubleshooting-bootstrap-vm_{context}[ bootstrap VM issues], xref:ipi-install-troubleshooting-bootstrap-vm-cannot-boot_{context}[ bootstrap VMs that cannot boot up the cluster nodes], and xref:ipi-install-troubleshooting-bootstrap-vm-inspecting-logs_{context}[ inspecting logs]. When installing an {product-title} cluster without the `provisioning` network, this workflow does not apply.
 
 image:flow3.png[Flow-Diagram-3]
 
diff --git a/modules/common-attributes.adoc b/modules/common-attributes.adoc
index 193f4481b5..4068fb8ca1 100644
--- a/modules/common-attributes.adoc
+++ b/modules/common-attributes.adoc
@@ -1,6 +1,6 @@
 // The {product-title} attribute provides the context-sensitive name of the relevant OpenShift distribution, for example, "OpenShift Container Platform" or "OKD". The {product-version} attribute provides the product version relative to the distribution, for example "4.9".
 // {product-title} and {product-version} are parsed when AsciiBinder queries the _distro_map.yml file in relation to the base branch of a pull request.
-// See https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc#product-name-version for more information on this topic.
+// See https://github.com/openshift/openshift-docs/blob/main/contributing_to_docs/doc_guidelines.adoc#product-name-and-version for more information on this topic.
 // Other common attributes are defined in the following lines:
 :data-uri:
 :icons:
diff --git a/modules/crd-creating-aggregated-cluster-roles.adoc b/modules/crd-creating-aggregated-cluster-roles.adoc
index 2d9623e383..435128addb 100644
--- a/modules/crd-creating-aggregated-cluster-roles.adoc
+++ b/modules/crd-creating-aggregated-cluster-roles.adoc
@@ -20,7 +20,7 @@
 
 .Procedure
 
-. Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. A {product-title} controller adds the rules that you specify to the default cluster roles.
+. Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An {product-title} controller adds the rules that you specify to the default cluster roles.
 +
 .Example YAML file for a cluster role definition
 [source,yaml]
diff --git a/modules/infrastructure-node-sizing.adoc b/modules/infrastructure-node-sizing.adoc
index 2a48241892..06270615bc 100644
--- a/modules/infrastructure-node-sizing.adoc
+++ b/modules/infrastructure-node-sizing.adoc
@@ -33,7 +33,7 @@ In general, three infrastructure nodes are recommended per cluster.
 
 [IMPORTANT]
 ====
-These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on a {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly.
+These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on an {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly.
 
 These sizing recommendations are only applicable for the Prometheus, Router, and Registry infrastructure components, which are installed during cluster installation. Logging is a day-two operation and is not included in these recommendations.
 ====
diff --git a/modules/ipi-install-troubleshooting-bootstrap-vm.adoc b/modules/ipi-install-troubleshooting-bootstrap-vm.adoc
index 4f02c1ff9d..8a12ee38e2 100644
--- a/modules/ipi-install-troubleshooting-bootstrap-vm.adoc
+++ b/modules/ipi-install-troubleshooting-bootstrap-vm.adoc
@@ -76,7 +76,7 @@ localhost login:
 +
 [IMPORTANT]
 ====
-When deploying a {product-title} cluster without the `provisioning` network, you must use a public IP address and not a private IP address like `172.22.0.2`.
+When deploying an {product-title} cluster without the `provisioning` network, you must use a public IP address and not a private IP address like `172.22.0.2`.
 ====
 
diff --git a/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc b/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc
index 8368804fb3..59c1077cc2 100644
--- a/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc
+++ b/modules/ipi-install-troubleshooting-cluster-nodes-will-not-pxe.adoc
@@ -5,7 +5,7 @@
 
 = Cluster nodes will not PXE boot
 
-When {product-title} cluster nodes will not PXE boot, execute the following checks on the cluster nodes that will not PXE boot. This procedure does not apply when installing a {product-title} cluster without the `provisioning` network.
+When {product-title} cluster nodes will not PXE boot, execute the following checks on the cluster nodes that will not PXE boot. This procedure does not apply when installing an {product-title} cluster without the `provisioning` network.
 
 .Procedure
 
diff --git a/modules/machineset-osp-adding-bare-metal.adoc b/modules/machineset-osp-adding-bare-metal.adoc
index 24ffcaaf9c..6da48fa214 100644
--- a/modules/machineset-osp-adding-bare-metal.adoc
+++ b/modules/machineset-osp-adding-bare-metal.adoc
@@ -19,7 +19,7 @@ Bare-metal compute machines are not supported on clusters that use Kuryr.
 
 * Bare metal is available as link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/bare_metal_provisioning/sect-configure#creating_the_bare_metal_flavor[an {rh-openstack} flavor].
 
-* You deployed a {product-title} cluster on installer-provisioned infrastructure.
+* You deployed an {product-title} cluster on installer-provisioned infrastructure.
 
 * Your {rh-openstack} cloud provider is configured to route traffic between the installer-created VM subnet and the pre-existing bare metal subnet.
 
diff --git a/modules/olm-dependency-resolution-preferences.adoc b/modules/olm-dependency-resolution-preferences.adoc
index 91ece1fccc..6ba6a994cd 100644
--- a/modules/olm-dependency-resolution-preferences.adoc
+++ b/modules/olm-dependency-resolution-preferences.adoc
@@ -37,7 +37,7 @@ There are two rules that govern catalog preference:
 
 [id="olm-dependency-catalog-ordering_{context}"]
 == Channel ordering
 
-An Operator package in a catalog is a collection of update channels that a user can subscribe to in a {product-title} cluster. Channels can be used to provide a particular stream of updates for a minor release (`1.2`, `1.3`) or a release frequency (`stable`, `fast`).
+An Operator package in a catalog is a collection of update channels that a user can subscribe to in an {product-title} cluster. Channels can be used to provide a particular stream of updates for a minor release (`1.2`, `1.3`) or a release frequency (`stable`, `fast`).
 
 It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version `1.2` of an Operator might exist in both the `stable` and `fast` channels.
diff --git a/modules/olm-installing-operators-from-operatorhub-configure.adoc b/modules/olm-installing-operators-from-operatorhub-configure.adoc
index b9c3f04f6d..07111167c4 100644
--- a/modules/olm-installing-operators-from-operatorhub-configure.adoc
+++ b/modules/olm-installing-operators-from-operatorhub-configure.adoc
@@ -19,7 +19,7 @@ In {product-title}, Red Hat Operators are not available by default. You can acce
 
 .Procedure
 
-To access the Red Hat Operators in a {product-title} cluster:
+To access the Red Hat Operators in an {product-title} cluster:
 
 . Edit the `OperatorHub` CR using the web console or CLI:
 
diff --git a/modules/psap-node-feature-discovery-operator.adoc b/modules/psap-node-feature-discovery-operator.adoc
index ad86bf9bea..1f06a96a65 100644
--- a/modules/psap-node-feature-discovery-operator.adoc
+++ b/modules/psap-node-feature-discovery-operator.adoc
@@ -20,7 +20,7 @@ ifdef::operators[]
 [discrete]
 == Purpose
 endif::operators[]
-The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in a {product-title} cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on.
+The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an {product-title} cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on.
 
 The NFD Operator can be found on the Operator Hub by searching for “Node Feature Discovery”.
 ifdef::operators[]
diff --git a/post_installation_configuration/network-configuration.adoc b/post_installation_configuration/network-configuration.adoc
index b97bbc2de8..2d65db04a1 100644
--- a/post_installation_configuration/network-configuration.adoc
+++ b/post_installation_configuration/network-configuration.adoc
@@ -100,7 +100,7 @@ include::modules/router-performance-optimizations.adoc[leveloffset=+2]
 
 [id="post-installation-osp-fips"]
 == Post-installation {rh-openstack} network configuration
 
-You can configure some aspects of a {product-title} on {rh-openstack-first} cluster after installation.
+You can configure some aspects of an {product-title} on {rh-openstack-first} cluster after installation.
 
 include::modules/installation-osp-configuring-api-floating-ip.adoc[leveloffset=+2]
 include::modules/installation-osp-kuryr-port-pools.adoc[leveloffset=+2]
diff --git a/serverless/admin_guide/serverless-cluster-admin-serving.adoc b/serverless/admin_guide/serverless-cluster-admin-serving.adoc
index 9c0ab4022b..51451679dd 100644
--- a/serverless/admin_guide/serverless-cluster-admin-serving.adoc
+++ b/serverless/admin_guide/serverless-cluster-admin-serving.adoc
@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
 
 toc::[]
 
-If you have cluster administrator permissions on a {product-title} cluster, you can create Knative Serving components with {ServerlessProductName} in the *Administrator* perspective of the web console or by using the `kn` and `oc` CLIs.
+If you have cluster administrator permissions on an {product-title} cluster, you can create Knative Serving components with {ServerlessProductName} in the *Administrator* perspective of the web console or by using the `kn` and `oc` CLIs.
 
 // Create services as an admin
 include::modules/creating-serverless-apps-admin-console.adoc[leveloffset=+1]
diff --git a/serverless/functions/serverless-functions-eventing.adoc b/serverless/functions/serverless-functions-eventing.adoc
index 8402023b79..eae4bafadc 100644
--- a/serverless/functions/serverless-functions-eventing.adoc
+++ b/serverless/functions/serverless-functions-eventing.adoc
@@ -9,6 +9,6 @@ toc::[]
 
 :FeatureName: {FunctionsProductName}
 include::modules/technology-preview.adoc[leveloffset=+2]
-Functions are deployed as a Knative service on a {product-title} cluster, and can be connected as a sink to Knative Eventing components.
+Functions are deployed as a Knative service on an {product-title} cluster, and can be connected as a sink to Knative Eventing components.
 
 include::modules/serverless-connect-sink-source-odc.adoc[leveloffset=+1]
diff --git a/serverless/knative_serving/serverless-configuring-routes.adoc b/serverless/knative_serving/serverless-configuring-routes.adoc
index 5ab93e2082..5633909c50 100644
--- a/serverless/knative_serving/serverless-configuring-routes.adoc
+++ b/serverless/knative_serving/serverless-configuring-routes.adoc
@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
 
 toc::[]
 
-Knative leverages {product-title} TLS termination to provide routing for Knative services. When a Knative service is created, a {product-title} route is automatically created for the service. This route is managed by the {ServerlessOperatorName}. The {product-title} route exposes the Knative service through the same domain as the {product-title} cluster.
+Knative leverages {product-title} TLS termination to provide routing for Knative services. When a Knative service is created, an {product-title} route is automatically created for the service. This route is managed by the {ServerlessOperatorName}. The {product-title} route exposes the Knative service through the same domain as the {product-title} cluster.
 
 You can disable Operator control of {product-title} routing so that you can configure a Knative route to directly use your TLS certificates instead.
 
diff --git a/storage/container_storage_interface/persistent-storage-csi-ebs.adoc b/storage/container_storage_interface/persistent-storage-csi-ebs.adoc
index 2444c917b1..310936f14d 100644
--- a/storage/container_storage_interface/persistent-storage-csi-ebs.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-ebs.adoc
@@ -19,7 +19,7 @@ To create CSI-provisioned PVs that mount to AWS EBS storage assets, {product-tit
 
 [NOTE]
 ====
-If you installed the AWS EBS CSI Operator and driver on a {product-title} 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to {product-title} {product-version}.
+If you installed the AWS EBS CSI Operator and driver on an {product-title} 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to {product-title} {product-version}.
 ====
 
 include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
diff --git a/storage/persistent_storage/osd-persistent-storage-aws.adoc b/storage/persistent_storage/osd-persistent-storage-aws.adoc
index 2de2bb26cc..7a79a56c18 100644
--- a/storage/persistent_storage/osd-persistent-storage-aws.adoc
+++ b/storage/persistent_storage/osd-persistent-storage-aws.adoc
@@ -23,7 +23,7 @@ The high-level process to enable EFS on a cluster is:
 == Prerequisites
 
 ifdef::openshift-dedicated[]
-* Customer Cloud Subscription (CCS) for a {product-title} cluster
+* Customer Cloud Subscription (CCS) for an {product-title} cluster
 endif::[]
 
 ifdef::openshift-rosa[]