diff --git a/metering/metering-troubleshooting-debugging.adoc b/metering/metering-troubleshooting-debugging.adoc
index 646bdcd477..364909ad42 100644
--- a/metering/metering-troubleshooting-debugging.adoc
+++ b/metering/metering-troubleshooting-debugging.adoc
@@ -9,7 +9,7 @@ Use the following sections to help troubleshoot and debug specific issues with m
 
 In addition to the information in this section, be sure to review the following topics:
 
-* xref:../metering/metering-installing-metering.adoc#metering-install-prerequisites_installing-metering[Prerequities for installing metering].
+* xref:../metering/metering-installing-metering.adoc#metering-install-prerequisites_installing-metering[Prerequisites for installing metering].
 * xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[About configuring metering]
 
 include::modules/metering-troubleshooting.adoc[leveloffset=+1]
diff --git a/modules/cluster-logging-collector-envvar.adoc b/modules/cluster-logging-collector-envvar.adoc
index ccc0fe7ed8..b88f2c69df 100644
--- a/modules/cluster-logging-collector-envvar.adoc
+++ b/modules/cluster-logging-collector-envvar.adoc
@@ -11,7 +11,7 @@ collector.
 
 See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in Github for lists of the available environment variables.
 
-.Prerequisite
+.Prerequisites
 
 * Set OpenShift Logging to the unmanaged state. Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
diff --git a/modules/cluster-logging-curator-delete-index.adoc b/modules/cluster-logging-curator-delete-index.adoc
index e94ee769b2..0054703792 100644
--- a/modules/cluster-logging-curator-delete-index.adoc
+++ b/modules/cluster-logging-curator-delete-index.adoc
@@ -11,7 +11,7 @@ This file can be expanded when more functions are supported. Only delete_indices
 
 You can configure Curator to delete Elasticsearch data that uses the data model prior to {product-title} {product-version}. You can configure per-project and global settings. Global settings apply to any project not specified. Per-project settings override global settings.
 
-.Prerequisite
+.Prerequisites
 
 * OpenShift Logging must be installed.
diff --git a/modules/cluster-logging-manual-rollout-rolling.adoc b/modules/cluster-logging-manual-rollout-rolling.adoc
index 19b4e414bc..37e3588000 100644
--- a/modules/cluster-logging-manual-rollout-rolling.adoc
+++ b/modules/cluster-logging-manual-rollout-rolling.adoc
@@ -11,7 +11,7 @@ or any of the `elasticsearch-*` deployment configurations.
 
 Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs requires a reboot.
 
-.Prerequisite
+.Prerequisites
 
 * OpenShift Logging and Elasticsearch must be installed.
diff --git a/modules/ldap-syncing-running-subset.adoc b/modules/ldap-syncing-running-subset.adoc
index 3aed926b7b..9c6a1de7ff 100644
--- a/modules/ldap-syncing-running-subset.adoc
+++ b/modules/ldap-syncing-running-subset.adoc
@@ -17,7 +17,7 @@ These guidelines apply to groups found on LDAP servers as well as groups
 already present in {product-title}.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Create a sync configuration file.
diff --git a/modules/nodes-nodes-managing-max-pods-proc.adoc b/modules/nodes-nodes-managing-max-pods-proc.adoc
index 283b518150..8a510c7837 100644
--- a/modules/nodes-nodes-managing-max-pods-proc.adoc
+++ b/modules/nodes-nodes-managing-max-pods-proc.adoc
@@ -10,7 +10,7 @@ Two parameters control the maximum number of pods that can be scheduled to a nod
 
 For example, if `podsPerCore` is set to `10` on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40.
 
-.Prerequisite
+.Prerequisites
 
 . Obtain the label associated with the static `MachineConfigPool` CRD for the type of node you want to configure. Perform one of the following steps:
diff --git a/modules/nodes-scheduler-node-selectors-pod.adoc b/modules/nodes-scheduler-node-selectors-pod.adoc
index 929e33138a..0eb402c893 100644
--- a/modules/nodes-scheduler-node-selectors-pod.adoc
+++ b/modules/nodes-scheduler-node-selectors-pod.adoc
@@ -20,7 +20,7 @@ Adding the label to the machine set ensures that new nodes or machines will have
 You cannot add a node selector directly to an existing scheduled pod.
 ====
 
-.Prerequisite
+.Prerequisites
 
 To add a node selector to existing pods, determine the controlling object for that pod.
 For example, the `router-default-66d5cf9464-m2g75` pod is controlled by the `router-default-66d5cf9464`
diff --git a/modules/odc-connecting-components.adoc b/modules/odc-connecting-components.adoc
index b1d49ea974..3799e05bc5 100644
--- a/modules/odc-connecting-components.adoc
+++ b/modules/odc-connecting-components.adoc
@@ -56,7 +56,7 @@ You can establish a binding connection with Operator-backed components.
 
 This procedure walks through an example of creating a binding connection between a PostgreSQL Database service and a Node.js application. To create a binding connection with a service that is backed by the PostgreSQL Database Operator, you must first add the Red Hat-provided PostgreSQL Database Operator to the *OperatorHub* using a `CatalogSource` resource, and then install the Operator. The PostreSQL Database Operator then creates and manages the `Database` resource, which exposes the binding information in secrets, config maps, status, and spec attributes.
 
-.Prerequisite
+.Prerequisites
 
 * Ensure that you have created and deployed a Node.js application using the *Developer* perspective.
 * Ensure that you have installed the *Service Binding Operator* from OperatorHub.
diff --git a/modules/persistent-storage-cinder-volume-security.adoc b/modules/persistent-storage-cinder-volume-security.adoc
index a5081bd435..1242c020c2 100644
--- a/modules/persistent-storage-cinder-volume-security.adoc
+++ b/modules/persistent-storage-cinder-volume-security.adoc
@@ -8,7 +8,7 @@ If you use Cinder PVs in your application, configure security for their
 deployment configurations.
 
-.Prerequisite
+.Prerequisites
 
 - An SCC must be created that uses the appropriate `fsGroup` strategy.
 
 .Procedure
diff --git a/modules/persistent-storage-local-pvc.adoc b/modules/persistent-storage-local-pvc.adoc
index f90fa2b628..16a645b2e3 100644
--- a/modules/persistent-storage-local-pvc.adoc
+++ b/modules/persistent-storage-local-pvc.adoc
@@ -8,7 +8,7 @@ Local volumes must be statically created as a persistent volume claim (PVC)
 to be accessed by the pod.
 
-.Prerequisite
+.Prerequisites
 
 * Persistent volumes have been created using the local volume provisioner.
diff --git a/modules/persistent-storage-local-removing-devices.adoc b/modules/persistent-storage-local-removing-devices.adoc
index 3dcfacee96..87a3cf0d6f 100644
--- a/modules/persistent-storage-local-removing-devices.adoc
+++ b/modules/persistent-storage-local-removing-devices.adoc
@@ -12,7 +12,7 @@ Occasionally, local volumes and local volume sets must be deleted. While removin
 The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * The persistent volume must be in a `Released` or `Available` state.
 +