
Fixing indefinite article errors for {product-title}.

Authored by Samantha Gidlow on 2021-12-09 10:12:56 -05:00
Committed by Aidan Reilly
parent 9e186c6cda
commit 1195e78b8c
16 changed files with 16 additions and 16 deletions

@@ -16,7 +16,7 @@ _Workflow 1 of 4_ illustrates a troubleshooting workflow when the `install-confi
 image:flow2.png[Flow-Diagram-2]
-_Workflow 2 of 4_ illustrates a troubleshooting workflow for xref:ipi-install-troubleshooting-bootstrap-vm_{context}[ bootstrap VM issues], xref:ipi-install-troubleshooting-bootstrap-vm-cannot-boot_{context}[ bootstrap VMs that cannot boot up the cluster nodes], and xref:ipi-install-troubleshooting-bootstrap-vm-inspecting-logs_{context}[ inspecting logs]. When installing a {product-title} cluster without the `provisioning` network, this workflow does not apply.
+_Workflow 2 of 4_ illustrates a troubleshooting workflow for xref:ipi-install-troubleshooting-bootstrap-vm_{context}[ bootstrap VM issues], xref:ipi-install-troubleshooting-bootstrap-vm-cannot-boot_{context}[ bootstrap VMs that cannot boot up the cluster nodes], and xref:ipi-install-troubleshooting-bootstrap-vm-inspecting-logs_{context}[ inspecting logs]. When installing an {product-title} cluster without the `provisioning` network, this workflow does not apply.
 image:flow3.png[Flow-Diagram-3]

@@ -1,6 +1,6 @@
 // The {product-title} attribute provides the context-sensitive name of the relevant OpenShift distribution, for example, "OpenShift Container Platform" or "OKD". The {product-version} attribute provides the product version relative to the distribution, for example "4.9".
 // {product-title} and {product-version} are parsed when AsciiBinder queries the _distro_map.yml file in relation to the base branch of a pull request.
-// See https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc#product-name-version for more information on this topic.
+// See https://github.com/openshift/openshift-docs/blob/main/contributing_to_docs/doc_guidelines.adoc#product-name-and-version for more information on this topic.
 // Other common attributes are defined in the following lines:
 :data-uri:
 :icons:

@@ -20,7 +20,7 @@ endif::[]
 .Procedure
-. Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. A {product-title} controller adds the rules that you specify to the default cluster roles.
+. Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An {product-title} controller adds the rules that you specify to the default cluster roles.
 +
 .Example YAML file for a cluster role definition
 [source,yaml]
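
The `[source,yaml]` block that this hunk truncates is a cluster role definition. As a minimal sketch of the shape such a file takes, assuming a hypothetical `crontabs` CRD in the `stable.example.com` group (the names here are illustrative, not taken from this commit):

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregate-cron-tabs-edit
  labels:
    # Aggregation labels: the cluster controller merges these rules
    # into the default admin and edit cluster roles.
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["stable.example.com"] # API group of the example CRD
  resources: ["crontabs"]           # plural resource name of the CRD
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
----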

@@ -33,7 +33,7 @@ In general, three infrastructure nodes are recommended per cluster.
 [IMPORTANT]
 ====
-These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on a {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly.
+These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on an {product-title} {product-version} cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly.
 These sizing recommendations are only applicable for the Prometheus, Router, and Registry infrastructure components, which are installed during cluster installation. Logging is a day-two operation and is not included in these recommendations.
 ====

@@ -76,7 +76,7 @@ localhost login:
 +
 [IMPORTANT]
 ====
-When deploying a {product-title} cluster without the `provisioning` network, you must use a public IP address and not a private IP address like `172.22.0.2`.
+When deploying an {product-title} cluster without the `provisioning` network, you must use a public IP address and not a private IP address like `172.22.0.2`.
 ====

@@ -5,7 +5,7 @@
 = Cluster nodes will not PXE boot
-When {product-title} cluster nodes will not PXE boot, execute the following checks on the cluster nodes that will not PXE boot. This procedure does not apply when installing a {product-title} cluster without the `provisioning` network.
+When {product-title} cluster nodes will not PXE boot, execute the following checks on the cluster nodes that will not PXE boot. This procedure does not apply when installing an {product-title} cluster without the `provisioning` network.
 .Procedure

@@ -19,7 +19,7 @@ Bare-metal compute machines are not supported on clusters that use Kuryr.
 * Bare metal is available as link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/bare_metal_provisioning/sect-configure#creating_the_bare_metal_flavor[an {rh-openstack} flavor].
-* You deployed a {product-title} cluster on installer-provisioned infrastructure.
+* You deployed an {product-title} cluster on installer-provisioned infrastructure.
 * Your {rh-openstack} cloud provider is configured to route traffic between the installer-created VM
 subnet and the pre-existing bare metal subnet.

@@ -37,7 +37,7 @@ There are two rules that govern catalog preference:
 [id="olm-dependency-catalog-ordering_{context}"]
 == Channel ordering
-An Operator package in a catalog is a collection of update channels that a user can subscribe to in a {product-title} cluster. Channels can be used to provide a particular stream of updates for a minor release (`1.2`, `1.3`) or a release frequency (`stable`, `fast`).
+An Operator package in a catalog is a collection of update channels that a user can subscribe to in an {product-title} cluster. Channels can be used to provide a particular stream of updates for a minor release (`1.2`, `1.3`) or a release frequency (`stable`, `fast`).
 It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version `1.2` of an Operator might exist in both the `stable` and `fast` channels.

@@ -19,7 +19,7 @@ In {product-title}, Red Hat Operators are not available by default. You can acce
 .Procedure
-To access the Red Hat Operators in a {product-title} cluster:
+To access the Red Hat Operators in an {product-title} cluster:
 . Edit the `OperatorHub` CR using the web console or CLI:
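
For orientation, the `OperatorHub` CR is a cluster-scoped singleton named `cluster`, so the CLI edit is along the lines of `oc edit operatorhub cluster`. A minimal sketch of a spec that enables the default sources (the field values are illustrative, not taken from this commit):

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: false # keep the default catalog sources enabled
  sources:
  - name: redhat-operators        # the Red Hat Operators catalog source
    disabled: false
----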

@@ -20,7 +20,7 @@ ifdef::operators[]
 [discrete]
 == Purpose
 endif::operators[]
-The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in a {product-title} cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on.
+The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an {product-title} cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on.
 The NFD Operator can be found on the Operator Hub by searching for “Node Feature Discovery”.
 ifdef::operators[]
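
To make "labeling the nodes with hardware-specific information" concrete, NFD publishes labels under the `feature.node.kubernetes.io/` prefix; the specific keys and values below are illustrative examples of that convention, not part of this commit:

[source,yaml]
----
apiVersion: v1
kind: Node
metadata:
  name: worker-0
  labels:
    feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true" # CPU instruction-set feature detected
    feature.node.kubernetes.io/pci-10de.present: "true"  # PCI device with vendor ID 10de is present
    feature.node.kubernetes.io/kernel-version.full: 4.18.0-305.el8.x86_64 # host kernel version
----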

@@ -100,7 +100,7 @@ include::modules/router-performance-optimizations.adoc[leveloffset=+2]
 [id="post-installation-osp-fips"]
 == Post-installation {rh-openstack} network configuration
-You can configure some aspects of a {product-title} on {rh-openstack-first} cluster after installation.
+You can configure some aspects of an {product-title} on {rh-openstack-first} cluster after installation.
 include::modules/installation-osp-configuring-api-floating-ip.adoc[leveloffset=+2]
 include::modules/installation-osp-kuryr-port-pools.adoc[leveloffset=+2]

@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
 toc::[]
-If you have cluster administrator permissions on a {product-title} cluster, you can create Knative Serving components with {ServerlessProductName} in the *Administrator* perspective of the web console or by using the `kn` and `oc` CLIs.
+If you have cluster administrator permissions on an {product-title} cluster, you can create Knative Serving components with {ServerlessProductName} in the *Administrator* perspective of the web console or by using the `kn` and `oc` CLIs.
 // Create services as an admin
 include::modules/creating-serverless-apps-admin-console.adoc[leveloffset=+1]
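
As a sketch of the `oc` route, creating Knative Serving amounts to applying a `KnativeServing` CR into the `knative-serving` namespace; the API version shown is an assumption about what was current around this era of the docs, not something taken from this commit:

[source,yaml]
----
apiVersion: operator.knative.dev/v1alpha1 # assumed API version for this era
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving # the namespace is expected to exist already
----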

@@ -9,6 +9,6 @@ toc::[]
 :FeatureName: {FunctionsProductName}
 include::modules/technology-preview.adoc[leveloffset=+2]
-Functions are deployed as a Knative service on a {product-title} cluster, and can be connected as a sink to Knative Eventing components.
+Functions are deployed as a Knative service on an {product-title} cluster, and can be connected as a sink to Knative Eventing components.
 include::modules/serverless-connect-sink-source-odc.adoc[leveloffset=+1]

@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
 toc::[]
-Knative leverages {product-title} TLS termination to provide routing for Knative services. When a Knative service is created, a {product-title} route is automatically created for the service. This route is managed by the {ServerlessOperatorName}. The {product-title} route exposes the Knative service through the same domain as the {product-title} cluster.
+Knative leverages {product-title} TLS termination to provide routing for Knative services. When a Knative service is created, an {product-title} route is automatically created for the service. This route is managed by the {ServerlessOperatorName}. The {product-title} route exposes the Knative service through the same domain as the {product-title} cluster.
 You can disable Operator control of {product-title} routing so that you can configure a Knative route to directly use your TLS certificates instead.

@@ -19,7 +19,7 @@ To create CSI-provisioned PVs that mount to AWS EBS storage assets, {product-tit
 [NOTE]
 ====
-If you installed the AWS EBS CSI Operator and driver on a {product-title} 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to {product-title} {product-version}.
+If you installed the AWS EBS CSI Operator and driver on an {product-title} 4.5 cluster, you must uninstall the 4.5 Operator and driver before you update to {product-title} {product-version}.
 ====
 include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]

@@ -23,7 +23,7 @@ The high-level process to enable EFS on a cluster is:
 == Prerequisites
 ifdef::openshift-dedicated[]
-* Customer Cloud Subscription (CCS) for a {product-title} cluster
+* Customer Cloud Subscription (CCS) for an {product-title} cluster
 endif::[]
 ifdef::openshift-rosa[]