diff --git a/modules/builds-using-build-volumes.adoc b/modules/builds-using-build-volumes.adoc index 3178054739..db996378cf 100644 --- a/modules/builds-using-build-volumes.adoc +++ b/modules/builds-using-build-volumes.adoc @@ -9,7 +9,7 @@ endif::[] [id="builds-using-build-volumes_{context}"] = Using build volumes -You can mount build volumes to give running builds access to information that you don't want to persist in the output container image. +You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from xref:../../cicd/builds/creating-build-inputs.adoc#builds-define-build-inputs_creating-build-inputs[build inputs], whose data can persist in the output container image. @@ -55,7 +55,7 @@ spec: attribute: value ---- <1> Required. A unique name. -<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and doesn't collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images. +<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and must not collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images. <3> Required. The type of source, `ConfigMap`, `Secret`, or `CSI`. <4> Required. The name of the source. <5> Required. The driver that provides the ephemeral CSI volume. @@ -105,7 +105,7 @@ spec: ---- <1> Required. A unique name. -<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and doesn't collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images. 
+<2> Required. The absolute path of the mount point. It must not contain `..` or `:` and must not collide with the destination path generated by the builder. The `/opt/app-root/src` is the default home directory for many Red Hat S2I-enabled images. <3> Required. The type of source, `ConfigMap`, `Secret`, or `CSI`. <4> Required. The name of the source. <5> Required. The driver that provides the ephemeral CSI volume. diff --git a/modules/cluster-cloud-controller-config-osp.adoc b/modules/cluster-cloud-controller-config-osp.adoc index 36fff4ed70..35f1fc6d44 100644 --- a/modules/cluster-cloud-controller-config-osp.adoc +++ b/modules/cluster-cloud-controller-config-osp.adoc @@ -191,7 +191,7 @@ This option is unsupported if you use {rh-openstack} earlier than version 17 wit // | The id of the loadbalancer flavor to use. Uses octavia default if not set. // | `availability-zone` -// | The name of the loadbalancer availability zone to use. The Octavia availability zone capabilities will not be used if it is not set. The parameter will be ignored if the Octavia version doesn't support availability zones yet. +// | The name of the loadbalancer availability zone to use. The Octavia availability zone capabilities will not be used if it is not set. The parameter will be ignored if the Octavia version does not support availability zones yet. | `LoadBalancerClass "ClassName"` a| This is a config section that comprises a set of options: diff --git a/modules/cluster-logging-elasticsearch-rules.adoc b/modules/cluster-logging-elasticsearch-rules.adoc index 80494cfb6c..0ebf37a932 100644 --- a/modules/cluster-logging-elasticsearch-rules.adoc +++ b/modules/cluster-logging-elasticsearch-rules.adoc @@ -17,7 +17,7 @@ You can view these alerting rules in the {product-title} web console. |`ElasticsearchClusterNotHealthy` |The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master - node hasn't been elected yet. 
+ node has not been elected yet. |Critical |`ElasticsearchClusterNotHealthy` diff --git a/modules/cluster-logging-exported-fields-top-level-fields.adoc b/modules/cluster-logging-exported-fields-top-level-fields.adoc index bde30bebfa..1af9477633 100644 --- a/modules/cluster-logging-exported-fields-top-level-fields.adoc +++ b/modules/cluster-logging-exported-fields-top-level-fields.adoc @@ -79,7 +79,7 @@ The following values come from link:http://sourceware.org/git/?p=glibc.git;a=blo The two following values are not part of `syslog.h` but are widely used: * `8` = `trace`, trace-level messages, which are more verbose than `debug` messages. -* `9` = `unknown`, when the logging system gets a value it doesn't recognize. +* `9` = `unknown`, when the logging system gets a value it does not recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from link:https://docs.python.org/2.7/library/logging.html#logging-levels[python logging], you can match `CRITICAL` with `crit`, `ERROR` with `err`, and so on. diff --git a/modules/cnf-configure_for_irq_dynamic_load_balancing.adoc b/modules/cnf-configure_for_irq_dynamic_load_balancing.adoc index 6f69a519ba..9d7b8e4d88 100644 --- a/modules/cnf-configure_for_irq_dynamic_load_balancing.adoc +++ b/modules/cnf-configure_for_irq_dynamic_load_balancing.adoc @@ -118,7 +118,7 @@ Starting pod/-debug ... To use host binaries, run `chroot /host` Pod IP: -If you don't see a command prompt, try pressing enter. +If you do not see a command prompt, try pressing enter. sh-4.4# ---- diff --git a/modules/creating-your-first-content.adoc b/modules/creating-your-first-content.adoc index 20316e7493..1b94268a79 100644 --- a/modules/creating-your-first-content.adoc +++ b/modules/creating-your-first-content.adoc @@ -96,7 +96,7 @@ Topics: ---- . On the command line, run `asciibinder` from the root folder of openshift-docs. 
-You don't have to add or commit your changes for asciibinder to run. +You do not have to add or commit your changes for asciibinder to run. . After the asciibinder build completes, open up your browser and navigate to /openshift-docs/_preview/openshift-enterprise/my_first_mod_docs/my_guide/assembly_my-first-assembly.html @@ -105,4 +105,4 @@ You don't have to add or commit your changes for asciibinder to run. contents from your module `My First Module`. NOTE: You can delete this branch now if you are done testing. This branch -shouldn't be submitted to the upstream openshift-docs repository. +should not be submitted to the upstream openshift-docs repository. diff --git a/modules/dynamic-plugin-api.adoc b/modules/dynamic-plugin-api.adoc index f47d587140..ce5714ffc8 100644 --- a/modules/dynamic-plugin-api.adoc +++ b/modules/dynamic-plugin-api.adoc @@ -122,7 +122,7 @@ Component for displaying an error status popover. |Parameter Name |Description |`title` |(optional) status text |`iconOnly` |(optional) if true, only displays icon -|`noTooltip` |(optional) if true, tooltip won't be displayed +|`noTooltip` |(optional) if true, tooltip is not displayed |`className` |(optional) additional class name for the component |`popoverTitle` |(optional) title for popover |=== @@ -143,7 +143,7 @@ Component for displaying an information status popover. |Parameter Name |Description |`title` |(optional) status text |`iconOnly` |(optional) if true, only displays icon -|`noTooltip` |(optional) if true, tooltip won't be displayed +|`noTooltip` |(optional) if true, tooltip is not displayed |`className` |(optional) additional class name for the component |`popoverTitle` |(optional) title for popover |=== @@ -164,7 +164,7 @@ Component for displaying a progressing status popover. 
|Parameter Name |Description |`title` |(optional) status text |`iconOnly` |(optional) if true, only displays icon -|`noTooltip` |(optional) if true, tooltip won't be displayed +|`noTooltip` |(optional) if true, tooltip is not displayed |`className` |(optional) additional class name for the component |`popoverTitle` |(optional) title for popover |=== @@ -185,7 +185,7 @@ Component for displaying a success status popover. |Parameter Name |Description |`title` |(optional) status text |`iconOnly` |(optional) if true, only displays icon -|`noTooltip` |(optional) if true, tooltip won't be displayed +|`noTooltip` |(optional) if true, tooltip is not displayed |`className` |(optional) additional class name for the component |`popoverTitle` |(optional) title for popover |=== @@ -219,7 +219,7 @@ Hook that provides information about user access to a given resource. It returns React hook for consuming Console extensions with resolved `CodeRef` properties. This hook accepts the same argument(s) as `useExtensions` hook and returns an adapted list of extension instances, resolving all code references within each extension's properties. -Initially, the hook returns an empty array. After the resolution is complete, the React component is re-rendered with the hook returning an adapted list of extensions. When the list of matching extensions changes, the resolution is restarted. The hook will continue to return the previous result until the resolution completes. +Initially, the hook returns an empty array. After the resolution is complete, the React component is re-rendered with the hook returning an adapted list of extensions. When the list of matching extensions changes, the resolution is restarted. The hook continues to return the previous result until the resolution completes. The hook's result elements are guaranteed to be referentially stable across re-renders. 
It returns a tuple containing a list of adapted extension instances with resolved code references, a boolean flag indicating whether the resolution is complete, and a list of errors detected during the resolution. @@ -364,7 +364,7 @@ A hook that provides a list of user-selected active TableColumns. TableColumns |`\{boolean} [options.showNamespaceOverride]` |(optional) If true, a -namespace column will be included, regardless of column management +namespace column is included, regardless of column management selections |`\{string} [options.columnManagementID]` |(optional) A unique ID @@ -757,7 +757,7 @@ const Component: React.FC = () => { |=== |Parameter Name |Description |`initResources` |Resources must be watched as key-value pair, -wherein key will be unique to resource and value will be options needed +wherein the key is unique to the resource and the value is the options needed to watch for the respective resource. |=== @@ -837,8 +837,8 @@ model. In case of failure, the promise gets rejected with HTTP error response. |`options.model` |k8s model -|`options.name` |The name of the resource, if not provided then it will -look for all the resources matching the model. +|`options.name` |The name of the resource. If not provided, it +looks for all the resources matching the model. |`options.ns` | The namespace to look into, should not be specified for cluster-scoped resources. @@ -949,7 +949,7 @@ request headers, method, redirect, etc. See link:{power-bi-url}[Interface Reques |`options.json` |Can control garbage collection of resources -explicitly if provided else will default to model's "propagationPolicy". +explicitly if provided; otherwise, it defaults to the model's "propagationPolicy". |=== [discrete] @@ -990,7 +990,7 @@ Provides apiVersion for a k8s model. [discrete] == `getGroupVersionKindForResource` -Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. 
If the resource does not have an API group, group "core" will be returned. If the resource has an invalid apiVersion, then it will throw an Error. +Provides a group, version, and kind for a resource. It returns the group, version, kind for the provided resource. If the resource does not have an API group, group "core" is returned. If the resource has an invalid apiVersion, then it throws an Error. [cols=",",options="header",] |=== @@ -1001,7 +1001,7 @@ Provides a group, version, and kind for a resource. It returns the group, versio [discrete] == `getGroupVersionKindForModel` -Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" will be returned. +Provides a group, version, and kind for a k8s model. This returns the group, version, kind for the provided model. If the model does not have an apiGroup, group "core" is returned. [cols=",",options="header",] |=== @@ -1294,7 +1294,7 @@ the editor. This prop is used only during the initial render |`header` |Add a header on top of the YAML editor -|`onSave` |Callback for the Save button. Passing it will override the +|`onSave` |Callback for the Save button. Passing it overrides the default update performed on the resource by the editor |=== @@ -1404,7 +1404,7 @@ Component that allows to receive contributions from other plugins for the `conso [discrete] == `NamespaceBar` -A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. Additional components can be passed in as children and will be rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources. +A component that renders a horizontal toolbar with a namespace dropdown menu in the leftmost position. 
Additional components can be passed in as children and are rendered to the right of the namespace dropdown. This component is designed to be used at the top of the page. It should be used on pages where the user needs to be able to change the active namespace, such as on pages with k8s resources. .Example [source,text] ---- @@ -1429,7 +1429,7 @@ namespace option is selected. It accepts the new namespace in the form of a string as its only argument. The active namespace is updated automatically when an option is selected, but additional logic can be applied via this function. When the namespace is changed, the namespace -parameter in the URL will be changed from the previous namespace to the +parameter in the URL is changed from the previous namespace to the newly selected namespace. |`isDisabled` |(optional) A boolean flag that disables the namespace @@ -1493,14 +1493,14 @@ A component that renders a graph of the results from a Prometheus PromQL query a |`customDataSource` |(optional) Base URL of an API endpoint that handles PromQL queries. If provided, this is used instead of the default API for fetching data. |`defaultSamples` |(optional) The default number of data samples plotted for each data series. If there are many data series, QueryBrowser might automatically pick a lower number of data samples than specified here. |`defaultTimespan` |(optional) The default timespan for the graph in milliseconds - defaults to 1,800,000 (30 minutes). -|`disabledSeries` |(optional) Disable (don't display) data series with these exact label / value pairs. +|`disabledSeries` |(optional) Disable (do not display) data series with these exact label / value pairs. |`disableZoom` |(optional) Flag to disable the graph zoom controls. |`filterLabels` |(optional) Optionally filter the returned data series to only those that match these label / value pairs. |`fixedEndTime` |(optional) Set the end time for the displayed time range rather than showing data up to the current time. 
|`formatSeriesTitle` |(optional) Function that returns a string to use as the title for a single data series. |`GraphLink` |(optional) Component for rendering a link to another page (for example getting more information about this query). |`hideControls` |(optional) Flag to hide the graph controls for changing the graph timespan, and so on. -|`isStack` |(optional) Flag to display a stacked graph instead of a line graph. If showStackedControl is set, it will still be possible for the user to switch to a line graph. +|`isStack` |(optional) Flag to display a stacked graph instead of a line graph. If showStackedControl is set, it is still possible for the user to switch to a line graph. |`namespace` |(optional) If provided, data is only returned for this namespace (only series that have this namespace label). |`onZoom` |(optional) Callback called when the graph is zoomed. |`pollInterval` |(optional) If set, determines how often the graph is updated to show the latest data (in milliseconds). @@ -1533,7 +1533,7 @@ const PodAnnotationsButton = ({ pod }) => { |=== .Returns -A function which will launch a modal for editing a resource's annotations. +A function which launches a modal for editing a resource's annotations. [discrete] == `useDeleteModal` @@ -1561,7 +1561,7 @@ const DeletePodButton = ({ pod }) => { |=== .Returns -A function which will launch a modal for deleting a resource. +A function which launches a modal for deleting a resource. [discrete] == `useLabelsModel` @@ -1585,7 +1585,7 @@ const PodLabelsButton = ({ pod }) => { |=== .Returns -A function which will launch a modal for editing a resource's labels. +A function which launches a modal for editing a resource's labels. 
[discrete] == `useActiveNamespace` diff --git a/modules/gitops-release-notes-1-4-0.adoc b/modules/gitops-release-notes-1-4-0.adoc index e8aa41bac0..76eb45643d 100644 --- a/modules/gitops-release-notes-1-4-0.adoc +++ b/modules/gitops-release-notes-1-4-0.adoc @@ -24,7 +24,7 @@ The current release adds the following improvements. * As an administrative user, when you give Argo CD access to a namespace by using the `argocd.argoproj.io/managed-by` label, it assumes namespace-admin privileges. These privileges are an issue for administrators who provide namespaces to non-administrators, such as development teams, because the privileges enable non-administrators to modify objects such as network policies. + -With this update, administrators can configure a common cluster role for all the managed namespaces. In role bindings for the Argo CD application controller, the Operator refers to the `CONTROLLER_CLUSTER_ROLE` environment variable. In role bindings for the Argo CD server, the Operator refers to the `SERVER_CLUSTER_ROLE` environment variable. If these environment variables contain custom roles, the Operator doesn't create the default admin role. Instead, it uses the existing custom role for all managed namespaces. link:https://issues.redhat.com/browse/GITOPS-1290[GITOPS-1290] +With this update, administrators can configure a common cluster role for all the managed namespaces. In role bindings for the Argo CD application controller, the Operator refers to the `CONTROLLER_CLUSTER_ROLE` environment variable. In role bindings for the Argo CD server, the Operator refers to the `SERVER_CLUSTER_ROLE` environment variable. If these environment variables contain custom roles, the Operator does not create the default admin role. Instead, it uses the existing custom role for all managed namespaces. 
link:https://issues.redhat.com/browse/GITOPS-1290[GITOPS-1290] * With this update, the *Environments* page in the {product-title} *Developer* perspective displays a broken heart icon to indicate degraded resources, excluding ones whose status is `Progressing`, `Missing`, and `Unknown`. The console displays a yellow yield sign icon to indicate out-of-sync resources. link:https://issues.redhat.com/browse/GITOPS-1307[GITOPS-1307] @@ -37,9 +37,9 @@ The following issues have been resolved in the current release: * Before this update, setting a resource quota in the namespace of the Argo CD custom resource might cause the setup of the Red Hat SSO (RH SSO) instance to fail. This update fixes this issue by setting a minimum resource request for the RH SSO deployment pods. link:https://issues.redhat.com/browse/GITOPS-1297[GITOPS-1297] -* Before this update, if you changed the log level for the `argocd-repo-server` workload, the Operator didn't reconcile this setting. The workaround was to delete the deployment resource so that the Operator recreated it with the new log level. With this update, the log level is correctly reconciled for existing `argocd-repo-server` workloads. link:https://issues.redhat.com/browse/GITOPS-1387[GITOPS-1387] +* Before this update, if you changed the log level for the `argocd-repo-server` workload, the Operator did not reconcile this setting. The workaround was to delete the deployment resource so that the Operator recreated it with the new log level. With this update, the log level is correctly reconciled for existing `argocd-repo-server` workloads. link:https://issues.redhat.com/browse/GITOPS-1387[GITOPS-1387] -* Before this update, if the Operator managed an Argo CD instance that lacked the `.data` field in the `argocd-secret` Secret, the Operator on that instance crashed. This update fixes the issue so that the Operator doesn't crash when the `.data` field is missing. 
Instead, the secret regenerates and the `gitops-operator-controller-manager` resource is redeployed. link:https://issues.redhat.com/browse/GITOPS-1402[GITOPS-1402] +* Before this update, if the Operator managed an Argo CD instance that lacked the `.data` field in the `argocd-secret` Secret, the Operator on that instance crashed. This update fixes the issue so that the Operator does not crash when the `.data` field is missing. Instead, the secret regenerates and the `gitops-operator-controller-manager` resource is redeployed. link:https://issues.redhat.com/browse/GITOPS-1402[GITOPS-1402] * Before this update, the `gitopsservice` service was annotated as an internal object. This update removes the annotation so you can update or delete the default Argo CD instance and run GitOps workloads on infrastructure nodes by using the UI. link:https://issues.redhat.com/browse/GITOPS-1429[GITOPS-1429] diff --git a/modules/installation-aws-editing-manifests.adoc b/modules/installation-aws-editing-manifests.adoc index df12bf67dd..6ceecf3084 100644 --- a/modules/installation-aws-editing-manifests.adoc +++ b/modules/installation-aws-editing-manifests.adoc @@ -96,7 +96,7 @@ Find the subnet ID and replace it with the ID of the private subnet created in t * Specify MTU value for the Network Provider + -Outpost service links support a maximum packet size of 1300 bytes. It's required to modify the MTU of the Network Provider to follow this requirement. +Outpost service links support a maximum packet size of 1300 bytes. You must modify the MTU of the Network Provider to follow this requirement. Create a new file under the manifests directory and name the file `cluster-network-03-config.yml`. For the OVN-Kubernetes network provider, set the MTU value to 1200. 
+ [source,yaml] diff --git a/modules/installation-initializing.adoc b/modules/installation-initializing.adoc index ece0816079..5c7d6ad058 100644 --- a/modules/installation-initializing.adoc +++ b/modules/installation-initializing.adoc @@ -415,7 +415,7 @@ ifdef::aws-outposts[] * Unlike AWS Regions, which offer near-infinite scale, AWS Outposts are limited by their provisioned capacity, EC2 family and generations, configured instance sizes, and availability of compute capacity that is not already consumed by other workloads. Therefore, when creating new {product-title} cluster, you need to provide the supported instance type in the `compute.platform.aws.type` section in the configuration file. * When deploying {product-title} cluster with remote workers running in AWS Outposts, only one Availability Zone can be used for the compute instances - the Availability Zone in which the Outpost instance was created in. Therefore, when creating new {product-title} cluster, it recommended to provide the relevant Availability Zone in the `compute.platform.aws.zones` section in the configuration file, in order to limit the compute instances to this Availability Zone. -* Amazon Elastic Block Store (EBS) gp3 volumes aren't supported by the AWS Outposts service. This volume type is the default type used by the {product-title} cluster. Therefore, when creating new {product-title} cluster, you must change the volume type in the `compute.platform.aws.rootVolume.type` section to gp2. +* Amazon Elastic Block Store (EBS) gp3 volumes are not supported by the AWS Outposts service. This volume type is the default type used by the {product-title} cluster. Therefore, when creating new {product-title} cluster, you must change the volume type in the `compute.platform.aws.rootVolume.type` section to gp2. You will find more information about how to change these values below. 
endif::aws-outposts[] diff --git a/modules/installation-osp-creating-control-plane.adoc b/modules/installation-osp-creating-control-plane.adoc index 34e6e7ded7..4c7e207f18 100644 --- a/modules/installation-osp-creating-control-plane.adoc +++ b/modules/installation-osp-creating-control-plane.adoc @@ -20,7 +20,7 @@ Create three control plane machines by using the Ignition config files that you . On a command line, change the working directory to the location of the playbooks. -. If the control plane Ignition config files aren't already in your working directory, copy them into it. +. If the control plane Ignition config files are not already in your working directory, copy them into it. . On a command line, run the `control-plane.yaml` playbook: + diff --git a/modules/installation-osp-local-disk-deployment.adoc b/modules/installation-osp-local-disk-deployment.adoc index e517a444d1..8ea6f00717 100644 --- a/modules/installation-osp-local-disk-deployment.adoc +++ b/modules/installation-osp-local-disk-deployment.adoc @@ -253,7 +253,7 @@ spec: ---- <1> The etcd database must be mounted by the device, not a label, to ensure that `systemd` generates the device dependency used in this config to trigger filesystem creation. <2> Do not run if the file system `dev/disk/by-label/local-etcd` already exists. -<3> Fails with an alert message if `/dev/disk/by-label/ephemeral0` doesn't exist. +<3> Fails with an alert message if `/dev/disk/by-label/ephemeral0` does not exist. <4> Migrates existing data to local etcd database. This config does so after `/var/lib/etcd` is mounted, but before CRI-O starts so etcd is not running yet. <5> Requires that etcd is mounted and does not contain a member directory, but the ostree does. <6> Cleans up any previous migration state. 
diff --git a/modules/ipi-install-establishing-communication-between-subnets.adoc b/modules/ipi-install-establishing-communication-between-subnets.adoc index ceb800745d..1cdf49616a 100644 --- a/modules/ipi-install-establishing-communication-between-subnets.adoc +++ b/modules/ipi-install-establishing-communication-between-subnets.adoc @@ -143,7 +143,7 @@ Adjust the commands to match your actual interface names and gateway. $ ping ---- + -If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node. +If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. .. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command: + @@ -152,4 +152,4 @@ If the ping is successful, it means the control plane nodes in the first subnet $ ping ---- + -If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you don't receive a response, review the network configurations and repeat the procedure for the node. +If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 
diff --git a/modules/ipi-install-modifying-install-config-for-slaac-dual-stack-network.adoc b/modules/ipi-install-modifying-install-config-for-slaac-dual-stack-network.adoc index 511e12f76c..2dbbf2e929 100644 --- a/modules/ipi-install-modifying-install-config-for-slaac-dual-stack-network.adoc +++ b/modules/ipi-install-modifying-install-config-for-slaac-dual-stack-network.adoc @@ -6,7 +6,7 @@ [id='ipi-install-modifying-install-config-for-slaac-dual-stack-network_{context}'] = Optional: Configuring address generation modes for SLAAC in dual-stack networks -For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the `ipv6.addr-gen-mode` network setting. You can set this value using NMState to configure the ramdisk and the cluster configuration files. If you don't configure a consistent `ipv6.addr-gen-mode` in these locations, IPv6 address mismatches can occur between CSR resources and `BareMetalHost` resources in the cluster. +For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the `ipv6.addr-gen-mode` network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent `ipv6.addr-gen-mode` in these locations, IPv6 address mismatches can occur between CSR resources and `BareMetalHost` resources in the cluster. .Prerequisites diff --git a/modules/ldap-syncing-spec.adoc b/modules/ldap-syncing-spec.adoc index f450d8de5e..9b2d4ee81b 100644 --- a/modules/ldap-syncing-spec.adoc +++ b/modules/ldap-syncing-spec.adoc @@ -179,7 +179,7 @@ record. `mail` or `sAMAccountName` are preferred choices in most installations. |string array |`tolerateMemberNotFoundErrors` -|Determines the behavior of the LDAP sync job when missing user entries are encountered. If `true`, an LDAP query for users that does not find any will be tolerated and an only and error will be logged. 
If `false`, the LDAP sync job will fail if a query for users doesn't find any. The default value is `false`. Misconfigured LDAP sync jobs with this flag set to `true` can cause group membership to be removed, so it is recommended to use this flag with caution. +|Determines the behavior of the LDAP sync job when missing user entries are encountered. If `true`, an LDAP query for users that does not find any will be tolerated and only an error will be logged. If `false`, the LDAP sync job will fail if a query for users does not find any. The default value is `false`. Misconfigured LDAP sync jobs with this flag set to `true` can cause group membership to be removed, so it is recommended to use this flag with caution. |boolean |`tolerateMemberOutOfScopeErrors` diff --git a/modules/logging-loki-zone-fail-recovery.adoc b/modules/logging-loki-zone-fail-recovery.adoc index c98bba2d81..7befcf756b 100644 --- a/modules/logging-loki-zone-fail-recovery.adoc +++ b/modules/logging-loki-zone-fail-recovery.adoc @@ -6,7 +6,7 @@ [id="logging-loki-zone-fail-recovery_{context}"] = Recovering Loki pods from failed zones -In {product-title} a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your {product-title} cluster isn't configured to handle this, a zone failure can lead to service or data loss. +In {product-title} a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your {product-title} cluster is not configured to handle this, a zone failure can lead to service or data loss. 
Loki pods are part of a link:https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSet], and they come with Persistent Volume Claims (PVCs) provisioned by a `StorageClass` object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. diff --git a/modules/logging-rn-5.7.3.adoc b/modules/logging-rn-5.7.3.adoc index f8dbc38520..423775cb8e 100644 --- a/modules/logging-rn-5.7.3.adoc +++ b/modules/logging-rn-5.7.3.adoc @@ -26,7 +26,7 @@ With this update, slashes are replaced with underscores, resolving the issue. (l * Before this update, the Cluster Logging Operator terminated unexpectedly when set to an unmanaged state. With this update, a check to ensure that the `ClusterLogging` resource is in the correct Management state before initiating the reconciliation of the `ClusterLogForwarder` CR, resolving the issue. (link:https://issues.redhat.com/browse/LOG-4177[LOG-4177]) -* Before this update, when viewing logs within the {product-title} web console, selecting a time range by dragging over the histogram didn't work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. (link:https://issues.redhat.com/browse/LOG-4108[LOG-4108]) +* Before this update, when viewing logs within the {product-title} web console, selecting a time range by dragging over the histogram did not work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. (link:https://issues.redhat.com/browse/LOG-4108[LOG-4108]) * Before this update, when viewing logs within the {product-title} web console, queries longer than 30 seconds timed out. With this update, the timeout value can be configured in the configmap/logging-view-plugin. 
(link:https://issues.redhat.com/browse/LOG-3498[LOG-3498]) diff --git a/modules/lvms-troubleshooting-recovering-from-disk-failure.adoc b/modules/lvms-troubleshooting-recovering-from-disk-failure.adoc index cdac4763e9..182f12c42b 100644 --- a/modules/lvms-troubleshooting-recovering-from-disk-failure.adoc +++ b/modules/lvms-troubleshooting-recovering-from-disk-failure.adoc @@ -24,7 +24,7 @@ $ oc describe pvc <1> + - *FailedMount or FailedUnMount:* This error indicates problems when trying to mount the volume to a node or unmount a volume from a node. If the disk has failed, this error might appear when a pod tries to use the PVC. + -- *Volume is already exclusively attached to one node and can't be attached to another:* This error can appear with storage solutions that do not support `ReadWriteMany` access modes. +- *Volume is already exclusively attached to one node and cannot be attached to another:* This error can appear with storage solutions that do not support `ReadWriteMany` access modes. . Establish a direct connection to the host where the problem is occurring. diff --git a/modules/lvms-troubleshooting-recovering-from-missing-lvms-or-operator-components.adoc b/modules/lvms-troubleshooting-recovering-from-missing-lvms-or-operator-components.adoc index dc9d1db062..4d695306c4 100644 --- a/modules/lvms-troubleshooting-recovering-from-missing-lvms-or-operator-components.adoc +++ b/modules/lvms-troubleshooting-recovering-from-missing-lvms-or-operator-components.adoc @@ -24,7 +24,7 @@ NAME AGE my-lvmcluster 65m ---- -. If the cluster doesn't have an `LVMCluster` resource, create one by running the following command: +. 
If the cluster does not have an `LVMCluster` resource, create one by running the following command: + [source,terminal] ---- diff --git a/modules/metering-exposing-the-reporting-api.adoc b/modules/metering-exposing-the-reporting-api.adoc index 4ee62a5184..a40ed2d312 100644 --- a/modules/metering-exposing-the-reporting-api.adoc +++ b/modules/metering-exposing-the-reporting-api.adoc @@ -61,7 +61,7 @@ To manually configure, or disable OAuth in the Reporting Operator, you must set This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You would need to manually configure these resources yourself. ==== -Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API isn't exposed directly, but instead is proxied to via the auth-proxy sidecar container. +Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but instead is proxied through the auth-proxy sidecar container. * `reporting-operator.spec.authProxy.enabled` * `reporting-operator.spec.authProxy.cookie.createSecret` diff --git a/modules/migration-migrating-on-prem-to-cloud.adoc b/modules/migration-migrating-on-prem-to-cloud.adoc index 00abb5b014..40af7a8a01 100644 --- a/modules/migration-migrating-on-prem-to-cloud.adoc +++ b/modules/migration-migrating-on-prem-to-cloud.adoc @@ -49,7 +49,7 @@ $ crane tunnel-api [--namespace ] \ --source-context ---- + -If you don't specify a namespace, the command uses the default value `openvpn`. +If you do not specify a namespace, the command uses the default value `openvpn`.
+ For example: + diff --git a/modules/mod-docs-ocp-conventions.adoc b/modules/mod-docs-ocp-conventions.adoc index 37624cfe1a..663fbc77d6 100644 --- a/modules/mod-docs-ocp-conventions.adoc +++ b/modules/mod-docs-ocp-conventions.adoc @@ -88,7 +88,7 @@ of the openshift-docs repository. These modules must follow the file naming conventions specified in the link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. -* All assemblies must go in the relevant guide/book. If you can't find a relevant +* All assemblies must go in the relevant guide/book. If you cannot find a relevant guide/book, reach out to a member of the OpenShift CCS team. So guides/books contain assemblies, which contain modules. diff --git a/modules/network-observability-without-loki.adoc b/modules/network-observability-without-loki.adoc index 9883f4d4f0..c8ffb5d45f 100644 --- a/modules/network-observability-without-loki.adoc +++ b/modules/network-observability-without-loki.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: REFERENCE [id="network-observability-without-loki_{context}"] = Network Observability without Loki -You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. Without Loki, there won't be a Network Traffic panel under Observe, which means there is no overview charts, flow table, or topology. The following table compares available features with and without Loki: +You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". 
If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. Without Loki, there is no *Network Traffic* panel under *Observe*, which means there are no overview charts, flow table, or topology. The following table compares available features with and without Loki: .Comparison of feature availability with and without Loki [options="header"] diff --git a/modules/nodes-cluster-worker-latency-profiles-about.adoc b/modules/nodes-cluster-worker-latency-profiles-about.adoc index eb223099c5..651c12685e 100644 --- a/modules/nodes-cluster-worker-latency-profiles-about.adoc +++ b/modules/nodes-cluster-worker-latency-profiles-about.adoc @@ -35,7 +35,7 @@ ifdef::openshift-rosa,openshift-dedicated[] Although the default configuration works in most cases, {product-title} offers a second worker latency profile for situations where the network is experiencing higher latency than usual. The two worker latency profiles are described in the following sections: endif::openshift-rosa,openshift-dedicated[] -Default worker latency profile:: With the `Default` profile, each `Kubelet` updates it's status every 10 seconds (`node-status-update-frequency`). The `Kube Controller Manager` checks the statuses of `Kubelet` every 5 seconds (`node-monitor-grace-period`). +Default worker latency profile:: With the `Default` profile, each `Kubelet` updates its status every 10 seconds (`node-status-update-frequency`). The `Kube Controller Manager` checks the statuses of `Kubelet` every 5 seconds (`node-monitor-period`). + The Kubernetes Controller Manager waits 40 seconds for a status update from `Kubelet` before considering the `Kubelet` unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the `node.kubernetes.io/not-ready` or `node.kubernetes.io/unreachable` taint and evicts the pods on that node.
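The `Default` profile timings described above can be summarized as follows. This is an illustrative mapping, not a literal manifest: the 5-second check interval corresponds to the upstream kube-controller-manager `node-monitor-period` flag and the 40-second wait to `node-monitor-grace-period`. Verify the exact values against your cluster version.

[source,yaml]
----
# Default worker latency profile (approximate upstream equivalents)
kubelet:
  node-status-update-frequency: 10s   # Kubelet posts its status every 10 seconds
kubeControllerManager:
  node-monitor-period: 5s             # controller manager checks node statuses every 5 seconds
  node-monitor-grace-period: 40s      # wait before marking a node unhealthy and tainting it
----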
+ diff --git a/modules/nw-external-dns-operator-configuration-parameters.adoc b/modules/nw-external-dns-operator-configuration-parameters.adoc index 893105e10f..4f286b6625 100644 --- a/modules/nw-external-dns-operator-configuration-parameters.adoc +++ b/modules/nw-external-dns-operator-configuration-parameters.adoc @@ -85,7 +85,7 @@ source: <1> * `ExternalName` <4> Ensures that the controller considers only those resources which matches with label filter. <5> The default value for `hostnameAnnotation` is `Ignore` which instructs `ExternalDNS` to generate DNS records using the templates specified in the field `fqdnTemplates`. When the value is `Allow` the DNS records get generated based on the value specified in the `external-dns.alpha.kubernetes.io/hostname` annotation. -<6> The External DNS Operator uses a string to generate DNS names from sources that don't define a hostname, or to add a hostname suffix when paired with the fake source. +<6> The External DNS Operator uses a string to generate DNS names from sources that do not define a hostname, or to add a hostname suffix when paired with the fake source. [source,yaml] ---- diff --git a/modules/nw-osp-loadbalancer-etp-local.adoc b/modules/nw-osp-loadbalancer-etp-local.adoc index ffb18d00d9..04535bb32a 100644 --- a/modules/nw-osp-loadbalancer-etp-local.adoc +++ b/modules/nw-osp-loadbalancer-etp-local.adoc @@ -7,7 +7,7 @@ You can set the external traffic policy (ETP) parameter, `.spec.externalTrafficPolicy`, on a load balancer service to preserve the source IP address of incoming traffic when it reaches service endpoint pods. However, if your cluster uses the Amphora Octavia provider, the source IP of the traffic is replaced with the IP address of the Amphora VM. This behavior does not occur if your cluster uses the OVN Octavia provider. -Having the `ETP` option set to `Local` requires that health monitors be created for the load balancer. 
Without health monitors, traffic can be routed to a node that doesn't have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the `create-monitor` option in the cloud provider configuration to `true`. +Having the `ETP` option set to `Local` requires that health monitors be created for the load balancer. Without health monitors, traffic can be routed to a node that does not have a functional endpoint, which causes the connection to drop. To force Cloud Provider OpenStack to create health monitors, you must set the value of the `create-monitor` option in the cloud provider configuration to `true`. In {rh-openstack} 16.2, the OVN Octavia provider does not support health monitors. Therefore, setting the ETP to local is unsupported. diff --git a/modules/op-installing-tekton-hub-with-login-and-rating.adoc b/modules/op-installing-tekton-hub-with-login-and-rating.adoc index 87e9bd0d26..123dcb5b71 100644 --- a/modules/op-installing-tekton-hub-with-login-and-rating.adoc +++ b/modules/op-installing-tekton-hub-with-login-and-rating.adoc @@ -122,7 +122,7 @@ spec: + [NOTE] ==== -If you don't provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map is used. +If you do not provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map are used. ==== . Apply the `TektonHub` CR.
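For the load balancer health monitors discussed earlier in this section, the `create-monitor` option belongs in the `[LoadBalancer]` section of the Cloud Provider OpenStack configuration (in {product-title}, typically held in the `cloud-provider-config` config map). The following sketch uses option names from the Cloud Provider OpenStack documentation with illustrative values; verify them against your release:

[source,ini]
----
[LoadBalancer]
create-monitor = true     # create an Octavia health monitor for each load balancer pool
monitor-delay = 10s       # interval between health checks (illustrative value)
monitor-timeout = 10s     # time to wait for a health check reply (illustrative value)
monitor-max-retries = 1   # failed checks before a member is marked down (illustrative value)
----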
diff --git a/modules/op-installing-tekton-hub-without-login-and-rating.adoc b/modules/op-installing-tekton-hub-without-login-and-rating.adoc index 9887d1c4cd..90419a24eb 100644 --- a/modules/op-installing-tekton-hub-without-login-and-rating.adoc +++ b/modules/op-installing-tekton-hub-without-login-and-rating.adoc @@ -66,7 +66,7 @@ spec: + [NOTE] ==== -If you don't provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map is used. +If you do not provide custom values for the optional fields in the `TektonHub` CR, the default values configured in the {tekton-hub} API config map are used. ==== . Apply the `TektonHub` CR. diff --git a/modules/op-pipelines-as-code-command-reference.adoc b/modules/op-pipelines-as-code-command-reference.adoc index 90c325fb13..083dc7d634 100644 --- a/modules/op-pipelines-as-code-command-reference.adoc +++ b/modules/op-pipelines-as-code-command-reference.adoc @@ -18,7 +18,7 @@ The `tkn pac` CLI tool offers the following capabilities: [TIP] ==== -You can use the commands corresponding to the capabilities for testing and experimentation, so that you don't have to make changes to the Git repository containing the application source code. +You can use the commands corresponding to the capabilities for testing and experimentation, so that you do not have to make changes to the Git repository containing the application source code. ==== == Basic syntax diff --git a/modules/op-release-notes-1-8.adoc b/modules/op-release-notes-1-8.adoc index 8e196dbcc9..cb9707723f 100644 --- a/modules/op-release-notes-1-8.adoc +++ b/modules/op-release-notes-1-8.adoc @@ -605,7 +605,7 @@ config 1.8.1 True * Before this update, using the `tkn` CLI tool, you could not remove task runs and pipeline runs that contained a `result` object whose type was `array`.
With this update, you can use the `tkn` CLI tool to remove task runs and pipeline runs that contain a `result` object whose type is `array`. // https://issues.redhat.com/browse/SRVKP-2478 -* Before this update, if a pipeline specification contained a task with an `ENV_VARS` parameter of `array` type, the pipeline run failed with the following error: `invalid input params for task func-buildpacks: param types don't match the user-specified type: [ENV_VARS]`. With this update, pipeline runs with such pipeline and task specifications do not fail. +* Before this update, if a pipeline specification contained a task with an `ENV_VARS` parameter of `array` type, the pipeline run failed with the following error: `invalid input params for task func-buildpacks: param types don't match the user-specified type: [ENV_VARS]`. With this update, pipeline runs with such pipeline and task specifications do not fail. // https://issues.redhat.com/browse/SRVKP-2422 * Before this update, cluster administrators could not provide a `config.json` file to the `Buildah` cluster task for accessing a container registry. With this update, cluster administrators can provide the `Buildah` cluster task with a `config.json` file by using the `dockerconfig` workspace. diff --git a/modules/op-tkn-hub-interaction.adoc b/modules/op-tkn-hub-interaction.adoc index 6428112d17..7bf044e0cf 100644 --- a/modules/op-tkn-hub-interaction.adoc +++ b/modules/op-tkn-hub-interaction.adoc @@ -30,7 +30,7 @@ For each example, to get the corresponding sub-commands and flags, run `tkn hub == hub downgrade Downgrade an installed resource.
-.Example: Downgrade the `mytask` task in the `mynamespace` namespace to it's older version +.Example: Downgrade the `mytask` task in the `mynamespace` namespace to its older version [source,terminal] ---- $ tkn hub downgrade task mytask --to version -n mynamespace diff --git a/modules/ossm-cr-example.adoc b/modules/ossm-cr-example.adoc index a95e171131..55244f02ea 100644 --- a/modules/ossm-cr-example.adoc +++ b/modules/ossm-cr-example.adoc @@ -123,7 +123,7 @@ The following table lists the specifications for the `ServiceMeshControlPlane` r |string |`observedGeneration` -|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field doesn't match `metadata.generation`. +|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field does not match `metadata.generation`. |integer |`operatorVersion` diff --git a/modules/ossm-cr-status.adoc b/modules/ossm-cr-status.adoc index 3cd888f894..78b2692643 100644 --- a/modules/ossm-cr-status.adoc +++ b/modules/ossm-cr-status.adoc @@ -13,7 +13,7 @@ The `status` parameter describes the current state of your service mesh. This in |Name |Description |Type |`observedGeneration` -|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. The `status.conditions` are not up-to-date if the `status.observedGeneration` field doesn't match `metadata.generation`. +|The generation observed by the controller during the most recent reconciliation. The information in the status pertains to this particular generation of the object. 
The `status.conditions` are not up-to-date if the `status.observedGeneration` field does not match `metadata.generation`. |integer |`annotations` diff --git a/modules/ossm-federation-create-export.adoc b/modules/ossm-federation-create-export.adoc index 85dffaf5e1..852a097b32 100644 --- a/modules/ossm-federation-create-export.adoc +++ b/modules/ossm-federation-create-export.adoc @@ -23,7 +23,7 @@ When you set the `importAsLocal` parameter to `true` to aggregate the remote end [NOTE] ==== -You can configure services for export even if they don't exist yet. When a service that matches the value specified in the ExportedServiceSet is deployed, it will be automatically exported. +You can configure services for export even if they do not exist yet. When a service that matches the value specified in the ExportedServiceSet is deployed, it will be automatically exported. ==== //// diff --git a/modules/ossm-federation-create-import.adoc b/modules/ossm-federation-create-import.adoc index 6920157e80..fa5ccb7765 100644 --- a/modules/ossm-federation-create-import.adoc +++ b/modules/ossm-federation-create-import.adoc @@ -18,7 +18,7 @@ Services are imported with the name `..svc. ** Must use `typed_config` diff --git a/modules/ossm-vs-istio-1x.adoc b/modules/ossm-vs-istio-1x.adoc index 1160613290..01a2869095 100644 --- a/modules/ossm-vs-istio-1x.adoc +++ b/modules/ossm-vs-istio-1x.adoc @@ -98,7 +98,7 @@ Catch-all domains ("\*") are not supported. If one is found in the Gateway defin [id="ossm-subdomains_{context}"] === Subdomains -Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it. +Subdomains (e.g.: "*.domain.com") are supported. However, this ability is not enabled by default in {product-title}.
This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it. [id="ossm-tls_{context}"] === Transport layer security diff --git a/modules/ossm-vs-istio.adoc b/modules/ossm-vs-istio.adoc index 08891aa5e6..15d393ea8d 100644 --- a/modules/ossm-vs-istio.adoc +++ b/modules/ossm-vs-istio.adoc @@ -165,7 +165,7 @@ Catch-all domains ("\*") are not supported. If one is found in the Gateway defin [id="ossm-subdomains_{context}"] === Subdomains -Subdomains (e.g.: "*.domain.com") are supported. However this ability doesn't come enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it. +Subdomains (e.g.: "*.domain.com") are supported. However, this ability is not enabled by default in {product-title}. This means that {SMProductName} _will_ create the route with the subdomain, but it will only be in effect if {product-title} is configured to enable it. [id="ossm-tls_{context}"] === Transport layer security diff --git a/modules/otel-product-overview.adoc b/modules/otel-product-overview.adoc index f6bc9c68df..26a135e216 100644 --- a/modules/otel-product-overview.adoc +++ b/modules/otel-product-overview.adoc @@ -17,7 +17,7 @@ Data Collection and Processing Hub:: It acts as a central component that gathers Customizable telemetry data pipeline:: The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers. -Auto-instrumentation features:: Automatic instrumentation simplifies the process of adding observability to applications. Developers don't need to manually instrument their code for basic telemetry data. +Auto-instrumentation features:: Automatic instrumentation simplifies the process of adding observability to applications.
Developers do not need to manually instrument their code for basic telemetry data. Here are some of the use cases for the OpenTelemetry Collector: diff --git a/modules/policy-identity-access-management.adoc b/modules/policy-identity-access-management.adoc index 5852f3d956..3dff6c762c 100644 --- a/modules/policy-identity-access-management.adoc +++ b/modules/policy-identity-access-management.adoc @@ -155,13 +155,13 @@ Cloud Infrastructure Account refers to the underlying AWS or Google Cloud accoun 5. Limited to what is granted through RBAC by the customer administrator, as well as namespaces created by the user. -- -// TODO: The above uses an asterisk as a footnote I think for the first sentence (though it doesn't show it as a reference below the table), then numbers for the rest of the footnote items. I'd suggest bumping all the numbers and using a number for the first header asterisk as well. +// TODO: The above uses an asterisk as a footnote I think for the first sentence (though it does not show it as a reference below the table), then numbers for the rest of the footnote items. I would suggest bumping all the numbers and using a number for the first header asterisk as well. [id="customer-access_{context}"] == Customer access Customer access is limited to namespaces created by the customer and permissions that are granted using RBAC by the customer administrator role. Access to the underlying infrastructure or product namespaces is generally not permitted without `cluster-admin` access. More information on customer access and authentication can be found in the Understanding Authentication section of the documentation. 
-// TODO: I don't think there is this "Understanding Authentication" section in the OSD docs +// TODO: I do not think there is this "Understanding Authentication" section in the OSD docs [id="access-approval_{context}"] == Access approval and review diff --git a/modules/recommended-etcd-practices.adoc b/modules/recommended-etcd-practices.adoc index d357632229..349d859e0b 100644 --- a/modules/recommended-etcd-practices.adoc +++ b/modules/recommended-etcd-practices.adoc @@ -52,7 +52,7 @@ To validate the hardware for etcd before or after you create the {product-title} .Prerequisites -* Container runtimes such as Podman or Docker are installed on the machine that you're testing. +* Container runtimes such as Podman or Docker are installed on the machine that you are testing. * Data is written to the `/var/lib/etcd` path. .Procedure diff --git a/modules/update-best-practices.adoc b/modules/update-best-practices.adoc index 1cdb20f874..b0b15836c4 100644 --- a/modules/update-best-practices.adoc +++ b/modules/update-best-practices.adoc @@ -65,4 +65,4 @@ When planning a cluster update, check the configuration of the `PodDisruptionBud * For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the `PodDisruptionBudget`. -* For workloads that aren't highly available, make sure they are either not protected by a `PodDisruptionBudget` or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination. +* For workloads that are not highly available, make sure they are either not protected by a `PodDisruptionBudget` or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination. 
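For the highly available case described above, a `PodDisruptionBudget` that always leaves headroom for a node drain might look like the following minimal sketch (the name, labels, and counts are illustrative):

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb # illustrative name
spec:
  minAvailable: 2  # with 3 replicas, one pod can always be evicted
  selector:
    matchLabels:
      app: my-app  # illustrative label
----

With `minAvailable: 2` and three replicas, an update can evict one pod at a time, so node drains are never blocked indefinitely.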
diff --git a/modules/ztp-monitoring-policy-deployment-progress.adoc b/modules/ztp-monitoring-policy-deployment-progress.adoc index e0cee24e2d..708fcd5cc9 100644 --- a/modules/ztp-monitoring-policy-deployment-progress.adoc +++ b/modules/ztp-monitoring-policy-deployment-progress.adoc @@ -70,7 +70,7 @@ ztp-site.example1-perf-policy inform No .. To check policy status from the {rh-rhacm} web console, perform the following actions: ... Click *Governance* -> *Find policies*. -... Click on a cluster policy to check it's status. +... Click on a cluster policy to check its status. When all of the cluster policies become compliant, {ztp} installation and configuration for the cluster is complete. The `ztp-done` label is added to the cluster.