diff --git a/architecture/admission-plug-ins.adoc b/architecture/admission-plug-ins.adoc
index 4a2dc5f94f..c20f406a8c 100644
--- a/architecture/admission-plug-ins.adoc
+++ b/architecture/admission-plug-ins.adoc
@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="admission-plug-ins"]
-= Admission plug-ins
+= Admission plugins
include::_attributes/common-attributes.adoc[]
:context: admission-plug-ins
@@ -23,7 +23,7 @@ include::modules/configuring-dynamic-admission.adoc[leveloffset=+1]
== Additional resources
ifdef::openshift-enterprise,openshift-webscale[]
-* xref:../networking/hardware_networks/configuring-sriov-operator.adoc#configuring-sriov-operator[Limiting custom network resources managed by the SR-IOV network device plug-in]
+* xref:../networking/hardware_networks/configuring-sriov-operator.adoc#configuring-sriov-operator[Limiting custom network resources managed by the SR-IOV network device plugin]
endif::[]
* xref:../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations_dedicating_nodes-scheduler-taints-tolerations[Defining tolerations that enable taints to qualify which pods should be scheduled on a node]
diff --git a/architecture/index.adoc b/architecture/index.adoc
index 80acc38338..08892de20d 100644
--- a/architecture/index.adoc
+++ b/architecture/index.adoc
@@ -78,6 +78,6 @@ During the first boot, Ignition reads its configuration from the installation me
You can learn how xref:../architecture/architecture-rhcos.adoc#architecture-rhcos[Ignition works], including the process for a {op-system-first} machine in an {product-title} cluster, how to view Ignition configuration files, and how to change Ignition configuration after an installation.
[id="about-admission-plug-ins"]
-== About admission plug-ins
-You can use xref:../architecture/admission-plug-ins.adoc#admission-plug-ins[admission plug-ins] to regulate how {product-title} functions. After a resource request is authenticated and authorized, admission plug-ins intercept the resource request to the master API to validate resource requests and to ensure that scaling policies are adhered to.
-Admission plug-ins are used to enforce security policies, resource limitations, or configuration requirements.
+== About admission plugins
+You can use xref:../architecture/admission-plug-ins.adoc#admission-plug-ins[admission plugins] to regulate how {product-title} functions. After a resource request is authenticated and authorized, admission plugins intercept the request to the master API to validate it and to ensure that scaling policies are adhered to.
+Admission plugins are used to enforce security policies, resource limitations, or configuration requirements.
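+
+For example, the default `LimitRanger` admission plugin enforces `LimitRange` objects. The following minimal sketch shows a `LimitRange` that this plugin would act on; the name, namespace, and values are hypothetical:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: resource-limits    # hypothetical name
+  namespace: demo-project  # hypothetical namespace
+spec:
+  limits:
+  - type: Container
+    default:               # default limits applied to containers that do not set their own
+      cpu: 500m
+      memory: 256Mi
+----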
diff --git a/authentication/assuming-an-aws-iam-role-for-a-service-account.adoc b/authentication/assuming-an-aws-iam-role-for-a-service-account.adoc
index b627791423..e03d428ffa 100644
--- a/authentication/assuming-an-aws-iam-role-for-a-service-account.adoc
+++ b/authentication/assuming-an-aws-iam-role-for-a-service-account.adoc
@@ -37,5 +37,5 @@ include::modules/verifying-the-assumed-iam-role-in-your-pod.adoc[leveloffset=+2]
* For more information about installing and using the AWS Boto3 SDK for Python, see the link:https://boto3.amazonaws.com/v1/documentation/api/latest/index.html[AWS Boto3 documentation].
ifdef::openshift-rosa,openshift-dedicated[]
-* For general information about webhook admission plug-ins for OpenShift, see link:https://docs.openshift.com/container-platform/4.11/architecture/admission-plug-ins.html#admission-webhooks-about_admission-plug-ins[Webhook admission plug-ins] in the OpenShift Container Platform documentation.
+* For general information about webhook admission plugins for OpenShift, see link:https://docs.openshift.com/container-platform/4.11/architecture/admission-plug-ins.html#admission-webhooks-about_admission-plug-ins[Webhook admission plugins] in the OpenShift Container Platform documentation.
endif::openshift-rosa,openshift-dedicated[]
diff --git a/backup_and_restore/application_backup_and_restore/oadp-api.adoc b/backup_and_restore/application_backup_and_restore/oadp-api.adoc
index bd9aa691b1..cc132a9c08 100644
--- a/backup_and_restore/application_backup_and_restore/oadp-api.adoc
+++ b/backup_and_restore/application_backup_and_restore/oadp-api.adoc
@@ -131,13 +131,13 @@ link:https://pkg.go.dev/github.com/openshift/oadp-operator/api/v1alpha1#Applicat
|`defaultPlugins`
|[] link:https://pkg.go.dev/builtin#string[string]
-|The following types of default Velero plug-ins can be installed: `aws`,`azure`, `csi`, `gcp`, `kubevirt`, and `openshift`.
+|The following types of default Velero plugins can be installed: `aws`, `azure`, `csi`, `gcp`, `kubevirt`, and `openshift`.
|`customPlugins`
|[]link:https://pkg.go.dev/github.com/openshift/oadp-operator/api/v1alpha1#CustomPlugin[CustomPlugin]
-|Used for installation of custom Velero plug-ins.
+|Used for installation of custom Velero plugins.
-Default and custom plug-ins are described in xref:../../backup_and_restore/application_backup_and_restore/oadp-features-plugins#oadp-features-plugins[OADP plug-ins]
+Default and custom plugins are described in xref:../../backup_and_restore/application_backup_and_restore/oadp-features-plugins#oadp-features-plugins[OADP plugins].
|`restoreResourcesVersionPriority`
|link:https://pkg.go.dev/builtin#string[string]
@@ -165,11 +165,11 @@ link:https://pkg.go.dev/github.com/openshift/oadp-operator/api/v1alpha1#VeleroCo
|`name`
|link:https://pkg.go.dev/builtin#string[string]
-|Name of custom plug-in.
+|Name of custom plugin.
|`image`
|link:https://pkg.go.dev/builtin#string[string]
-|Image of custom plug-in.
+|Image of custom plugin.
|===
link:https://pkg.go.dev/github.com/openshift/oadp-operator/api/v1alpha1#CustomPlugin[Complete schema definitions for the type `CustomPlugin`].
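+
+As a hedged sketch of how the `defaultPlugins` and `customPlugins` fields fit together, a `DataProtectionApplication` resource might look like the following; the metadata values and the custom plugin image are hypothetical:
+
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: DataProtectionApplication
+metadata:
+  name: dpa-sample              # hypothetical name
+  namespace: openshift-adp
+spec:
+  configuration:
+    velero:
+      defaultPlugins:           # default plugin types, as listed above
+      - openshift
+      - aws
+      customPlugins:            # each custom plugin requires a name and an image
+      - name: custom-plugin-example                  # hypothetical name
+        image: quay.io/example/velero-plugin:latest  # hypothetical image
+----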
@@ -241,7 +241,7 @@ link:https://pkg.go.dev/github.com/openshift/oadp-operator/api/v1alpha1#Features
|`enable`
|link:https://pkg.go.dev/builtin#bool[bool]
-|If set to `true`, deploys the volume snapshot mover controller and a modified CSI Data Mover plug-in. If set to `false`, these are not deployed.
+|If set to `true`, deploys the volume snapshot mover controller and a modified CSI Data Mover plugin. If set to `false`, these are not deployed.
|`credentialName`
|link:https://pkg.go.dev/builtin#string[string]
diff --git a/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc b/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc
index 7b6f8ea52e..07184135ce 100644
--- a/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc
+++ b/backup_and_restore/application_backup_and_restore/oadp-features-plugins.adoc
@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="oadp-features-plugins"]
-= OADP features and plug-ins
+= OADP features and plugins
include::_attributes/common-attributes.adoc[]
:context: oadp-features-plugins
@@ -8,7 +8,7 @@ toc::[]
OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.
-The default plug-ins enable Velero to integrate with certain cloud providers and to back up and restore {product-title} resources.
+The default plugins enable Velero to integrate with certain cloud providers and to back up and restore {product-title} resources.
include::modules/oadp-features.adoc[leveloffset=+1]
include::modules/oadp-plugins.adoc[leveloffset=+1]
diff --git a/backup_and_restore/application_backup_and_restore/troubleshooting.adoc b/backup_and_restore/application_backup_and_restore/troubleshooting.adoc
index f2e23cfa4c..1851e8cc54 100644
--- a/backup_and_restore/application_backup_and_restore/troubleshooting.adoc
+++ b/backup_and_restore/application_backup_and_restore/troubleshooting.adoc
@@ -29,7 +29,7 @@ include::modules/migration-debugging-velero-resources.adoc[leveloffset=+1]
[id="issues-with-velero-and-admission-workbooks"]
== Issues with Velero and admission webhooks
-Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plug-in or make changes to how you restore the workload.
+Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload.
Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources, because admission webhooks typically block them.
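+
+For example, one hedged approach is a first restore that includes only the top-level kind that the webhook manages, using the `includedResources` field of the Velero `Restore` resource. The names and the resource type below are hypothetical:
+
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: restore-parents-first   # hypothetical name
+  namespace: openshift-adp
+spec:
+  backupName: example-backup    # hypothetical backup name
+  includedResources:            # restore only this kind first; restore child resources afterward
+  - myapps.example.com          # hypothetical webhook-managed resource type
+----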
@@ -46,9 +46,9 @@ include::modules/migration-debugging-velero-admission-webhooks-ibm-appconnect.ad
[role="_additional-resources"]
.Additional resources
-* xref:../../architecture/admission-plug-ins.adoc[Admission plug-ins]
-* xref:../../architecture/admission-plug-ins.adoc#admission-webhooks-about_admission-plug-ins[Webhook admission plug-ins]
-* xref:../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plug-ins]
+* xref:../../architecture/admission-plug-ins.adoc[Admission plugins]
+* xref:../../architecture/admission-plug-ins.adoc#admission-webhooks-about_admission-plug-ins[Webhook admission plugins]
+* xref:../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plugins]
include::modules/oadp-installation-issues.adoc[leveloffset=+1]
include::modules/oadp-backup-restore-cr-issues.adoc[leveloffset=+1]
diff --git a/cicd/jenkins/important-changes-to-openshift-jenkins-images.adoc b/cicd/jenkins/important-changes-to-openshift-jenkins-images.adoc
index 33e9d35fa5..6590c40201 100644
--- a/cicd/jenkins/important-changes-to-openshift-jenkins-images.adoc
+++ b/cicd/jenkins/important-changes-to-openshift-jenkins-images.adoc
@@ -12,7 +12,7 @@ toc::[]
* {product-title} 4.10 deprecated the OpenShift Jenkins Maven and NodeJS Agent images. {product-title} 4.11 removes these images from its payload. Red Hat no longer produces these images, and they are not available from the `ocp-tools-4` repository at `registry.redhat.io`. Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the link:https://access.redhat.com/support/policy/updates/openshift[{product-title} lifecycle policy].
-These changes support the {product-title} 4.10 recommendation to use xref:../../cicd/jenkins/images-other-jenkins.adoc#images-other-jenkins-config-kubernetes_images-other-jenkins[multiple container Pod Templates with the Jenkins Kubernetes Plug-in].
+These changes support the {product-title} 4.10 recommendation to use xref:../../cicd/jenkins/images-other-jenkins.adoc#images-other-jenkins-config-kubernetes_images-other-jenkins[multiple container Pod Templates with the Jenkins Kubernetes Plugin].
include::modules/relocation-of-openshift-jenkins-images.adoc[leveloffset=+1]
diff --git a/cli_reference/kn-cli-tools.adoc b/cli_reference/kn-cli-tools.adoc
index e778f8128d..6a1ad984f2 100644
--- a/cli_reference/kn-cli-tools.adoc
+++ b/cli_reference/kn-cli-tools.adoc
@@ -18,7 +18,7 @@ Key features of the Knative CLI include:
* Manage features of Knative Serving, such as services, revisions, and traffic-splitting.
* Create and manage Knative Eventing components, such as event sources and triggers.
* Create sink bindings to connect existing Kubernetes applications and Knative services.
-* Extend the Knative CLI with flexible plug-in architecture, similar to the `kubectl` CLI.
+* Extend the Knative CLI with flexible plugin architecture, similar to the `kubectl` CLI.
* Configure autoscaling parameters for Knative services.
* Scripted usage, such as waiting for the results of an operation, or deploying custom rollout and rollback strategies.
diff --git a/cli_reference/openshift_cli/extending-cli-plugins.adoc b/cli_reference/openshift_cli/extending-cli-plugins.adoc
index 4188b82eac..549e986f66 100644
--- a/cli_reference/openshift_cli/extending-cli-plugins.adoc
+++ b/cli_reference/openshift_cli/extending-cli-plugins.adoc
@@ -1,16 +1,16 @@
:_content-type: ASSEMBLY
[id="cli-extend-plugins"]
-= Extending the OpenShift CLI with plug-ins
+= Extending the OpenShift CLI with plugins
include::_attributes/common-attributes.adoc[]
:context: cli-extend-plugins
toc::[]
-You can write and install plug-ins to build on the default `oc` commands,
+You can write and install plugins to build on the default `oc` commands,
allowing you to perform new and more complex tasks with the {product-title} CLI.
-// Writing CLI plug-ins
+// Writing CLI plugins
include::modules/cli-extending-plugins-writing.adoc[leveloffset=+1]
-// Installing and using CLI plug-ins
+// Installing and using CLI plugins
include::modules/cli-extending-plugins-installing.adoc[leveloffset=+1]
diff --git a/cli_reference/openshift_cli/managing-cli-plugins-krew.adoc b/cli_reference/openshift_cli/managing-cli-plugins-krew.adoc
index 4569547b0d..25d0e4420b 100644
--- a/cli_reference/openshift_cli/managing-cli-plugins-krew.adoc
+++ b/cli_reference/openshift_cli/managing-cli-plugins-krew.adoc
@@ -1,23 +1,23 @@
:_content-type: ASSEMBLY
[id="managing-cli-plugin-krew"]
-= Managing CLI plug-ins with Krew
+= Managing CLI plugins with Krew
include::_attributes/common-attributes.adoc[]
:context: managing-cli-plugins-krew
toc::[]
-You can use Krew to install and manage plug-ins for the OpenShift CLI (`oc`).
+You can use Krew to install and manage plugins for the OpenShift CLI (`oc`).
-:FeatureName: Using Krew to install and manage plug-ins for the OpenShift CLI
+:FeatureName: Using Krew to install and manage plugins for the OpenShift CLI
include::snippets/technology-preview.adoc[]
-// Installing a CLI plug-in with Krew
+// Installing a CLI plugin with Krew
include::modules/cli-krew-install-plugin.adoc[leveloffset=+1]
-// Updating a CLI plug-in with Krew
+// Updating a CLI plugin with Krew
include::modules/cli-krew-update-plugin.adoc[leveloffset=+1]
-// Removing a CLI plug-in with Krew
+// Removing a CLI plugin with Krew
include::modules/cli-krew-remove-plugin.adoc[leveloffset=+1]
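+
+Behind the scenes, Krew resolves each plugin from a manifest in a plugin index. The following is a rough sketch of such a manifest, assuming the upstream Krew `v1alpha2` manifest format; all values are hypothetical:
+
+[source,yaml]
+----
+apiVersion: krew.googlecontainertools.github.com/v1alpha2
+kind: Plugin
+metadata:
+  name: example-plugin                    # hypothetical plugin name
+spec:
+  version: v0.1.0
+  shortDescription: An example oc plugin  # hypothetical description
+  platforms:
+  - selector:
+      matchLabels:
+        os: linux
+        arch: amd64
+    uri: https://example.com/example-plugin.tar.gz  # hypothetical archive URL
+    sha256: "<archive_checksum>"                    # checksum of the archive
+    files:
+    - from: "*"
+      to: "."
+    bin: oc-example                                 # executable that is placed on the PATH
+----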
[role="_additional-resources"]
@@ -25,4 +25,4 @@ include::modules/cli-krew-remove-plugin.adoc[leveloffset=+1]
== Additional resources
* link:https://krew.sigs.k8s.io/[Krew]
-* xref:../../cli_reference/openshift_cli/extending-cli-plugins.adoc#cli-extend-plugins[Extending the OpenShift CLI with plug-ins]
+* xref:../../cli_reference/openshift_cli/extending-cli-plugins.adoc#cli-extend-plugins[Extending the OpenShift CLI with plugins]
diff --git a/cli_reference/opm/cli-opm-ref.adoc b/cli_reference/opm/cli-opm-ref.adoc
index 917d421563..d06f904595 100644
--- a/cli_reference/opm/cli-opm-ref.adoc
+++ b/cli_reference/opm/cli-opm-ref.adoc
@@ -38,4 +38,4 @@ include::modules/opm-cli-ref-index.adoc[leveloffset=+1]
* xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format]
* xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs]
-* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in]
+* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
diff --git a/installing/disconnected_install/index.adoc b/installing/disconnected_install/index.adoc
index e859d01df7..b00aa205c2 100644
--- a/installing/disconnected_install/index.adoc
+++ b/installing/disconnected_install/index.adoc
@@ -19,4 +19,4 @@ If you already have a container image registry, such as Red Hat Quay, you can us
You can use one of the following procedures to mirror your {product-title} image repository to your mirror registry:
* xref:../../installing/disconnected_install/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[Mirroring images for a disconnected installation]
-* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in]
+* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
diff --git a/installing/disconnected_install/installing-mirroring-disconnected.adoc b/installing/disconnected_install/installing-mirroring-disconnected.adoc
index 8e38ca2d06..6c50f1d633 100644
--- a/installing/disconnected_install/installing-mirroring-disconnected.adoc
+++ b/installing/disconnected_install/installing-mirroring-disconnected.adoc
@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="installing-mirroring-disconnected"]
-= Mirroring images for a disconnected installation using the oc-mirror plug-in
+= Mirroring images for a disconnected installation using the oc-mirror plugin
include::_attributes/common-attributes.adoc[]
:context: installing-mirroring-disconnected
@@ -8,9 +8,9 @@ toc::[]
You can run your cluster in a restricted network without direct internet connectivity by installing the cluster from a mirrored set of {product-title} container images in a private registry. This registry must be running at all times as long as the cluster is running. See the xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#prerequisites_installing-mirroring-disconnected[Prerequisites] section for more information.
-You can use the oc-mirror OpenShift CLI (`oc`) plug-in to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity in order to download the required images from the official Red Hat registries.
+You can use the oc-mirror OpenShift CLI (`oc`) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity to download the required images from the official Red Hat registries.
-The following steps outline the high-level workflow on how to use the oc-mirror plug-in to mirror images to a mirror registry:
+The following steps outline the high-level workflow on how to use the oc-mirror plugin to mirror images to a mirror registry:
. Create an image set configuration file (see the example after these steps).
. Mirror the image set to the mirror registry by using one of the following methods:
@@ -19,7 +19,7 @@ The following steps outline the high-level workflow on how to use the oc-mirror
. Install the `ImageContentSourcePolicy` and `CatalogSource` resources that were generated by oc-mirror into the cluster.
. Repeat these steps to update your mirror registry as necessary.
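+
+As a rough illustration of the first step, an image set configuration is a YAML file of kind `ImageSetConfiguration`. The registry URL and release channel below are hypothetical:
+
+[source,yaml]
+----
+apiVersion: mirror.openshift.io/v1alpha2
+kind: ImageSetConfiguration
+storageConfig:
+  registry:
+    imageURL: mirror.example.com/oc-mirror-metadata  # hypothetical storage location for mirroring metadata
+mirror:
+  platform:
+    channels:
+    - name: stable-4.11   # hypothetical release channel to mirror
+      type: ocp
+----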
-// About the oc-mirror plug-in
+// About the oc-mirror plugin
include::modules/oc-mirror-about.adoc[leveloffset=+1]
// oc-mirror compatibility and support
@@ -40,7 +40,7 @@ include::modules/installation-about-mirror-registry.adoc[leveloffset=+1]
+
[NOTE]
====
-If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plug-in. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay link:https://access.redhat.com/documentation/en-us/red_hat_quay/3.6/html/deploy_red_hat_quay_for_proof-of-concept_non-production_purposes/[for proof-of-concept purposes] or link:https://access.redhat.com/documentation/en-us/red_hat_quay/3.6/html/deploy_red_hat_quay_on_openshift_with_the_quay_operator/[by using the Quay Operator]. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support.
+If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay link:https://access.redhat.com/documentation/en-us/red_hat_quay/3.6/html/deploy_red_hat_quay_for_proof-of-concept_non-production_purposes/[for proof-of-concept purposes] or link:https://access.redhat.com/documentation/en-us/red_hat_quay/3.6/html/deploy_red_hat_quay_on_openshift_with_the_quay_operator/[by using the Quay Operator]. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support.
====
+
If you do not already have an existing solution for a container image registry, subscribers of {product-title} are provided a xref:../../installing/disconnected_install/installing-mirroring-creating-registry.adoc#installing-mirroring-creating-registry[mirror registry for Red Hat OpenShift]. The _mirror registry for Red Hat OpenShift_ is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of {product-title} in disconnected installations.
@@ -48,15 +48,15 @@ If you do not already have an existing solution for a container image registry,
[id="mirroring-preparing-your-hosts"]
== Preparing your mirror hosts
-Before you can use the oc-mirror plug-in to mirror images, you must install the plug-in and create a container image registry credentials file to allow the mirroring from Red Hat to your mirror.
+Before you can use the oc-mirror plugin to mirror images, you must install the plugin and create a container image registry credentials file to allow the mirroring from Red Hat to your mirror.
-// Installing the oc-mirror OpenShift CLI plug-in
+// Installing the oc-mirror OpenShift CLI plugin
include::modules/oc-mirror-installing-plugin.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
-* xref:../../cli_reference/openshift_cli/extending-cli-plugins.adoc#cli-installing-plugins_cli-extend-plugins[Installing and using CLI plug-ins]
+* xref:../../cli_reference/openshift_cli/extending-cli-plugins.adoc#cli-installing-plugins_cli-extend-plugins[Installing and using CLI plugins]
// Configuring credentials that allow images to be mirrored
include::modules/installation-adding-registry-pull-secret.adoc[leveloffset=+2]
@@ -74,7 +74,7 @@ include::modules/oc-mirror-creating-image-set-config.adoc[leveloffset=+1]
[id="mirroring-image-set"]
== Mirroring an image set to a mirror registry
-You can use the oc-mirror CLI plug-in to mirror images to a mirror registry in a xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#mirroring-image-set-partial[partially disconnected environment] or in a xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#mirroring-image-set-full[fully disconnected environment].
+You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#mirroring-image-set-partial[partially disconnected environment] or in a xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#mirroring-image-set-full[fully disconnected environment].
These procedures assume that you already have your mirror registry set up.
diff --git a/installing/installing_with_agent_based_installer/understanding-disconnected-installation-mirroring.adoc b/installing/installing_with_agent_based_installer/understanding-disconnected-installation-mirroring.adoc
index 54ea0b498d..23eed63aaa 100644
--- a/installing/installing_with_agent_based_installer/understanding-disconnected-installation-mirroring.adoc
+++ b/installing/installing_with_agent_based_installer/understanding-disconnected-installation-mirroring.adoc
@@ -15,6 +15,6 @@ You can use a mirror registry for disconnected installations and to ensure that
You can use one of the following procedures to mirror your {product-title} image repository to your mirror registry:
* xref:../../installing/disconnected_install/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[Mirroring images for a disconnected installation]
-* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in]
+* xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
include::modules/agent-install-about-mirroring-for-disconnected-registry.adoc[leveloffset=+1]
diff --git a/logging/cluster-logging-deploying.adoc b/logging/cluster-logging-deploying.adoc
index beb4680818..8dc7284a90 100644
--- a/logging/cluster-logging-deploying.adoc
+++ b/logging/cluster-logging-deploying.adoc
@@ -31,7 +31,7 @@ include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]
If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.
-If your network plug-in enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
+If your network plugin enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
@@ -40,7 +40,7 @@ include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.
-If your network plug-in enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
+If your network plugin enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
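+
+For instance, with a network plugin that supports Kubernetes network policy, a hedged sketch of a `NetworkPolicy` that admits traffic from the `openshift-logging` namespace might look like the following; the policy name and target namespace are illustrative:
+
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-from-openshift-logging   # hypothetical name
+  namespace: openshift-operators-redhat
+spec:
+  podSelector: {}                      # applies to all pods in this namespace
+  ingress:
+  - from:
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: openshift-logging
+----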
include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+2]
@@ -50,8 +50,8 @@ include::modules/cluster-logging-deploy-multitenant.adoc[leveloffset=+2]
.Additional resources
* xref:../networking/network_policy/about-network-policy.adoc[About network policy]
-* xref:../networking/openshift_sdn/about-openshift-sdn.adoc[About the OpenShift SDN network plug-in]
-* xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc[About the OVN-Kubernetes network plug-in]
+* xref:../networking/openshift_sdn/about-openshift-sdn.adoc[About the OpenShift SDN network plugin]
+* xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc[About the OVN-Kubernetes network plugin]
// include::modules/cluster-logging-deploy-memory.adoc[leveloffset=+1]
diff --git a/logging/cluster-logging-release-notes.adoc b/logging/cluster-logging-release-notes.adoc
index 234d42f6f6..f4f8e4ab43 100644
--- a/logging/cluster-logging-release-notes.adoc
+++ b/logging/cluster-logging-release-notes.adoc
@@ -25,7 +25,7 @@ This release includes link:https://access.redhat.com/errata/RHSA-2022:6344[OpenS
[id="openshift-logging-5-5-1-enhancements_{context}"]
=== Enhancements
-* This enhancement adds an *Aggregated Logs* tab to the *Pod Details* page of the {product-title} web console when the Logging Console Plug-in is in use. This enhancement is only available on {product-title} 4.10 and later. (link:https://issues.redhat.com/browse/LOG-2647[LOG-2647])
+* This enhancement adds an *Aggregated Logs* tab to the *Pod Details* page of the {product-title} web console when the Logging Console Plugin is in use. This enhancement is only available on {product-title} 4.10 and later. (link:https://issues.redhat.com/browse/LOG-2647[LOG-2647])
* This enhancement adds Google Cloud Logging as an output option for log forwarding. (link:https://issues.redhat.com/browse/LOG-1482[LOG-1482])
//xref:cluster-logging-collector-log-forward-gcp.adoc
@@ -156,7 +156,7 @@ include::modules/cluster-logging-loki-tech-preview.adoc[leveloffset=+2]
* Before this update, the logging console link in the OpenShift web console was not removed when the `ClusterLogging` CR was deleted. With this update, deleting the CR or uninstalling the Cluster Logging Operator removes the link. (link:https://issues.redhat.com/browse/LOG-2373[LOG-2373])
-* Before this update, a change to the container logs path caused the collection metric to always be zero with older releases configured with the original path. With this update, the plug-in which exposes metrics about collected logs supports reading from either path to resolve the issue. (link:https://issues.redhat.com/browse/LOG-2462[LOG-2462])
+* Before this update, a change to the container logs path caused the collection metric to always be zero with older releases configured with the original path. With this update, the plugin that exposes metrics about collected logs supports reading from either path to resolve the issue. (link:https://issues.redhat.com/browse/LOG-2462[LOG-2462])
=== CVEs
[id="openshift-logging-5-4-0-CVEs_{context}"]
@@ -352,7 +352,7 @@ This release includes link:https://access.redhat.com/errata/RHSA-2021:5129[RHSA-
* Before this update, the Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the Logging dashboard displays CPU graphs correctly. (link:https://issues.redhat.com/browse/LOG-1925[LOG-1925])
-* Before this update, the Elasticsearch Prometheus exporter plug-in compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (link:https://issues.redhat.com/browse/LOG-1897[LOG-1897])
+* Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (link:https://issues.redhat.com/browse/LOG-1897[LOG-1897])
[id="openshift-logging-5-3-1-CVEs_{context}"]
@@ -747,7 +747,7 @@ This release includes link:https://access.redhat.com/errata/RHSA-2021:5127[RHSA-
* Before this update, records shipped via syslog would serialize a ruby hash encoding key/value pairs to contain a '=>' character, as well as replace tabs with "#11". This update serializes the message correctly as proper JSON. (link:https://issues.redhat.com/browse/LOG-1775[LOG-1775])
-* Before this update, the Elasticsearch Prometheus exporter plug-in compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (link:https://issues.redhat.com/browse/LOG-1970[LOG-1970])
+* Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. (link:https://issues.redhat.com/browse/LOG-1970[LOG-1970])
* Before this update, Elasticsearch sometimes rejected messages when Log Forwarding was configured with multiple outputs. This happened because configuring one of the outputs modified message content to be a single message. With this update, Log Forwarding duplicates the messages for each output so that output-specific processing does not affect the other outputs. (link:https://issues.redhat.com/browse/LOG-1824[LOG-1824])
@@ -973,15 +973,15 @@ To see these metrics, open the *Administrator* perspective in the {product-title
* Before this update, the `kibana-proxy` pod sometimes entered the `CrashLoopBackoff` state and logged the following message `Invalid configuration: cookie_secret must be 16, 24, or 32 bytes to create an AES cipher when pass_access_token == true or cookie_refresh != 0, but is 29 bytes.` The actual number of bytes could vary. With this update, the generation of the Kibana session secret has been corrected, and the kibana-proxy pod no longer enters a `CrashLoopBackoff` state due to this error. (link:https://issues.redhat.com/browse/LOG-1446[LOG-1446])
-* Before this update, the AWS CloudWatch Fluentd plug-in logged its AWS API calls to the Fluentd log at all log levels, consuming additional {product-title} node resources. With this update, the AWS CloudWatch Fluentd plug-in logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. (link:https://issues.redhat.com/browse/LOG-1071[LOG-1071])
+* Before this update, the AWS CloudWatch Fluentd plugin logged its AWS API calls to the Fluentd log at all log levels, consuming additional {product-title} node resources. With this update, the AWS CloudWatch Fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. (link:https://issues.redhat.com/browse/LOG-1071[LOG-1071])
-* Before this update, the Elasticsearch OpenDistro security plug-in caused user index migrations to fail. This update resolves the issue by providing a newer version of the plug-in. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1276[LOG-1276])
+* Before this update, the Elasticsearch OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1276[LOG-1276])
* Before this update, in the *Logging* dashboard in the {product-title} console, the list of top 10 log-producing containers lacked data points. This update resolves the issue, and the dashboard displays all data points. (link:https://issues.redhat.com/browse/LOG-1353[LOG-1353])
* Before this update, if you were tuning the performance of the Fluentd log forwarder by adjusting the `chunkLimitSize` and `totalLimitSize` values, the `Setting queued_chunks_limit_size for each buffer to` message reported values that were too low. The current update fixes this issue so that this message reports the correct values. (link:https://issues.redhat.com/browse/LOG-1411[LOG-1411])
-* Before this update, the Kibana OpenDistro security plug-in caused user index migrations to fail. This update resolves the issue by providing a newer version of the plug-in. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1558[LOG-1558])
+* Before this update, the Kibana OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1558[LOG-1558])
* Before this update, using a namespace input filter prevented logs in that namespace from appearing in other inputs. With this update, logs are sent to all inputs that can accept them. (link:https://issues.redhat.com/browse/LOG-1570[LOG-1570])
diff --git a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc
index 29c391dcbe..f0599dfa6e 100644
--- a/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc
+++ b/migrating_from_ocp_3_to_4/planning-migration-3-4.adoc
@@ -125,7 +125,7 @@ For more information, see xref:../storage/understanding-persistent-storage.adoc#
[discrete]
==== Migration of in-tree volumes to CSI drivers
-{product-title} 4 is migrating in-tree volume plug-ins to their Container Storage Interface (CSI) counterparts. In {product-title} {product-version}, CSI drivers are the new default for the following in-tree volume types:
+{product-title} 4 is migrating in-tree volume plugins to their Container Storage Interface (CSI) counterparts. In {product-title} {product-version}, CSI drivers are the new default for the following in-tree volume types:
* Amazon Web Services (AWS) Elastic Block Storage (EBS)
* Azure Disk
@@ -151,11 +151,11 @@ If your {product-title} 3.11 cluster used the `ovs-subnet` or `ovs-multitenant`
For more information, see xref:../networking/network_policy/about-network-policy.adoc#about-network-policy[About network policy].
[discrete]
-==== OVN-Kubernetes as the default networking plug-in in Red Hat OpenShift Networking
+==== OVN-Kubernetes as the default networking plugin in Red Hat OpenShift Networking
-In {product-title} 3.11, OpenShift SDN was the default networking plug-in in Red Hat OpenShift Networking. In {product-title} {product-version}, OVN-Kubernetes is now the default networking plug-in.
+In {product-title} 3.11, OpenShift SDN was the default networking plugin in Red Hat OpenShift Networking. In {product-title} {product-version}, OVN-Kubernetes is now the default networking plugin.
-For information on migrating to OVN-Kubernetes from OpenShift SDN, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from the OpenShift SDN network plug-in].
+For information on migrating to OVN-Kubernetes from OpenShift SDN, see xref:../networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc#migrate-from-openshift-sdn[Migrating from the OpenShift SDN network plugin].
[id="migration-preparing-logging"]
=== Logging considerations
diff --git a/modules/about-must-gather.adoc b/modules/about-must-gather.adoc
index a98a1efa84..1a53afa2e0 100644
--- a/modules/about-must-gather.adoc
+++ b/modules/about-must-gather.adoc
@@ -17,7 +17,7 @@ The `oc adm must-gather` CLI command collects the information from your cluster
* Resource definitions
* Service logs
-By default, the `oc adm must-gather` command uses the default plug-in image and writes into `./must-gather.local`.
+By default, the `oc adm must-gather` command uses the default plugin image and writes into `./must-gather.local`.
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
diff --git a/modules/about-toolbox.adoc b/modules/about-toolbox.adoc
index 28efb61ce5..486ab0e695 100644
--- a/modules/about-toolbox.adoc
+++ b/modules/about-toolbox.adoc
@@ -7,11 +7,11 @@
= About `toolbox`
ifndef::openshift-origin[]
-`toolbox` is a tool that starts a container on a {op-system-first} system. The tool is primarily used to start a container that includes the required binaries and plug-ins that are needed to run commands such as `sosreport` and `redhat-support-tool`.
+`toolbox` is a tool that starts a container on a {op-system-first} system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as `sosreport` and `redhat-support-tool`.
The primary purpose for a `toolbox` container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.
endif::openshift-origin[]
ifdef::openshift-origin[]
-`toolbox` is a tool that starts a container on a {op-system-first} system. The tool is primarily used to start a container that includes the required binaries and plug-ins that are needed to run your favorite debugging or admin tools.
+`toolbox` is a tool that starts a container on a {op-system-first} system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run your favorite debugging or admin tools.
endif::openshift-origin[]
diff --git a/modules/adding-tab-pods-page.adoc b/modules/adding-tab-pods-page.adoc
index bd640e44d1..124c06141d 100644
--- a/modules/adding-tab-pods-page.adoc
+++ b/modules/adding-tab-pods-page.adoc
@@ -6,29 +6,29 @@
[id="adding-tab-to-pods-page_{context}"]
= Adding a tab to the pods page
-There are different customizations you can make to the {product-title} web console. The following procedure adds a tab to the *Pod details* page as an example extension to your plug-in.
+There are different customizations you can make to the {product-title} web console. The following procedure adds a tab to the *Pod details* page as an example extension to your plugin.
[NOTE]
====
-The {product-title} web console runs in a container connected to the cluster you have logged into. See "Dynamic plug-in development" for information to test the plug-in before creating your own.
+The {product-title} web console runs in a container connected to the cluster you have logged into. See "Dynamic plugin development" for information about how to test the plugin before creating your own.
====
.Procedure
-. Visit the link:https://github.com/openshift/console-plugin-template[`console-plugin-template`] repository containing a template for creating plug-ins in a new tab.
+. Visit the link:https://github.com/openshift/console-plugin-template[`console-plugin-template`] repository containing a template for creating plugins in a new tab.
+
[IMPORTANT]
====
-Custom plug-in code is not supported by Red Hat. Only link:https://access.redhat.com/solutions/5893251[Cooperative community support] is available for your plug-in.
+Custom plugin code is not supported by Red Hat. Only link:https://access.redhat.com/solutions/5893251[Cooperative community support] is available for your plugin.
====
. Click the *Use this template* dropdown button and select *_Create new repository_* from the list to create a GitHub repository.
-. Re-name the new repository with the name of your plug-in.
+. Rename the new repository with the name of your plugin.
. Clone your copied repository to your local machine so you can edit the code.
-. Edit the plug-in metadata in the `consolePlugin` declaration of `package.json`.
+. Edit the plugin metadata in the `consolePlugin` declaration of `package.json`.
+
[source,json]
----
@@ -45,10 +45,10 @@ Custom plug-in code is not supported by Red Hat. Only link:https://access.redhat
}
}
----
-<1> Update the name of your plug-in.
+<1> Update the name of your plugin.
<2> Update the version.
-<3> Update the display name for your plug-in.
-<4> Update the description with a synopsis about your plug-in.
+<3> Update the display name for your plugin.
+<4> Update the description with a synopsis about your plugin.
. Add the following to the `console-extensions.json` file:
+
@@ -92,7 +92,7 @@ import * as React from 'react';
export default function ExampleTab() {
return (
-      <p>This is a custom tab added to a resource using a dynamic plug-in.</p>
+      <p>This is a custom tab added to a resource using a dynamic plugin.</p>
);
}
----
diff --git a/modules/admission-plug-ins-about.adoc b/modules/admission-plug-ins-about.adoc
index 8b3695ef84..0616ddd904 100644
--- a/modules/admission-plug-ins-about.adoc
+++ b/modules/admission-plug-ins-about.adoc
@@ -4,19 +4,19 @@
:_content-type: CONCEPT
[id="admission-plug-ins-about_{context}"]
-= About admission plug-ins
+= About admission plugins
-Admission plug-ins are used to help regulate how {product-title} {product-version} functions. Admission plug-ins intercept requests to the master API to validate resource requests and ensure policies are adhered to, after the request is authenticated and authorized. For example, they are commonly used to enforce security policy, resource limitations or configuration requirements.
+Admission plugins help regulate how {product-title} {product-version} functions. After a request is authenticated and authorized, admission plugins intercept it at the master API to validate the resource request and to ensure that policies are adhered to. For example, they are commonly used to enforce security policies, resource limitations, or configuration requirements.
-Admission plug-ins run in sequence as an admission chain. If any admission plug-in in the sequence rejects a request, the whole chain is aborted and an error is returned.
+Admission plugins run in sequence as an admission chain. If any admission plugin in the sequence rejects a request, the whole chain is aborted and an error is returned.
-{product-title} has a default set of admission plug-ins enabled for each resource type. These are required for proper functioning of the cluster. Admission plug-ins ignore resources that they are not responsible for.
+{product-title} has a default set of admission plugins enabled for each resource type. These are required for proper functioning of the cluster. Admission plugins ignore resources that they are not responsible for.
-In addition to the defaults, the admission chain can be extended dynamically through webhook admission plug-ins that call out to custom webhook servers. There are two types of webhook admission plug-ins: a mutating admission plug-in and a validating admission plug-in. The mutating admission plug-in runs first and can both modify resources and validate requests. The validating admission plug-in validates requests and runs after the mutating admission plug-in so that modifications triggered by the mutating admission plug-in can also be validated.
+In addition to the defaults, the admission chain can be extended dynamically through webhook admission plugins that call out to custom webhook servers. There are two types of webhook admission plugins: a mutating admission plugin and a validating admission plugin. The mutating admission plugin runs first and can both modify resources and validate requests. The validating admission plugin validates requests and runs after the mutating admission plugin so that modifications triggered by the mutating admission plugin can also be validated.
-Calling webhook servers through a mutating admission plug-in can produce side effects on resources related to the target object. In such situations, you must take steps to validate that the end result is as expected.
+Calling webhook servers through a mutating admission plugin can produce side effects on resources related to the target object. In such situations, you must take steps to validate that the end result is as expected.
[WARNING]
====
-Dynamic admission should be used cautiously because it impacts cluster control plane operations. When calling webhook servers through webhook admission plug-ins in {product-title} {product-version}, ensure that you have read the documentation fully and tested for side effects of mutations. Include steps to restore resources back to their original state prior to mutation, in the event that a request does not pass through the entire admission chain.
+Dynamic admission should be used cautiously because it impacts cluster control plane operations. When calling webhook servers through webhook admission plugins in {product-title} {product-version}, ensure that you have read the documentation fully and tested for side effects of mutations. Include steps to restore resources back to their original state prior to mutation, in the event that a request does not pass through the entire admission chain.
====
diff --git a/modules/admission-plug-ins-default.adoc b/modules/admission-plug-ins-default.adoc
index 683705dfc9..bf460f87a5 100644
--- a/modules/admission-plug-ins-default.adoc
+++ b/modules/admission-plug-ins-default.adoc
@@ -3,12 +3,12 @@
// * architecture/admission-plug-ins.adoc
[id="admission-plug-ins-default_{context}"]
-= Default admission plug-ins
+= Default admission plugins
-//Future xref - A set of default admission plug-ins is enabled in {product-title} {product-version}. These default plug-ins contribute to fundamental control plane functionality, such as ingress policy, xref:../nodes/clusters/nodes-cluster-overcommit.adoc#nodes-cluster-resource-override_nodes-cluster-overcommit[cluster resource limit override] and quota policy.
-Default validating and admission plug-ins are enabled in {product-title} {product-version}. These default plug-ins contribute to fundamental control plane functionality, such as ingress policy, cluster resource limit override and quota policy. The following lists contain the default admission plug-ins:
+//Future xref - A set of default admission plugins is enabled in {product-title} {product-version}. These default plugins contribute to fundamental control plane functionality, such as ingress policy, xref:../nodes/clusters/nodes-cluster-overcommit.adoc#nodes-cluster-resource-override_nodes-cluster-overcommit[cluster resource limit override] and quota policy.
+Default validating and mutating admission plugins are enabled in {product-title} {product-version}. These default plugins contribute to fundamental control plane functionality, such as ingress policy, cluster resource limit override, and quota policy. The following lists contain the default admission plugins:
-.Validating admission plug-ins
+.Validating admission plugins
[%collapsible]
====
* `LimitRanger`
@@ -52,7 +52,7 @@ Default validating and admission plug-ins are enabled in {product-title} {produc
====
-.Mutating admission plug-ins
+.Mutating admission plugins
[%collapsible]
====
* `NamespaceLifecycle`
diff --git a/modules/admission-webhook-types.adoc b/modules/admission-webhook-types.adoc
index 7785119f75..b00f98cb27 100644
--- a/modules/admission-webhook-types.adoc
+++ b/modules/admission-webhook-types.adoc
@@ -3,17 +3,17 @@
// * architecture/admission-plug-ins.adoc
[id="admission-webhook-types_{context}"]
-= Types of webhook admission plug-ins
+= Types of webhook admission plugins
-Cluster administrators can call out to webhook servers through the mutating admission plug-in or the validating admission plug-in in the API server admission chain.
+Cluster administrators can call out to webhook servers through the mutating admission plugin or the validating admission plugin in the API server admission chain.
[id="mutating-admission-plug-in_{context}"]
-== Mutating admission plug-in
+== Mutating admission plugin
-The mutating admission plug-in is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plug-in is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification.
+The mutating admission plugin is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plugin is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification.
[id="mutating-admission-plug-in-config_{context}"]
-.Sample mutating admission plug-in configuration
+.Sample mutating admission plugin configuration
[source,yaml]
----
@@ -42,7 +42,7 @@ webhooks:
sideEffects: None
----
-<1> Specifies a mutating admission plug-in configuration.
+<1> Specifies a mutating admission plugin configuration.
<2> The name for the `MutatingWebhookConfiguration` object. Replace `<webhook_name>` with the appropriate value.
<3> The name of the webhook to call. Replace `<webhook_name>` with the appropriate value.
<4> Information about how to connect to, trust, and send data to the webhook server.
@@ -50,24 +50,24 @@ webhooks:
<6> The name of the front-end service.
<7> The webhook URL used for admission requests. Replace `<webhook_url>` with the appropriate value.
<8> A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace `<ca_signing_certificate>` with the appropriate certificate in base64 format.
-<9> Rules that define when the API server should use this webhook admission plug-in.
-<10> One or more operations that trigger the API server to call this webhook admission plug-in. Possible values are `create`, `update`, `delete` or `connect`. Replace `<operation>` and `<resource>` with the appropriate values.
+<9> Rules that define when the API server should use this webhook admission plugin.
+<10> One or more operations that trigger the API server to call this webhook admission plugin. Possible values are `create`, `update`, `delete`, or `connect`. Replace `<operation>` and `<resource>` with the appropriate values.
<11> Specifies how the policy should proceed if the webhook server is unavailable.
Replace `<policy>` with either `Ignore` (to unconditionally accept the request in the event of a failure) or `Fail` (to deny the failed request). Using `Ignore` can result in unpredictable behavior for all clients.
[IMPORTANT]
====
-In {product-title} {product-version}, objects created by users or control loops through a mutating admission plug-in might return unexpected results, especially if values set in an initial request are overwritten, which is not recommended.
+In {product-title} {product-version}, objects created by users or control loops through a mutating admission plugin might return unexpected results, especially if values set in an initial request are overwritten, which is not recommended.
====
[id="validating-admission-plug-in_{context}"]
-== Validating admission plug-in
+== Validating admission plugin
-A validating admission plug-in is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook which is called by the validating admission plug-in, to ensure that all `nodeSelector` fields are constrained by the node selector restrictions on the namespace.
+A validating admission plugin is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook that is called by the validating admission plugin to ensure that all `nodeSelector` fields are constrained by the node selector restrictions on the namespace.
[id="validating-admission-plug-in-config_{context}"]
//http://blog.kubernetes.io/2018/01/extensible-admission-is-beta.html
-.Sample validating admission plug-in configuration
+.Sample validating admission plugin configuration
[source,yaml]
----
@@ -96,7 +96,7 @@ webhooks:
sideEffects: Unknown
----
-<1> Specifies a validating admission plug-in configuration.
+<1> Specifies a validating admission plugin configuration.
<2> The name for the `ValidatingWebhookConfiguration` object. Replace `<webhook_name>` with the appropriate value.
<3> The name of the webhook to call. Replace `<webhook_name>` with the appropriate value.
<4> Information about how to connect to, trust, and send data to the webhook server.
@@ -104,7 +104,7 @@ webhooks:
<6> The name of the front-end service.
<7> The webhook URL used for admission requests. Replace `<webhook_url>` with the appropriate value.
<8> A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace `<ca_signing_certificate>` with the appropriate certificate in base64 format.
-<9> Rules that define when the API server should use this webhook admission plug-in.
-<10> One or more operations that trigger the API server to call this webhook admission plug-in. Possible values are `create`, `update`, `delete` or `connect`. Replace `<operation>` and `<resource>` with the appropriate values.
+<9> Rules that define when the API server should use this webhook admission plugin.
+<10> One or more operations that trigger the API server to call this webhook admission plugin. Possible values are `create`, `update`, `delete`, or `connect`. Replace `<operation>` and `<resource>` with the appropriate values.
<11> Specifies how the policy should proceed if the webhook server is unavailable.
Replace `<policy>` with either `Ignore` (to unconditionally accept the request in the event of a failure) or `Fail` (to deny the failed request). Using `Ignore` can result in unpredictable behavior for all clients.
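+
+As a compact, hedged sketch of how these callouts fit together, a `ValidatingWebhookConfiguration` in the `admissionregistration.k8s.io/v1` API might look like the following; all names, the rule, and the path are hypothetical:
+
+[source,yaml]
+----
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+  name: example-validating-webhook      # hypothetical object name
+webhooks:
+- name: validate.example.com            # hypothetical webhook name
+  clientConfig:
+    service:
+      namespace: webhook-ns             # hypothetical front-end service namespace
+      name: webhook-service             # hypothetical front-end service name
+      path: /validate                   # hypothetical endpoint for admission requests
+    caBundle: <ca_signing_certificate>  # base64-encoded PEM CA certificate
+  rules:
+  - operations: ["CREATE", "UPDATE"]
+    apiGroups: ["apps"]
+    apiVersions: ["v1"]
+    resources: ["deployments"]
+  failurePolicy: Fail                   # deny requests if the webhook server is unavailable
+  sideEffects: None
+  admissionReviewVersions: ["v1"]
+----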
diff --git a/modules/admission-webhooks-about.adoc b/modules/admission-webhooks-about.adoc
index 24ad3650bb..b92f845098 100644
--- a/modules/admission-webhooks-about.adoc
+++ b/modules/admission-webhooks-about.adoc
@@ -3,19 +3,19 @@
// * architecture/admission-plug-ins.adoc
[id="admission-webhooks-about_{context}"]
-= Webhook admission plug-ins
+= Webhook admission plugins
-In addition to {product-title} default admission plug-ins, dynamic admission can be implemented through webhook admission plug-ins that call webhook servers, to extend the functionality of the admission chain. Webhook servers are called over HTTP at defined endpoints.
+In addition to the {product-title} default admission plugins, dynamic admission can be implemented through webhook admission plugins that call webhook servers to extend the functionality of the admission chain. Webhook servers are called over HTTP at defined endpoints.
-There are two types of webhook admission plug-ins in {product-title}:
+There are two types of webhook admission plugins in {product-title}:
-//Future xref - * During the admission process, xref:../architecture/admission-plug-ins.adoc#mutating-admission-plug-in[the mutating admission plug-in] can perform tasks, such as injecting affinity labels.
-* During the admission process, the _mutating admission plug-in_ can perform tasks, such as injecting affinity labels.
+//Future xref - * During the admission process, xref:../architecture/admission-plug-ins.adoc#mutating-admission-plug-in[the mutating admission plugin] can perform tasks, such as injecting affinity labels.
+* During the admission process, the _mutating admission plugin_ can perform tasks, such as injecting affinity labels.
-//Future xref - * At the end of the admission process, xref:../architecture/admission-plug-ins.adoc#validating-admission-plug-in[the validating admission plug-in] makes sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, {product-title} schedules the object as configured.
-* At the end of the admission process, the _validating admission plug-in_ can be used to make sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, {product-title} schedules the object as configured.
+//Future xref - * At the end of the admission process, xref:../architecture/admission-plug-ins.adoc#validating-admission-plug-in[the validating admission plugin] makes sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, {product-title} schedules the object as configured.
+* At the end of the admission process, the _validating admission plugin_ can be used to make sure an object is configured properly, for example, by ensuring that affinity labels are as expected. If the validation passes, {product-title} schedules the object as configured.
-When an API request comes in, mutating or validating admission plug-ins use the list of external webhooks in the configuration and call them in parallel:
+When an API request comes in, mutating or validating admission plugins use the list of external webhooks in the configuration and call them in parallel:
* If all of the webhooks approve the request, the admission chain continues.
@@ -25,22 +25,22 @@ When an API request comes in, mutating or validating admission plug-ins use the
* If an error is encountered when calling a webhook, the request is either denied or the webhook is ignored depending on the error policy set. If the error policy is set to `Ignore`, the request is unconditionally accepted in the event of a failure. If the policy is set to `Fail`, failed requests are denied. Using `Ignore` can result in unpredictable behavior for all clients.
-//Future xrefs - Communication between the webhook admission plug-in and the webhook server must use TLS. Generate a certificate authority (CA) certificate and use the certificate to sign the server certificate that is used by your webhook server. The PEM-encoded CA certificate is supplied to the webhook admission plug-in using a mechanism, such as xref:../security/certificates/service-serving-certificate.adoc#service-serving-certificate[service serving certificate secrets].
-Communication between the webhook admission plug-in and the webhook server must use TLS. Generate a CA certificate and use the certificate to sign the server certificate that is used by your webhook admission server. The PEM-encoded CA certificate is supplied to the webhook admission plug-in using a mechanism, such as service serving certificate secrets.
+//Future xrefs - Communication between the webhook admission plugin and the webhook server must use TLS. Generate a certificate authority (CA) certificate and use the certificate to sign the server certificate that is used by your webhook server. The PEM-encoded CA certificate is supplied to the webhook admission plugin using a mechanism, such as xref:../security/certificates/service-serving-certificate.adoc#service-serving-certificate[service serving certificate secrets].
+Communication between the webhook admission plugin and the webhook server must use TLS. Generate a CA certificate and use the certificate to sign the server certificate that is used by your webhook admission server. The PEM-encoded CA certificate is supplied to the webhook admission plugin using a mechanism, such as service serving certificate secrets.
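As a sketch of the contract between the API server and a webhook server: the server receives an `AdmissionReview` object over the TLS connection and must return one that carries a `response` stanza, for example:

[source,yaml]
----
apiVersion: admission.k8s.io/v1beta1
kind: AdmissionReview
response:
  uid: <uid_from_request> # echo back the uid of the incoming request
  allowed: true           # set to false to deny the request
  status:                 # optional; explains a denial to the client
    message: "reason for the denial"
----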
The following diagram illustrates the sequential admission chain process within which multiple webhook servers are called.
-.API admission chain with mutating and validating admission plug-ins
+.API admission chain with mutating and validating admission plugins
image::api-admission-chain.png["API admission stage", align="center"]
-An example webhook admission plug-in use case is where all pods must have a common set of labels. In this example, the mutating admission plug-in can inject labels and the validating admission plug-in can check that labels are as expected. {product-title} would subsequently schedule pods that include required labels and reject those that do not.
+One example use case for webhook admission plugins is requiring that all pods share a common set of labels. In this example, the mutating admission plugin can inject labels and the validating admission plugin can check that labels are as expected. {product-title} would subsequently schedule pods that include required labels and reject those that do not.
-Some common webhook admission plug-in use cases include:
+Some common webhook admission plugin use cases include:
//Future xref - * Namespace reservation.
* Namespace reservation.
-//Future xrefs - * :../networking/hardware_networks/configuring-sriov-operator.adoc#configuring-sriov-operator[Limiting custom network resources managed by the SR-IOV network device plug-in].
-* Limiting custom network resources managed by the SR-IOV network device plug-in.
+//Future xrefs - * :../networking/hardware_networks/configuring-sriov-operator.adoc#configuring-sriov-operator[Limiting custom network resources managed by the SR-IOV network device plugin].
+* Limiting custom network resources managed by the SR-IOV network device plugin.
//Future xref - * xref:../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations_dedicating_nodes-scheduler-taints-tolerations[Defining tolerations that enable taints to qualify which pods should be scheduled on a node].
* Defining tolerations that enable taints to qualify which pods should be scheduled on a node.
//Future xref - * xref:../nodes/pods/nodes-pods-priority.adoc#admin-guide-priority-preemption-names_nodes-pods-priority[Pod priority class validation].
diff --git a/modules/api-compatibility-common-terminology.adoc b/modules/api-compatibility-common-terminology.adoc
index d33a61ea65..9862ede14c 100644
--- a/modules/api-compatibility-common-terminology.adoc
+++ b/modules/api-compatibility-common-terminology.adoc
@@ -15,7 +15,7 @@ An API is a public interface implemented by a software program that enables it t
An AOE is the integrated environment that executes the end-user application program. The AOE is a containerized environment that provides isolation from the host operating system (OS). At a minimum, AOE allows the application to run in an isolated manner from the host OS libraries and binaries, but still share the same OS kernel as all other containers on the host. The AOE is enforced at runtime and it describes the interface between an application and its operating environment. It includes intersection points between the platform, operating system and environment, with the user application including projection of downward API, DNS, resource accounting, device access, platform workload identity, isolation among containers, isolation between containers and host OS.
-The AOE does not include components that might vary by installation, such as Container Network Interface (CNI) plug-in selection or extensions to the product such as admission hooks. Components that integrate with the cluster at a level below the container environment might be subjected to additional variation between versions.
+The AOE does not include components that might vary by installation, such as Container Network Interface (CNI) plugin selection or extensions to the product such as admission hooks. Components that integrate with the cluster at a level below the container environment might be subjected to additional variation between versions.
[id="api-compatibility-common-terminology-virtualized_{context}"]
== Compatibility in a virtualized environment
diff --git a/modules/available-persistent-storage-options.adoc b/modules/available-persistent-storage-options.adoc
index f7a06872fd..717a224858 100644
--- a/modules/available-persistent-storage-options.adoc
+++ b/modules/available-persistent-storage-options.adoc
@@ -40,7 +40,7 @@ a| * Accessible through a REST API endpoint
|===
[.small]
--
-1. NetApp NFS supports dynamic PV provisioning when using the Trident plug-in.
+1. NetApp NFS supports dynamic PV provisioning when using the Trident plugin.
--
[IMPORTANT]
diff --git a/modules/build-image-docker.adoc b/modules/build-image-docker.adoc
index 2b9a7ba004..a3d38fb6b2 100644
--- a/modules/build-image-docker.adoc
+++ b/modules/build-image-docker.adoc
@@ -6,7 +6,7 @@
[id="build-image-with-docker_{context}"]
= Build an image with Docker
-To deploy your plug-in on a cluster, you need to build an image and push it to an image registry.
+To deploy your plugin on a cluster, you need to build an image and push it to an image registry.
.Procedure
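As a minimal sketch, assuming a hypothetical image name and registry:

[source,terminal]
----
$ docker build -t quay.io/example/my-plugin:latest .
$ docker push quay.io/example/my-plugin:latest
----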
diff --git a/modules/builds-secrets-overview.adoc b/modules/builds-secrets-overview.adoc
index bc7bb95fa5..41795716e3 100644
--- a/modules/builds-secrets-overview.adoc
+++ b/modules/builds-secrets-overview.adoc
@@ -4,7 +4,7 @@
[id="builds-secrets-overview_{context}"]
= What is a secret?
-The `Secret` object type provides a mechanism to hold sensitive information such as passwords, {product-title} client configuration files, `dockercfg` files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plug-in or the system can use secrets to perform actions on behalf of a pod.
+The `Secret` object type provides a mechanism to hold sensitive information such as passwords, {product-title} client configuration files, `dockercfg` files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod.
.YAML Secret Object Definition
diff --git a/modules/builds-strategy-pipeline-build.adoc b/modules/builds-strategy-pipeline-build.adoc
index 4364e52a6a..a1caef9af1 100644
--- a/modules/builds-strategy-pipeline-build.adoc
+++ b/modules/builds-strategy-pipeline-build.adoc
@@ -13,7 +13,7 @@ The Pipeline build strategy is deprecated in {product-title} 4. Equivalent and i
Jenkins images on {product-title} are fully supported and users should follow Jenkins user documentation for defining their `jenkinsfile` in a job or store it in a Source Control Management system.
====
-The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plug-in. The build can be started, monitored, and managed by {product-title} in the same way as any other build type.
+The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by {product-title} in the same way as any other build type.
Pipeline workflows are defined in a `jenkinsfile`, either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
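A rough sketch of the embedded form, with a hypothetical name and a trivial stage:

[source,yaml]
----
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: sample-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage('build') {
            echo 'building the application'
          }
        }
----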
diff --git a/modules/builds-understanding-openshift-pipeline.adoc b/modules/builds-understanding-openshift-pipeline.adoc
index 2646520887..af71e4a98f 100644
--- a/modules/builds-understanding-openshift-pipeline.adoc
+++ b/modules/builds-understanding-openshift-pipeline.adoc
@@ -12,24 +12,24 @@ The Pipeline build strategy is deprecated in {product-title} 4. Equivalent and i
Jenkins images on {product-title} are fully supported and users should follow Jenkins user documentation for defining their `jenkinsfile` in a job or store it in a Source Control Management system.
====
-Pipelines give you control over building, deploying, and promoting your applications on {product-title}. Using a combination of the Jenkins Pipeline build strategy, `jenkinsfiles`, and the {product-title} Domain Specific Language (DSL) provided by the Jenkins Client Plug-in, you can create advanced build, test, deploy, and promote pipelines for any scenario.
+Pipelines give you control over building, deploying, and promoting your applications on {product-title}. Using a combination of the Jenkins Pipeline build strategy, `jenkinsfiles`, and the {product-title} Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario.
-*{product-title} Jenkins Sync Plug-in*
+*{product-title} Jenkins Sync Plugin*
-The {product-title} Jenkins Sync Plug-in keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:
+The {product-title} Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following:
* Dynamic job and run creation in Jenkins.
* Dynamic creation of agent pod templates from image streams, image stream tags, or config maps.
* Injection of environment variables.
* Pipeline visualization in the {product-title} web console.
- * Integration with the Jenkins Git plug-in, which passes commit information from {product-title} builds to the Jenkins Git plug-in.
+ * Integration with the Jenkins Git plugin, which passes commit information from {product-title} builds to the Jenkins Git plugin.
* Synchronization of secrets into Jenkins credential entries.
-*{product-title} Jenkins Client Plug-in*
+*{product-title} Jenkins Client Plugin*
-The {product-title} Jenkins Client Plug-in is a Jenkins plug-in which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an {product-title} API Server. The plug-in uses the {product-title} command line tool, `oc`, which must be available on the nodes executing the script.
+The {product-title} Jenkins Client Plugin is a Jenkins plugin that provides a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an {product-title} API server. The plugin uses the {product-title} command-line tool, `oc`, which must be available on the nodes executing the script.
-The Jenkins Client Plug-in must be installed on your Jenkins master so the {product-title} DSL will be available to use within the `jenkinsfile` for your application. This plug-in is installed and enabled by default when using the {product-title} Jenkins image.
+The Jenkins Client Plugin must be installed on your Jenkins master so that the {product-title} DSL is available for use within the `jenkinsfile` for your application. This plugin is installed and enabled by default when using the {product-title} Jenkins image.
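A minimal sketch of the DSL inside a `jenkinsfile`, assuming the plugin is available:

[source,groovy]
----
node {
  stage('verify') {
    openshift.withCluster() {   // connect to the cluster that Jenkins runs in
      openshift.withProject() { // default to the current project
        echo "Using project: ${openshift.project()}"
      }
    }
  }
}
----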
For {product-title} Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a `jenkinsfile` at the root of your source repository, but also provides the following configuration options:
diff --git a/modules/builds-using-proxy-git-cloning.adoc b/modules/builds-using-proxy-git-cloning.adoc
index 2f76998bc9..56c89d8be2 100644
--- a/modules/builds-using-proxy-git-cloning.adoc
+++ b/modules/builds-using-proxy-git-cloning.adoc
@@ -25,7 +25,7 @@ source:
[NOTE]
====
-For Pipeline strategy builds, given the current restrictions with the Git plug-in for Jenkins, any Git operations through the Git plug-in do not leverage the HTTP or HTTPS proxy defined in the `BuildConfig`. The Git plug-in only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs.
+For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the `BuildConfig`. The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs.
====
[role="_additional-resources"]
diff --git a/modules/cli-extending-plugins-installing.adoc b/modules/cli-extending-plugins-installing.adoc
index d444533c4a..33e91a5e37 100644
--- a/modules/cli-extending-plugins-installing.adoc
+++ b/modules/cli-extending-plugins-installing.adoc
@@ -4,19 +4,19 @@
:_content-type: PROCEDURE
[id="cli-installing-plugins_{context}"]
-= Installing and using CLI plug-ins
+= Installing and using CLI plugins
-After you write a custom plug-in for the {product-title} CLI, you must install
+After you write a custom plugin for the {product-title} CLI, you must install
it to use the functionality that it provides.
.Prerequisites
* You must have the `oc` CLI tool installed.
-* You must have a CLI plug-in file that begins with `oc-` or `kubectl-`.
+* You must have a CLI plugin file that begins with `oc-` or `kubectl-`.
.Procedure
-. If necessary, update the plug-in file to be executable.
+. If necessary, update the plugin file to be executable.
+
[source,terminal]
----
@@ -28,7 +28,7 @@ $ chmod +x
----
$ sudo mv /usr/local/bin/.
----
-. Run `oc plugin list` to make sure that the plug-in is listed.
+. Run `oc plugin list` to make sure that the plugin is listed.
+
[source,terminal]
----
@@ -38,17 +38,17 @@ $ oc plugin list
.Example output
[source,terminal]
----
-The following compatible plug-ins are available:
+The following compatible plugins are available:
/usr/local/bin/
----
+
-If your plug-in is not listed here, verify that the file begins with `oc-`
+If your plugin is not listed here, verify that the file begins with `oc-`
or `kubectl-`, is executable, and is on your `PATH`.
-. Invoke the new command or option introduced by the plug-in.
+. Invoke the new command or option introduced by the plugin.
+
-For example, if you built and installed the `kubectl-ns` plug-in from the
- link:https://github.com/kubernetes/sample-cli-plugin[Sample plug-in repository],
+For example, if you built and installed the `kubectl-ns` plugin from the
+ link:https://github.com/kubernetes/sample-cli-plugin[Sample plugin repository],
you can use the following command to view the current namespace.
+
[source,terminal]
@@ -56,6 +56,6 @@ For example, if you built and installed the `kubectl-ns` plug-in from the
$ oc ns
----
+
-Note that the command to invoke the plug-in depends on the plug-in file name.
-For example, a plug-in with the file name of `oc-foo-bar` is invoked by the `oc foo bar`
+Note that the command to invoke the plugin depends on the plugin file name.
+For example, a plugin with the file name of `oc-foo-bar` is invoked by the `oc foo bar`
command.
diff --git a/modules/cli-extending-plugins-writing.adoc b/modules/cli-extending-plugins-writing.adoc
index 93c9a028a2..a71a89e167 100644
--- a/modules/cli-extending-plugins-writing.adoc
+++ b/modules/cli-extending-plugins-writing.adoc
@@ -4,27 +4,27 @@
:_content-type: PROCEDURE
[id="cli-writing-plugins_{context}"]
-= Writing CLI plug-ins
+= Writing CLI plugins
-You can write a plug-in for the {product-title} CLI in any programming language
+You can write a plugin for the {product-title} CLI in any programming language
or script that allows you to write command-line commands. Note that you cannot
-use a plug-in to overwrite an existing `oc` command.
+use a plugin to overwrite an existing `oc` command.
.Procedure
-This procedure creates a simple Bash plug-in that prints a message to the
+This procedure creates a simple Bash plugin that prints a message to the
terminal when the `oc foo` command is issued.
. Create a file called `oc-foo`.
+
-When naming your plug-in file, keep the following in mind:
+When naming your plugin file, keep the following in mind:
* The file must begin with `oc-` or `kubectl-` to be recognized as a
-plug-in.
-* The file name determines the command that invokes the plug-in. For example, a
-plug-in with the file name `oc-foo-bar` can be invoked by a command of
+plugin.
+* The file name determines the command that invokes the plugin. For example, a
+plugin with the file name `oc-foo-bar` can be invoked by a command of
`oc foo bar`. You can also use underscores if you want the command to contain
-dashes. For example, a plug-in with the file name `oc-foo_bar` can be invoked
+dashes. For example, a plugin with the file name `oc-foo_bar` can be invoked
by a command of `oc foo-bar`.
. Add the following contents to the file.
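A minimal sketch consistent with the fragment shown in the next hunk (the closing `fi` and the final `echo`):

[source,bash]
----
#!/bin/bash

# optional argument handling
if [[ "$1" == "version" ]]
then
    echo "1.0.0"
    exit 0
fi

echo "I am a plugin named kubectl-foo"
----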
@@ -50,12 +50,12 @@ fi
echo "I am a plugin named kubectl-foo"
----
-After you install this plug-in for the {product-title} CLI, it can be invoked
+After you install this plugin for the {product-title} CLI, it can be invoked
using the `oc foo` command.
[role="_additional-resources"]
.Additional resources
-* Review the link:https://github.com/kubernetes/sample-cli-plugin[Sample plug-in repository]
-for an example of a plug-in written in Go.
-* Review the link:https://github.com/kubernetes/cli-runtime/[CLI runtime repository] for a set of utilities to assist in writing plug-ins in Go.
+* Review the link:https://github.com/kubernetes/sample-cli-plugin[Sample plugin repository]
+for an example of a plugin written in Go.
+* Review the link:https://github.com/kubernetes/cli-runtime/[CLI runtime repository] for a set of utilities to assist in writing plugins in Go.
diff --git a/modules/cli-krew-install-plugin.adoc b/modules/cli-krew-install-plugin.adoc
index bff8cd4310..e35186d4b4 100644
--- a/modules/cli-krew-install-plugin.adoc
+++ b/modules/cli-krew-install-plugin.adoc
@@ -4,9 +4,9 @@
:_content-type: PROCEDURE
[id="cli-krew-install-plugin_{context}"]
-= Installing a CLI plug-in with Krew
+= Installing a CLI plugin with Krew
-You can install a plug-in for the OpenShift CLI (`oc`) with Krew.
+You can install a plugin for the OpenShift CLI (`oc`) with Krew.
.Prerequisites
@@ -14,28 +14,28 @@ You can install a plug-in for the OpenShift CLI (`oc`) with Krew.
.Procedure
-. To list all available plug-ins, run the following command:
+. To list all available plugins, run the following command:
+
[source,terminal]
----
$ oc krew search
----
-. To get information about a plug-in, run the following command:
+. To get information about a plugin, run the following command:
+
[source,terminal]
----
$ oc krew info
----
-. To install a plug-in, run the following command:
+. To install a plugin, run the following command:
+
[source,terminal]
----
$ oc krew install
----
-. To list all plug-ins that were installed by Krew, run the following command:
+. To list all plugins that were installed by Krew, run the following command:
+
[source,terminal]
----
diff --git a/modules/cli-krew-remove-plugin.adoc b/modules/cli-krew-remove-plugin.adoc
index 22ba9f133a..5c43e2a6b4 100644
--- a/modules/cli-krew-remove-plugin.adoc
+++ b/modules/cli-krew-remove-plugin.adoc
@@ -4,18 +4,18 @@
:_content-type: PROCEDURE
[id="cli-krew-remove-plugin_{context}"]
-= Uninstalling a CLI plug-in with Krew
+= Uninstalling a CLI plugin with Krew
-You can uninstall a plug-in that was installed for the OpenShift CLI (`oc`) with Krew.
+You can uninstall a plugin that was installed for the OpenShift CLI (`oc`) with Krew.
.Prerequisites
* You have installed Krew by following the link:https://krew.sigs.k8s.io/docs/user-guide/setup/install/[installation procedure] in the Krew documentation.
-* You have installed a plug-in for the OpenShift CLI with Krew.
+* You have installed a plugin for the OpenShift CLI with Krew.
.Procedure
-* To uninstall a plug-in, run the following command:
+* To uninstall a plugin, run the following command:
+
[source,terminal]
----
diff --git a/modules/cli-krew-update-plugin.adoc b/modules/cli-krew-update-plugin.adoc
index e4e86db64a..ddaf47993c 100644
--- a/modules/cli-krew-update-plugin.adoc
+++ b/modules/cli-krew-update-plugin.adoc
@@ -4,25 +4,25 @@
:_content-type: PROCEDURE
[id="cli-krew-update-plugin_{context}"]
-= Updating a CLI plug-in with Krew
+= Updating a CLI plugin with Krew
-You can update a plug-in that was installed for the OpenShift CLI (`oc`) with Krew.
+You can update a plugin that was installed for the OpenShift CLI (`oc`) with Krew.
.Prerequisites
* You have installed Krew by following the link:https://krew.sigs.k8s.io/docs/user-guide/setup/install/[installation procedure] in the Krew documentation.
-* You have installed a plug-in for the OpenShift CLI with Krew.
+* You have installed a plugin for the OpenShift CLI with Krew.
.Procedure
-* To update a single plug-in, run the following command:
+* To update a single plugin, run the following command:
+
[source,terminal]
----
$ oc krew upgrade
----
-* To update all plug-ins that were installed by Krew, run the following command:
+* To update all plugins that were installed by Krew, run the following command:
+
[source,terminal]
----
diff --git a/modules/cluster-dns-operator.adoc b/modules/cluster-dns-operator.adoc
index 85f6a758be..fa88268d5f 100644
--- a/modules/cluster-dns-operator.adoc
+++ b/modules/cluster-dns-operator.adoc
@@ -13,7 +13,7 @@ The DNS Operator deploys and manages CoreDNS to provide a name resolution servic
The Operator creates a working default deployment based on the cluster's configuration.
* The default cluster domain is `cluster.local`.
-* Configuration of the CoreDNS Corefile or Kubernetes plug-in is not yet supported.
+* Configuration of the CoreDNS Corefile or Kubernetes plugin is not yet supported.
The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster.
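As a quick check, you can list the CoreDNS pods that the daemon set manages; each node should show one `dns-default` pod:

[source,terminal]
----
$ oc get pods -n openshift-dns
----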
diff --git a/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc b/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc
index 15db554991..39c1385383 100644
--- a/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc
+++ b/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc
@@ -42,7 +42,7 @@ kafka 2.7.0
// Note to tech writer, validate these items against the corresponding line of the test configuration file that Red Hat OpenShift Logging 5.0 uses: https://github.com/openshift/origin-aggregated-logging/blob/release-5.0/fluentd/Gemfile.lock
// This file is the authoritative source of information about which items and versions Red Hat tests and supports.
-// According to this link:https://github.com/zendesk/ruby-kafka#compatibility[Zendesk compatibility list for ruby-kafka], the fluent-plugin-kafka plug-in supports Kafka version 0.11.
+// According to this link:https://github.com/zendesk/ruby-kafka#compatibility[Zendesk compatibility list for ruby-kafka], the fluent-plugin-kafka plugin supports Kafka version 0.11.
// Logstash support is according to https://github.com/openshift/cluster-logging-operator/blob/master/test/functional/outputs/forward_to_logstash_test.go#L37
[NOTE]
diff --git a/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc b/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc
index b443463e64..de95c54793 100644
--- a/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc
+++ b/modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc
@@ -51,7 +51,7 @@ kafka 2.7.0
// Note to tech writer, validate these items against the corresponding line of the test configuration file that Red Hat OpenShift Logging 5.0 uses: https://github.com/openshift/origin-aggregated-logging/blob/release-5.0/fluentd/Gemfile.lock
// This file is the authoritative source of information about which items and versions Red Hat tests and supports.
-// According to this link:https://github.com/zendesk/ruby-kafka#compatibility[Zendesk compatibility list for ruby-kafka], the fluent-plugin-kafka plug-in supports Kafka version 0.11.
+// According to this link:https://github.com/zendesk/ruby-kafka#compatibility[Zendesk compatibility list for ruby-kafka], the fluent-plugin-kafka plugin supports Kafka version 0.11.
// Logstash support is according to https://github.com/openshift/cluster-logging-operator/blob/master/test/functional/outputs/forward_to_logstash_test.go#L37
[NOTE]
diff --git a/modules/cluster-logging-deploy-multitenant.adoc b/modules/cluster-logging-deploy-multitenant.adoc
index b619edf21f..e079f9df21 100644
--- a/modules/cluster-logging-deploy-multitenant.adoc
+++ b/modules/cluster-logging-deploy-multitenant.adoc
@@ -6,11 +6,11 @@
[id="cluster-logging-deploy-multitenant_{context}"]
= Allowing traffic between projects when network isolation is enabled
-Your cluster network plug-in might enforce network isolation. If so, you must allow network traffic between the projects that contain the operators deployed by OpenShift Logging.
+Your cluster network plugin might enforce network isolation. If so, you must allow network traffic between the projects that contain the operators deployed by OpenShift Logging.
Network isolation blocks network traffic between pods or services that are in different projects. The {logging} installs the _OpenShift Elasticsearch Operator_ in the `openshift-operators-redhat` project and the _Red Hat OpenShift Logging Operator_ in the `openshift-logging` project. Therefore, you must allow traffic between these two projects.
-{product-title} offers two supported choices for the network plug-in, OpenShift SDN and OVN-Kubernetes. These two providers implement various network isolation policies.
+{product-title} offers two supported choices for the network plugin, OpenShift SDN and OVN-Kubernetes. These two providers implement various network isolation policies.
OpenShift SDN has three modes:
diff --git a/modules/cluster-logging-loki-deploy.adoc b/modules/cluster-logging-loki-deploy.adoc
index 013857d55f..bdf4a0e3fd 100644
--- a/modules/cluster-logging-loki-deploy.adoc
+++ b/modules/cluster-logging-loki-deploy.adoc
@@ -127,5 +127,5 @@ oc apply -f cr-lokistack.yaml
[NOTE]
====
-This plug-in is only available on {product-title} 4.10 and later.
+This plugin is only available on {product-title} 4.10 and later.
====
diff --git a/modules/cluster-logging-troubleshooting-unknown.adoc b/modules/cluster-logging-troubleshooting-unknown.adoc
index c1f3a57a49..92e173aa7c 100644
--- a/modules/cluster-logging-troubleshooting-unknown.adoc
+++ b/modules/cluster-logging-troubleshooting-unknown.adoc
@@ -7,7 +7,7 @@
If you are attempting to use a F-5 load balancer in front of Kibana with
`X-Forwarded-For` enabled, this can cause an issue in which the Elasticsearch
-`Searchguard` plug-in is unable to correctly accept connections from Kibana.
+`Searchguard` plugin is unable to correctly accept connections from Kibana.
.Example Kibana Error Message
----
diff --git a/modules/cnf-about-virtual-routing-and-forwarding.adoc b/modules/cnf-about-virtual-routing-and-forwarding.adoc
index 55f632695a..aec1274e41 100644
--- a/modules/cnf-about-virtual-routing-and-forwarding.adoc
+++ b/modules/cnf-about-virtual-routing-and-forwarding.adoc
@@ -13,4 +13,4 @@ Processes can bind a socket to the VRF device. Packets through the binded socket
[id="cnf-benefits-secondary-networks-telecommunications-operators_{context}"]
== Benefits of secondary networks for pods for telecommunications operators
-In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plug-in, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses are overlapped with {product-title} IP space. The CNI VRF plug-in also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks.
+In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses can overlap with the {product-title} IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks.
diff --git a/modules/cnf-assigning-a-secondary-network-to-a-vrf.adoc b/modules/cnf-assigning-a-secondary-network-to-a-vrf.adoc
index 3bbb0007cb..a767203461 100644
--- a/modules/cnf-assigning-a-secondary-network-to-a-vrf.adoc
+++ b/modules/cnf-assigning-a-secondary-network-to-a-vrf.adoc
@@ -7,7 +7,7 @@
[id="cnf-assigning-a-secondary-network-to-a-vrf_{context}"]
= Assigning a secondary network to a VRF
-As a cluster administrator, you can configure an additional network for your VRF domain by using the CNI VRF plug-in. The virtual network created by this plug-in is associated with a physical interface that you specify.
+As a cluster administrator, you can configure an additional network for your VRF domain by using the CNI VRF plugin. The virtual network created by this plugin is associated with a physical interface that you specify.
[NOTE]
====
@@ -17,7 +17,7 @@ Using a VRF through the `ip vrf exec` command is not supported in {product-title
====
[id="cnf-creating-an-additional-network-attachment-with-the-cni-vrf-plug-in_{context}"]
-== Creating an additional network attachment with the CNI VRF plug-in
+== Creating an additional network attachment with the CNI VRF plugin
The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the `NetworkAttachmentDefinition` custom resource (CR) automatically.
@@ -26,7 +26,7 @@ The Cluster Network Operator (CNO) manages additional network definitions. When
Do not edit the `NetworkAttachmentDefinition` CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.
====
-To create an additional network attachment with the CNI VRF plug-in, perform the following procedure.
+To create an additional network attachment with the CNI VRF plugin, perform the following procedure.
.Prerequisites
@@ -141,4 +141,3 @@ $ ip link
----
5: net1: mtu 1500 qdisc noqueue master red state UP mode
----
-
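As a rough sketch, the CNI configuration chains the `vrf` plugin after the main interface plugin; the `vrfname` matches the `master red` device shown in the verification output above, while the interface and address are hypothetical:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: additional-network-attachment
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-vrf",
      "plugins": [
        {
          "type": "macvlan",
          "master": "eth1",
          "ipam": {
            "type": "static",
            "addresses": [ { "address": "192.168.1.23/24" } ]
          }
        },
        {
          "type": "vrf",
          "vrfname": "red"
        }
      ]
    }
----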
diff --git a/modules/cnf-assigning-a-sriov-network-to-a-vrf.adoc b/modules/cnf-assigning-a-sriov-network-to-a-vrf.adoc
index 75b50daa4e..209f39f079 100644
--- a/modules/cnf-assigning-a-sriov-network-to-a-vrf.adoc
+++ b/modules/cnf-assigning-a-sriov-network-to-a-vrf.adoc
@@ -6,7 +6,7 @@
[id="cnf-assigning-a-sriov-network-to-a-vrf_{context}"]
= Assigning an SR-IOV network to a VRF
-As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plug-in.
+As a cluster administrator, you can assign an SR-IOV network interface to your VRF domain by using the CNI VRF plugin.
To do this, add the VRF configuration to the optional `metaPlugins` parameter of the `SriovNetwork` resource.
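A rough sketch of that addition, with hypothetical network and VRF names:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-network
  namespace: openshift-sriov-network-operator
spec:
  networkNamespace: default
  resourceName: exampleResource
  metaPlugins: |
    {
      "type": "vrf",
      "vrfname": "example-vrf"
    }
----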
@@ -18,7 +18,7 @@ Using a VRF through the `ip vrf exec` command is not supported in {product-title
====
[id="cnf-creating-an-additional-sriov-network-with-vrf-plug-in_{context}"]
-== Creating an additional SR-IOV network attachment with the CNI VRF plug-in
+== Creating an additional SR-IOV network attachment with the CNI VRF plugin
The SR-IOV Network Operator manages additional network definitions. When you specify an additional SR-IOV network to create, the SR-IOV Network Operator creates the `NetworkAttachmentDefinition` custom resource (CR) automatically.
@@ -27,7 +27,7 @@ The SR-IOV Network Operator manages additional network definitions. When you spe
Do not edit `NetworkAttachmentDefinition` custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network.
====
-To create an additional SR-IOV network attachment with the CNI VRF plug-in, perform the following procedure.
+To create an additional SR-IOV network attachment with the CNI VRF plugin, perform the following procedure.
.Prerequisites
diff --git a/modules/cnf-provisioning-deploying-a-distributed-unit-manually.adoc b/modules/cnf-provisioning-deploying-a-distributed-unit-manually.adoc
index 128e8768f8..d5f5e5493e 100644
--- a/modules/cnf-provisioning-deploying-a-distributed-unit-manually.adoc
+++ b/modules/cnf-provisioning-deploying-a-distributed-unit-manually.adoc
@@ -109,4 +109,4 @@ The `SriovNetworkNodePolicy` object must be configured differently for different
In addition, when configuring the `nicSelector`, the `pfNames` value must match the intended interface name on the specific host.
If there is a mixed cluster where some of the nodes are deployed with Intel NICs and some with Mellanox, several SR-IOV configurations can be
-created with the same `resourceName`. The device plug-in will discover only the available ones and will put the capacity on the node accordingly.
+created with the same `resourceName`. The device plugin will discover only the available ones and will put the capacity on the node accordingly.
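As a sketch of the per-host matching described above, with hypothetical interface and resource names:

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-intel-du
  namespace: openshift-sriov-network-operator
spec:
  resourceName: du_fh
  numVfs: 8
  deviceType: netdevice
  nicSelector:
    pfNames:
      - ens2f0 # must match the intended interface name on this host
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----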
diff --git a/modules/configuring-dynamic-admission.adoc b/modules/configuring-dynamic-admission.adoc
index 9ecca34b58..9dccc271ee 100644
--- a/modules/configuring-dynamic-admission.adoc
+++ b/modules/configuring-dynamic-admission.adoc
@@ -6,7 +6,7 @@
[id="configuring-dynamic-admission_{context}"]
= Configuring dynamic admission
-This procedure outlines high-level steps to configure dynamic admission. The functionality of the admission chain is extended by configuring a webhook admission plug-in to call out to a webhook server.
+This procedure outlines high-level steps to configure dynamic admission. The functionality of the admission chain is extended by configuring a webhook admission plugin to call out to a webhook server.
The webhook server is also configured as an aggregated API server. This allows other {product-title} components to communicate with the webhook using internal credentials and facilitates testing using the `oc` command. Additionally, this enables role based access control (RBAC) into the webhook and prevents token information from other API servers from being disclosed to the webhook.
@@ -335,7 +335,7 @@ spec:
$ oc apply -f webhook-api-service.yaml
----
-. Define the webhook admission plug-in configuration within a file called `webhook-config.yaml`. This example uses the validating admission plug-in:
+. Define the webhook admission plugin configuration within a file called `webhook-config.yaml`. This example uses the validating admission plugin:
+
[source,yaml]
----
diff --git a/modules/configuring-huge-pages.adoc b/modules/configuring-huge-pages.adoc
index 0cdfccc9f8..2776130af1 100644
--- a/modules/configuring-huge-pages.adoc
+++ b/modules/configuring-huge-pages.adoc
@@ -96,7 +96,7 @@ $ oc get node -o jsonpath="{.status.allocatable.hugepages
ifndef::openshift-origin[]
[WARNING]
====
-The TuneD bootloader plug-in is currently supported on {op-system-first} 8.x worker nodes. For {op-system-base-full} 7.x worker nodes, the TuneD bootloader plug-in is currently not supported.
+The TuneD bootloader plugin is currently supported on {op-system-first} 8.x worker nodes. For {op-system-base-full} 7.x worker nodes, the TuneD bootloader plugin is currently not supported.
====
endif::openshift-origin[]
diff --git a/modules/consuming-huge-pages-resource-using-the-downward-api.adoc b/modules/consuming-huge-pages-resource-using-the-downward-api.adoc
index f630436ebe..37b68a02f9 100644
--- a/modules/consuming-huge-pages-resource-using-the-downward-api.adoc
+++ b/modules/consuming-huge-pages-resource-using-the-downward-api.adoc
@@ -10,7 +10,7 @@
You can use the Downward API to inject information about the huge pages resources that are consumed by a container.
-You can inject the resource allocation as environment variables, a volume plug-in, or both. Applications that you develop and run in the container can determine the resources that are available by reading the environment variables or files in the specified volumes.
+You can inject the resource allocation as environment variables, a volume plugin, or both. Applications that you develop and run in the container can determine the resources that are available by reading the environment variables or files in the specified volumes.
.Procedure
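A minimal sketch of the environment-variable form, assuming a hypothetical container named `app`:

[source,yaml]
----
env:
- name: REQUESTS_HUGEPAGES_2MI
  valueFrom:
    resourceFieldRef:
      containerName: app # hypothetical container name
      resource: requests.hugepages-2Mi
      divisor: 1Mi       # report the value in units of Mi
----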
diff --git a/modules/deployment-plug-in-cluster.adoc b/modules/deployment-plug-in-cluster.adoc
index 47d57fe8dd..72674888ca 100644
--- a/modules/deployment-plug-in-cluster.adoc
+++ b/modules/deployment-plug-in-cluster.adoc
@@ -4,13 +4,13 @@
:_content-type: PROCEDURE
[id="deploy-on-cluster_{context}"]
-= Deploy your plug-in on a cluster
+= Deploy your plugin on a cluster
-After pushing an image with your changes to a registry, you can deploy the plug-in to a cluster.
+After pushing an image with your changes to a registry, you can deploy the plugin to a cluster.
.Procedure
-. To deploy your plug-in to a cluster, install a Helm chart with the name of the plug-in as the Helm release name into a new namespace or an existing namespace as specified by the `-n` command-line option. Provide the location of the image within the `plugin.image` parameter by using the following command:
+. To deploy your plugin to a cluster, install a Helm chart with the name of the plugin as the Helm release name into a new namespace or an existing namespace as specified by the `-n` command-line option. Provide the location of the image within the `plugin.image` parameter by using the following command:
+
[source,terminal]
@@ -21,7 +21,7 @@ $ helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namesp
Where:
+
--
-`n `:: Specifies an existing namespace to deploy your plug-in into.
+`-n `:: Specifies an existing namespace to deploy your plugin into.
`--create-namespace`:: Optional: If deploying to a new namespace, use this parameter.
`--set plugin.image=my-plugin-image-location`:: Specifies the location of the image within the `plugin.image` parameter.
--
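Putting the options together, with a hypothetical image location:

[source,terminal]
----
$ helm upgrade -i my-plugin charts/openshift-console-plugin \
    -n my-plugin-namespace --create-namespace \
    --set plugin.image=quay.io/example/my-plugin:latest
----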
@@ -91,5 +91,5 @@ You can see the list of the enabled plugins on the *Overview* page or by navigat
[NOTE]
====
-It can take a few minutes for the new plug-in configuration to appear. If you do not see your plug-in, you might need to refresh your browser if the plugin was recently enabled. If you recieve any errors at runtime, check the JS console in browser developer tools to look for any errors in your plugin code.
-====
\ No newline at end of file
+It can take a few minutes for the new plugin configuration to appear. If you do not see your plugin, you might need to refresh your browser if the plugin was recently enabled. If you receive any errors at runtime, check the JS console in the browser developer tools to look for any errors in your plugin code.
+====
diff --git a/modules/developer-cli-odo-developer-setup.adoc b/modules/developer-cli-odo-developer-setup.adoc
index a4625f8f8c..6f12c5abc8 100644
--- a/modules/developer-cli-odo-developer-setup.adoc
+++ b/modules/developer-cli-odo-developer-setup.adoc
@@ -6,7 +6,7 @@
= Developer setup
-With {odo-title} you can create and deploy application on {product-title} clusters from a terminal. Code editor plug-ins use {odo-title} which allows users to interact with {product-title} clusters from their IDE terminals. Examples of plug-ins that use {odo-title}: VS Code OpenShift Connector, OpenShift Connector for Intellij, Codewind for Eclipse Che.
+With {odo-title} you can create and deploy applications on {product-title} clusters from a terminal. Code editor plugins use {odo-title}, which allows users to interact with {product-title} clusters from their IDE terminals. Examples of plugins that use {odo-title} include VS Code OpenShift Connector, OpenShift Connector for IntelliJ, and Codewind for Eclipse Che.
{odo-title} works on Windows, macOS, and Linux operating systems and from any terminal. {odo-title} provides autocompletion for bash and zsh command line shells.
diff --git a/modules/disabling-plug-in-browser.adoc b/modules/disabling-plug-in-browser.adoc
index 55a216527c..8d0f81be88 100644
--- a/modules/disabling-plug-in-browser.adoc
+++ b/modules/disabling-plug-in-browser.adoc
@@ -4,17 +4,17 @@
:_content-type: PROCEDURE
[id="disabling-your-plug-in-browser_{context}"]
-= Disabling your plug-in in the browser
+= Disabling your plugin in the browser
-Console users can use the `disable-plugins` query parameter to disable specific or all dynamic plug-ins that would normally get loaded at run-time.
+Console users can use the `disable-plugins` query parameter to disable specific or all dynamic plugins that would normally be loaded at runtime.
.Procedure
-* To disable a specific plug-in(s), remove the plug-in you want to disable from the comma-separated list of plug-in names.
+* To disable specific plugins, remove the names of the plugins that you want to disable from the comma-separated list of plugin names, as shown in the example after this list.
-* To disable all plug-ins, leave an empty string in the `disable-plugins` query parameter.
+* To disable all plugins, leave an empty string in the `disable-plugins` query parameter.
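For example, with a hypothetical console host and plugin names:

----
https://console-openshift-console.apps.example.com/?disable-plugins=plugin-a,plugin-b
----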
[NOTE]
====
-Cluster administrators can disable plug-ins in the *Cluster Settings* page of the web console
+Cluster administrators can disable plugins in the *Cluster Settings* page of the web console.
====
diff --git a/modules/dr-restoring-cluster-state.adoc b/modules/dr-restoring-cluster-state.adoc
index 9b8b281551..a0158de6b4 100644
--- a/modules/dr-restoring-cluster-state.adoc
+++ b/modules/dr-restoring-cluster-state.adoc
@@ -292,7 +292,7 @@ If the status is `Pending`, or the output lists more than one running etcd pod,
+
[NOTE]
====
-Perform the following step only if you are using `OVNKubernetes` network plug-in.
+Perform the following step only if you are using the `OVNKubernetes` network plugin.
====
+
. Restart the Open Virtual Network (OVN) Kubernetes pods on all the hosts.
diff --git a/modules/dynamic-plug-in-api.adoc b/modules/dynamic-plug-in-api.adoc
index 8e84752e77..cdb97cc96d 100644
--- a/modules/dynamic-plug-in-api.adoc
+++ b/modules/dynamic-plug-in-api.adoc
@@ -1467,4 +1467,4 @@ component could be unmounted. It returns an array with a pair of state value and
|===
|Parameter Name |Description
|`initialState` |initial state value
-|===
\ No newline at end of file
+|===
diff --git a/modules/dynamic-plug-in-development.adoc b/modules/dynamic-plug-in-development.adoc
index b1cc6b0abb..6329063fb6 100644
--- a/modules/dynamic-plug-in-development.adoc
+++ b/modules/dynamic-plug-in-development.adoc
@@ -4,9 +4,9 @@
:_content-type: PROCEDURE
[id="dynamic-plugin-development_{context}"]
-= Dynamic plug-in development
+= Dynamic plugin development
-You can run the plug-in using a local development environment. The {product-title} web console runs in a container connected to the cluster you have logged into.
+You can run the plugin using a local development environment. The {product-title} web console runs in a container connected to the cluster you have logged into.
.Prerequisites
* You must have an OpenShift cluster running.
@@ -16,7 +16,7 @@ You can run the plug-in using a local development environment. The {product-titl
.Procedure
-. In your terminal, run the following command to install the dependencies for your plug-in using yarn.
+. In your terminal, run the following command to install the dependencies for your plugin using yarn.
+
[source,terminal]
@@ -32,7 +32,7 @@ $ yarn install
$ yarn run start
----
-. In another terminal window, login to the {product-title} through the CLI.
+. In another terminal window, log in to {product-title} through the CLI.
+
[source,terminal]
----
@@ -47,4 +47,4 @@ $ yarn run start-console
----
.Verification
-* Visit link:http://localhost:9000/example[localhost:9000] to view the running plug-in. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plug-ins which load at runtime.
+* Visit link:http://localhost:9000/example[localhost:9000] to view the running plugin. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plugins which load at runtime.
diff --git a/modules/dynamic-plug-in-sdk-extensions.adoc b/modules/dynamic-plug-in-sdk-extensions.adoc
index 3946008514..ab0b404d7b 100644
--- a/modules/dynamic-plug-in-sdk-extensions.adoc
+++ b/modules/dynamic-plug-in-sdk-extensions.adoc
@@ -4,7 +4,7 @@
:_content-type: CONCEPT
[id="dynamic-plug-in-sdk-extensions_{context}"]
-= Dynamic plug-in extension types
+= Dynamic plugin extension types
[discrete]
== `console.action/filter`
diff --git a/modules/dynamic-provisioning-available-plugins.adoc b/modules/dynamic-provisioning-available-plugins.adoc
index b079a77450..740e9811a4 100644
--- a/modules/dynamic-provisioning-available-plugins.adoc
+++ b/modules/dynamic-provisioning-available-plugins.adoc
@@ -4,9 +4,9 @@
// * post_installation_configuration/storage-configuration.adoc
[id="available-plug-ins_{context}"]
-= Available dynamic provisioning plug-ins
+= Available dynamic provisioning plugins
-{product-title} provides the following provisioner plug-ins, which have
+{product-title} provides the following provisioner plugins, which have
generic implementations for dynamic provisioning that use the cluster's
configured provider's API to create new storage resources:
@@ -15,7 +15,7 @@ configured provider's API to create new storage resources:
|===
|Storage type
-|Provisioner plug-in name
+|Provisioner plugin name
|Notes
|{rh-openstack-first} Cinder
@@ -72,6 +72,6 @@ no node in the current cluster exists.
[IMPORTANT]
====
-Any chosen provisioner plug-in also requires configuration for the relevant
+Any chosen provisioner plugin also requires configuration for the relevant
cloud, host, or third-party provider as per the relevant documentation.
====
diff --git a/modules/dynamic-provisioning-aws-definition.adoc b/modules/dynamic-provisioning-aws-definition.adoc
index 473af778a6..e7818c0c91 100644
--- a/modules/dynamic-provisioning-aws-definition.adoc
+++ b/modules/dynamic-provisioning-aws-definition.adoc
@@ -27,7 +27,7 @@ See the
link:http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html[AWS documentation]
for valid Amazon Resource Name (ARN) values.
<3> Optional: Only for *io1* volumes. I/O operations per second per GiB.
-The AWS volume plug-in multiplies this with the size of the requested
+The AWS volume plugin multiplies this value by the size of the requested
volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which
is the maximum supported by AWS. See the
link:http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html[AWS documentation]
diff --git a/modules/dynamic-provisioning-gluster-definition.adoc b/modules/dynamic-provisioning-gluster-definition.adoc
index 0089db1522..51934a58e2 100644
--- a/modules/dynamic-provisioning-gluster-definition.adoc
+++ b/modules/dynamic-provisioning-gluster-definition.adoc
@@ -89,7 +89,7 @@ type: kubernetes.io/glusterfs
[NOTE]
====
-When the PVs are dynamically provisioned, the GlusterFS plug-in
+When the PVs are dynamically provisioned, the GlusterFS plugin
automatically creates an Endpoints and a headless Service named
`gluster-dynamic-`. When the PVC is deleted, these dynamic
resources are deleted automatically.
diff --git a/modules/enabling-multi-cluster-console.adoc b/modules/enabling-multi-cluster-console.adoc
index 201274077b..e8eb5c7bb5 100644
--- a/modules/enabling-multi-cluster-console.adoc
+++ b/modules/enabling-multi-cluster-console.adoc
@@ -26,9 +26,9 @@ Do not set this feature gate on production clusters. You will not be able to upg
. Enable ACM in the administrator perspective by navigating from *Administration* -> *Cluster Settings* -> *Configuration* -> *Console* `console.operator.openshift.io` -> *Console Plugins* and click *Enable* for `acm`.
-. A pop-up window will appear notifying you that updating the enablement of this console plug-in will prompt for the console to be refreshed once it has been updated. Select `Enable` and click *Save*.
+. A pop-up window appears, notifying you that updating the enablement of this console plugin requires a refresh of the console. Select `Enable` and click *Save*.
-. Repeat the previous two steps for the `mce` console plug-in immediately after enabling `acm`.
+. Repeat the previous two steps for the `mce` console plugin immediately after enabling `acm`.
. A pop-up window stating that a web console update is available appears a few moments after you enable the plugins. Click *Refresh the web console* in the pop-up window to update.
+
diff --git a/modules/enabling-plug-in-browser.adoc b/modules/enabling-plug-in-browser.adoc
index dc0972cdd0..a58ddc2c0c 100644
--- a/modules/enabling-plug-in-browser.adoc
+++ b/modules/enabling-plug-in-browser.adoc
@@ -4,8 +4,8 @@
:_content-type: PROCEDURE
[id="enable-plug-in-browser_{context}"]
-= Enable dynamic plug-ins in the web console
-Cluster administrators can enable plug-ins in the web console browser. Dynamic plug-ins are disabled by default. In order to enable, a cluster administrator will need to enable them in the `console-operator` configuration.
+= Enable dynamic plugins in the web console
+Cluster administrators can enable plugins in the web console browser. Dynamic plugins are disabled by default; to enable them, a cluster administrator must update the `console-operator` configuration.
.Procedure
@@ -13,7 +13,7 @@ Cluster administrators can enable plug-ins in the web console browser. Dynamic p
. Click the `Console` `operator.openshift.io` configuration resource.
-. From there, click the *Console plugins* tab to view the dynamic plug-ins running.
+. From there, click the *Console plugins* tab to view the dynamic plugins running.
. In the `Status` column, click `Enable console plugin` in the pop-over menu, which will launch the `Console plugin enablement` modal.
@@ -21,4 +21,4 @@ Cluster administrators can enable plug-ins in the web console browser. Dynamic p
.Verification
-* Refresh the browser to view the enabled plug-in.
+* Refresh the browser to view the enabled plugin.
diff --git a/modules/getting-started-cli-creating-secret.adoc b/modules/getting-started-cli-creating-secret.adoc
index 228bc2cac6..b906e6e5bf 100644
--- a/modules/getting-started-cli-creating-secret.adoc
+++ b/modules/getting-started-cli-creating-secret.adoc
@@ -8,7 +8,7 @@
= Creating a secret
The `Secret` object provides a mechanism to hold sensitive information such as passwords, {product-title} client configuration files, private source repository credentials, and so on.
-Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plug-in or the system can use secrets to perform actions on behalf of a pod.
+Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod.
The following procedure adds the secret `nationalparks-mongodb-parameters` and mounts it to the `nationalparks` workload.
.Prerequisites
diff --git a/modules/getting-started-web-console-creating-secret.adoc b/modules/getting-started-web-console-creating-secret.adoc
index 9d09e3de19..04fbef14e8 100644
--- a/modules/getting-started-web-console-creating-secret.adoc
+++ b/modules/getting-started-web-console-creating-secret.adoc
@@ -7,7 +7,7 @@
= Creating a secret
The `Secret` object provides a mechanism to hold sensitive information such as passwords, {product-title} client configuration files, private source repository credentials, and so on.
-Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plug-in or the system can use secrets to perform actions on behalf of a pod.
+Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod.
The following procedure adds the secret `nationalparks-mongodb-parameters` and mounts it to the `nationalparks` workload.
.Prerequisites
diff --git a/modules/images-other-jenkins-customize-s2i.adoc b/modules/images-other-jenkins-customize-s2i.adoc
index 1571d2b446..b9421af426 100644
--- a/modules/images-other-jenkins-customize-s2i.adoc
+++ b/modules/images-other-jenkins-customize-s2i.adoc
@@ -8,15 +8,15 @@
To customize the official {product-title} Jenkins image, you can use the image as a source-to-image (S2I) builder.
-You can use S2I to copy your custom Jenkins jobs definitions, add additional plug-ins, or replace the provided `config.xml` file with your own, custom, configuration.
+You can use S2I to copy your custom Jenkins job definitions, add additional plugins, or replace the provided `config.xml` file with your own custom configuration.
To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure:
`plugins`::
-This directory contains those binary Jenkins plug-ins you want to copy into Jenkins.
+This directory contains those binary Jenkins plugins you want to copy into Jenkins.
`plugins.txt`::
-This file lists the plug-ins you want to install using the following syntax:
+This file lists the plugins you want to install using the following syntax:
----
pluginId:pluginVersion
diff --git a/modules/images-other-jenkins-env-var.adoc b/modules/images-other-jenkins-env-var.adoc
index a129c407e9..8e1569691b 100644
--- a/modules/images-other-jenkins-env-var.adoc
+++ b/modules/images-other-jenkins-env-var.adoc
@@ -13,7 +13,7 @@ The Jenkins server can be configured with the following environment variables:
| Variable | Definition | Example values and settings
|`OPENSHIFT_ENABLE_OAUTH`
-|Determines whether the {product-title} Login plug-in manages authentication when logging in to Jenkins. To enable, set to `true`.
+|Determines whether the {product-title} Login plugin manages authentication when logging in to Jenkins. To enable, set to `true`.
|Default: `false`
|`JENKINS_PASSWORD`
@@ -63,11 +63,11 @@ By default, the JVM sets the initial heap size.
|
|`INSTALL_PLUGINS`
-|Specifies additional Jenkins plug-ins to install when the container is first run or when `OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS` is set to `true`. Plug-ins are specified as a comma-delimited list of name:version pairs.
+|Specifies additional Jenkins plugins to install when the container is first run or when `OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS` is set to `true`. Plugins are specified as a comma-delimited list of name:version pairs.
|Example setting: `git:3.7.0,subversion:2.10.2`.
|`OPENSHIFT_PERMISSIONS_POLL_INTERVAL`
-|Specifies the interval in milliseconds that the {product-title} Login plug-in polls {product-title} for the permissions that are associated with each user that is defined in Jenkins.
+|Specifies the interval in milliseconds that the {product-title} Login plugin polls {product-title} for the permissions that are associated with each user that is defined in Jenkins.
|Default: `300000` - 5 minutes
|`OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG`
@@ -75,7 +75,7 @@ By default, the JVM sets the initial heap size.
|Default: `false`
|`OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS`
-|When running this image with an {product-title} PV for the Jenkins configuration directory, the transfer of plug-ins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plug-ins in the custom image after the initial startup, the plug-ins are not copied over unless you set this environment variable to `true`.
+|When running this image with an {product-title} PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to `true`.
|Default: `false`
|`ENABLE_FATAL_ERROR_LOG_FILE`
@@ -83,12 +83,12 @@ By default, the JVM sets the initial heap size.
|Default: `false`
|`AGENT_BASE_IMAGE`
-|Setting this value overrides the image used for the `jnlp` container in the sample Kubernetes plug-in pod templates provided with this image. Otherwise, the image from the `jenkins-agent-base-rhel8:latest` image stream tag in the `openshift` namespace is used.
+|Setting this value overrides the image used for the `jnlp` container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the `jenkins-agent-base-rhel8:latest` image stream tag in the `openshift` namespace is used.
|Default:
`image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest`
|`JAVA_BUILDER_IMAGE`
-|Setting this value overrides the image used for the `java-builder` container in the `java-builder` sample Kubernetes plug-in pod templates provided with this image. Otherwise, the image from the `java:latest` image stream tag in the `openshift` namespace is used.
+|Setting this value overrides the image used for the `java-builder` container in the `java-builder` sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the `java:latest` image stream tag in the `openshift` namespace is used.
|Default:
`image-registry.openshift-image-registry.svc:5000/openshift/java:latest`
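Putting a few of these variables together, the Jenkins container spec might include an `env` stanza like the following sketch; the values are the examples and defaults from the table above:

[source,yaml]
----
# fragment of the Jenkins container spec; values taken from the table above
env:
- name: OPENSHIFT_ENABLE_OAUTH
  value: "true"
- name: INSTALL_PLUGINS
  value: "git:3.7.0,subversion:2.10.2"
- name: OPENSHIFT_PERMISSIONS_POLL_INTERVAL
  value: "300000" # 5 minutes, the default
----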
diff --git a/modules/images-other-jenkins-memory.adoc b/modules/images-other-jenkins-memory.adoc
index 0ae7cc34b2..d12411d290 100644
--- a/modules/images-other-jenkins-memory.adoc
+++ b/modules/images-other-jenkins-memory.adoc
@@ -12,6 +12,6 @@ By default, all other process that run in the Jenkins container cannot use more
And if `Project` quotas allow for it, see the recommendations in the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations prescribe allocating even more memory for the Jenkins master.
-It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plug-in. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis.
+It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis.
You can increase the amount of memory available to Jenkins by overriding the `MEMORY_LIMIT` parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template.
diff --git a/modules/images-other-jenkins-oauth-auth.adoc b/modules/images-other-jenkins-oauth-auth.adoc
index 99ec29a70f..493d59fca0 100644
--- a/modules/images-other-jenkins-oauth-auth.adoc
+++ b/modules/images-other-jenkins-oauth-auth.adoc
@@ -6,23 +6,23 @@
[id="images-other-jenkins-oauth-auth_{context}"]
= {product-title} OAuth authentication
-OAuth authentication is activated by configuring options on the *Configure Global Security* panel in the Jenkins UI, or by setting the `OPENSHIFT_ENABLE_OAUTH` environment variable on the Jenkins *Deployment configuration* to anything other than `false`. This activates the {product-title} Login plug-in, which retrieves the configuration information from pod data or by interacting with the {product-title} API server.
+OAuth authentication is activated by configuring options on the *Configure Global Security* panel in the Jenkins UI, or by setting the `OPENSHIFT_ENABLE_OAUTH` environment variable on the Jenkins *Deployment configuration* to anything other than `false`. This activates the {product-title} Login plugin, which retrieves the configuration information from pod data or by interacting with the {product-title} API server.
Valid credentials are controlled by the {product-title} identity provider.
Jenkins supports both browser and non-browser access.
-Valid users are automatically added to the Jenkins authorization matrix at log in, where {product-title} roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined `admin`, `edit`, and `view`. The login plug-in executes self-SAR requests against those roles in the project or namespace that Jenkins is running in.
+Valid users are automatically added to the Jenkins authorization matrix at log in, where {product-title} roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined `admin`, `edit`, and `view`. The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in.
Users with the `admin` role have the traditional Jenkins administrative user permissions. Users with the `edit` or `view` role have progressively fewer permissions.
The default {product-title} `admin`, `edit`, and `view` roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable.
-When running Jenkins in an {product-title} pod, the login plug-in looks for a config map named `openshift-jenkins-login-plugin-config` in the namespace that Jenkins is running in.
+When running Jenkins in an {product-title} pod, the login plugin looks for a config map named `openshift-jenkins-login-plugin-config` in the namespace that Jenkins is running in.
-If this plug-in finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically:
+If this plugin finds that config map and can read it, you can define the mappings from {product-title} roles to Jenkins permissions. Specifically:
- * The login plug-in treats the key and value pairs in the config map as Jenkins permission to {product-title} role mappings.
+ * The login plugin treats the key and value pairs in the config map as Jenkins permission to {product-title} role mappings.
* The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character.
* If you want to add the `Overall Jenkins Administer` permission to an {product-title} role, the key should be `Overall-Administer`.
 * To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and examine the IDs for the groups and individual permissions in the table it provides.
@@ -35,7 +35,7 @@ If this plug-in finds and can read in that config map, you can define the role t
The `admin` user that is pre-populated in the {product-title} Jenkins image with administrative privileges is not given those privileges when {product-title} OAuth is used. To grant these permissions, the {product-title} cluster administrator must explicitly define that user in the {product-title} identity provider and assign the `admin` role to the user.
====
-Jenkins users' permissions that are stored can be changed after the users are initially established. The {product-title} Login plug-in polls the {product-title} API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from {product-title}. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plug-in polls {product-title}.
+Jenkins users' permissions that are stored can be changed after the users are initially established. The {product-title} Login plugin polls the {product-title} API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from {product-title}. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plugin polls {product-title}.
You can control how often the polling occurs with the `OPENSHIFT_PERMISSIONS_POLL_INTERVAL` environment variable. The default polling interval is five minutes.
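A minimal sketch of the `openshift-jenkins-login-plugin-config` config map described above; `Overall-Administer` follows the documented key format, and the second mapping is an illustrative assumption:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: openshift-jenkins-login-plugin-config
data:
  # key: Jenkins permission group short ID and permission short ID, hyphen-separated
  # value: the {product-title} role it maps to
  Overall-Administer: admin
  Overall-Read: view # hypothetical additional mapping
----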
diff --git a/modules/images-other-jenkins-permissions.adoc b/modules/images-other-jenkins-permissions.adoc
index 352eb90fba..a28686174b 100644
--- a/modules/images-other-jenkins-permissions.adoc
+++ b/modules/images-other-jenkins-permissions.adoc
@@ -8,13 +8,13 @@
If in the config map the `` element of the pod template XML is the {product-title} service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the {product-title} master are allowed from the pod.
-Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plug-in that runs in the {product-title} Jenkins image.
+Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes plugin that runs in the {product-title} Jenkins image.
If you use the example template for Jenkins that is provided by {product-title}, the `jenkins` service account is defined with the `edit` role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted.
The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master.
-* Any pod templates that are automatically discovered by the {product-title} sync plug-in because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account.
-* For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plug-in, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the `podTemplate` pipeline DSL that is provided by the Kubernetes plug-in, or labeling a config map whose data is the XML configuration for a pod template.
+* Any pod templates that are automatically discovered by the {product-title} sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account.
+* For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the `podTemplate` pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template.
* If you do not specify a value for the service account, the `default` service account is used.
* Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within {product-title} to manipulate whatever projects you choose to manipulate from within the pod.
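When supplying a raw YAML pod template to the Kubernetes plugin, one way to make the service account explicit is a sketch like the following; the `jnlp` image matches the default described in the environment variables table:

[source,yaml]
----
apiVersion: v1
kind: Pod
spec:
  serviceAccountName: jenkins # explicit service account; otherwise `default` is used
  containers:
  - name: jnlp
    image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest
----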
diff --git a/modules/install-sno-about-installing-on-a-single-node.adoc b/modules/install-sno-about-installing-on-a-single-node.adoc
index 93f08b83ea..aba00569f8 100644
--- a/modules/install-sno-about-installing-on-a-single-node.adoc
+++ b/modules/install-sno-about-installing-on-a-single-node.adoc
@@ -10,5 +10,5 @@ You can create a single-node cluster with standard installation methods. {produc
[IMPORTANT]
====
-The use of OpenShiftSDN with {sno} is not supported. OVN-Kubernetes is the default network plug-in for {sno} deployments.
+The use of OpenShiftSDN with {sno} is not supported. OVN-Kubernetes is the default network plugin for {sno} deployments.
====
diff --git a/modules/install-sno-generating-the-install-iso-manually.adoc b/modules/install-sno-generating-the-install-iso-manually.adoc
index 6df139edeb..0decc32b3e 100644
--- a/modules/install-sno-generating-the-install-iso-manually.adoc
+++ b/modules/install-sno-generating-the-install-iso-manually.adoc
@@ -121,7 +121,7 @@ capabilities:
<2> Set the `compute` replicas to `0`. This makes the control plane node schedulable.
<3> Set the `controlPlane` replicas to `1`. In conjunction with the previous `compute` setting, this setting ensures the cluster runs on a single node.
<4> Set the `metadata` name to the cluster name.
-<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plug-in type for single-node clusters.
+<5> Set the `networking` details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters.
<6> Set the `cidr` value to match the subnet of the {sno} cluster.
<7> Set the path to the installation disk drive, for example, `/dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2`.
<8> Copy the {cluster-manager-url-pull} and add the contents to this configuration setting.
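Taken together, callouts 2 through 6 correspond to a fragment of `install-config.yaml` along these lines; the cluster name and CIDR values are illustrative:

[source,yaml]
----
compute:
- name: worker
  replicas: 0 # makes the control plane node schedulable
controlPlane:
  name: master
  replicas: 1 # single-node cluster
metadata:
  name: sno-cluster # hypothetical cluster name
networking:
  networkType: OVNKubernetes # the only allowed plugin type for single-node clusters
  clusterNetwork:
  - cidr: 10.128.0.0/14 # illustrative
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.111.0/24 # illustrative; match the subnet of the single-node cluster
----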
diff --git a/modules/installation-bare-metal-agent-installer-config-yaml.adoc b/modules/installation-bare-metal-agent-installer-config-yaml.adoc
index ab0ecd8318..ad035ffc5a 100644
--- a/modules/installation-bare-metal-agent-installer-config-yaml.adoc
+++ b/modules/installation-bare-metal-agent-installer-config-yaml.adoc
@@ -69,7 +69,7 @@ Class E CIDR range is reserved for a future use. To use the Class E CIDR range,
====
+
<8> The subnet prefix length to assign to each individual node. For example, if `hostPrefix` is set to `23`, then each node is assigned a `/23` subnet out of the given `cidr`, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic.
-<9> The cluster network plug-in to install. The supported values are `OVNKubernetes` (default value) and `OpenShiftSDN`.
+<9> The cluster network plugin to install. The supported values are `OVNKubernetes` (default value) and `OpenShiftSDN`.
<10> The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic.
<11> You must set the platform to `none` for a single-node cluster. You can also set the platform to `vSphere` and `baremetal`.
+
diff --git a/modules/installation-cis-ibm-cloud.adoc b/modules/installation-cis-ibm-cloud.adoc
index cd8095f01e..2263175c6a 100644
--- a/modules/installation-cis-ibm-cloud.adoc
+++ b/modules/installation-cis-ibm-cloud.adoc
@@ -24,7 +24,7 @@ You must create a domain zone in CIS in the same account as your cluster. You mu
. Create a CIS instance to use with your cluster:
-.. Install the CIS plug-in:
+.. Install the CIS plugin:
+
[source,terminal]
----
diff --git a/modules/installation-configuration-parameters.adoc b/modules/installation-configuration-parameters.adoc
index acdb3f7778..51bb0dc422 100644
--- a/modules/installation-configuration-parameters.adoc
+++ b/modules/installation-configuration-parameters.adoc
@@ -310,9 +310,9 @@ Only IPv4 addresses are supported.
endif::bare[]
ifdef::bare[]
-* If you use the {openshift-networking} OVN-Kubernetes network plug-in, both IPv4 and IPv6 address families are supported.
+* If you use the {openshift-networking} OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported.
-* If you use the {openshift-networking} OpenShift SDN network plug-in, only the IPv4 address family is supported.
+* If you use the {openshift-networking} OpenShift SDN network plugin, only the IPv4 address family is supported.
ifdef::ibm-cloud[]
[NOTE]
@@ -363,7 +363,7 @@ You cannot modify parameters specified by the `networking` object after installa
====
|`networking.networkType`
-|The {openshift-networking} network plug-in to install.
+|The {openshift-networking} network plugin to install.
|
ifdef::openshift-origin[]
Either `OpenShiftSDN` or `OVNKubernetes`. The default value is `OVNKubernetes`.
@@ -408,7 +408,7 @@ An IPv4 network.
endif::bare[]
ifdef::bare[]
-If you use the OpenShift SDN network plug-in, specify an IPv4 network. If you use the OVN-Kubernetes network plug-in, you can specify IPv4 and IPv6 networks.
+If you use the OpenShift SDN network plugin, specify an IPv4 network. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks.
endif::bare[]
|
An IP address block in Classless Inter-Domain Routing (CIDR) notation.
@@ -435,10 +435,10 @@ endif::bare[]
|
The IP address block for services. The default value is `172.30.0.0/16`.
-The OpenShift SDN and OVN-Kubernetes network plug-ins support only a single IP address block for the service network.
+The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network.
ifdef::bare[]
-If you use the OVN-Kubernetes network plug-in, you can specify an IP address block for both of the IPv4 and IPv6 address families.
+If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families.
endif::bare[]
|
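For the bare-metal dual-stack case described above, a sketch of the `networking` stanza with both address families under OVN-Kubernetes; the IPv6 ranges are illustrative:

[source,yaml]
----
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48 # illustrative IPv6 block
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16 # the default service network
  - fd02::/112 # illustrative IPv6 service block
----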
diff --git a/modules/installation-dns-ibm-cloud.adoc b/modules/installation-dns-ibm-cloud.adoc
index 66c1c33dea..14f96b32e1 100644
--- a/modules/installation-dns-ibm-cloud.adoc
+++ b/modules/installation-dns-ibm-cloud.adoc
@@ -24,7 +24,7 @@ IBM Cloud VPC does not support IPv6, so dual stack or IPv6 environments are not
. Create a DNS Services instance to use with your cluster:
-.. Install the DNS Services plug-in by running the following command:
+.. Install the DNS Services plugin by running the following command:
+
[source,terminal]
----
diff --git a/modules/installation-osp-about-kuryr.adoc b/modules/installation-osp-about-kuryr.adoc
index 23978101ca..218e06cfc7 100644
--- a/modules/installation-osp-about-kuryr.adoc
+++ b/modules/installation-osp-about-kuryr.adoc
@@ -10,7 +10,7 @@
include::snippets/deprecated-feature.adoc[]
link:https://docs.openstack.org/kuryr-kubernetes/latest/[Kuryr] is a container
-network interface (CNI) plug-in solution that uses the
+network interface (CNI) plugin solution that uses the
link:https://docs.openstack.org/neutron/latest/[Neutron] and
link:https://docs.openstack.org/octavia/latest/[Octavia] {rh-openstack-first} services
to provide networking for pods and Services.
diff --git a/modules/installation-osp-dpdk-exposing-host-interface.adoc b/modules/installation-osp-dpdk-exposing-host-interface.adoc
index 4b77b54480..5010e75cf4 100644
--- a/modules/installation-osp-dpdk-exposing-host-interface.adoc
+++ b/modules/installation-osp-dpdk-exposing-host-interface.adoc
@@ -2,11 +2,11 @@
[id="installation-osp-dpdk-exposing-host-interface_{context}"]
= Exposing the host-device interface to the pod
-You can use the Container Network Interface (CNI) plug-in to expose an interface that is on the host to the pod. The plug-in moves the interface from the namespace of the host network to the namespace of the pod. The pod then has direct control of the interface.
+You can use the Container Network Interface (CNI) plugin to expose an interface that is on the host to the pod. The plugin moves the interface from the namespace of the host network to the namespace of the pod. The pod then has direct control of the interface.
.Procedure
-* Create an additional network attachment with the host-device CNI plug-in by using the following object as an example:
+* Create an additional network attachment with the host-device CNI plugin by using the following object as an example:
+
[source,yaml]
----
diff --git a/modules/installation-osp-kuryr-config-yaml.adoc b/modules/installation-osp-kuryr-config-yaml.adoc
index aac3b5b8e3..b883e747fb 100644
--- a/modules/installation-osp-kuryr-config-yaml.adoc
+++ b/modules/installation-osp-kuryr-config-yaml.adoc
@@ -5,7 +5,7 @@
[id="installation-osp-kuryr-config-yaml_{context}"]
= Sample customized `install-config.yaml` file for {rh-openstack} with Kuryr
-To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plug-in, you must modify the `install-config.yaml` file to include `Kuryr` as the desired `networking.networkType`.
+To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the `install-config.yaml` file to include `Kuryr` as the desired `networking.networkType`.
This sample `install-config.yaml` demonstrates all of the possible
{rh-openstack-first} customization options.
@@ -55,7 +55,7 @@ sshKey: ssh-ed25519 AAAA...
result, the service subnet that the installer creates is twice the size of the
CIDR that is specified as the value of the `serviceNetwork` property. The larger range is
required to prevent IP address conflicts.
-<2> The cluster network plug-in to install. The supported values are `Kuryr`, `OVNKubernetes`, and `OpenShiftSDN`. The default value is `OVNKubernetes`.
+<2> The cluster network plugin to install. The supported values are `Kuryr`, `OVNKubernetes`, and `OpenShiftSDN`. The default value is `OVNKubernetes`.
<3> Both `trunkSupport` and `octaviaSupport` are automatically discovered by the
installer, so there is no need to set them. But if your environment does not
meet both requirements, Kuryr SDN will not work properly. Trunks are needed
diff --git a/modules/installation-osp-verifying-external-network.adoc b/modules/installation-osp-verifying-external-network.adoc
index a86d42396c..b221b8b5fa 100644
--- a/modules/installation-osp-verifying-external-network.adoc
+++ b/modules/installation-osp-verifying-external-network.adoc
@@ -78,7 +78,7 @@ endif::osp-custom,osp-kuryr[]
[NOTE]
====
-If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For more information, see https://wiki.openstack.org/wiki/Neutron/TrunkPort[Neutron trunk port].
+If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see https://wiki.openstack.org/wiki/Neutron/TrunkPort[Neutron trunk port].
====
ifeval::["{context}" == "installing-openstack-installer-custom"]
diff --git a/modules/installation-special-config-storage.adoc b/modules/installation-special-config-storage.adoc
index 53a452d569..41c7df4822 100644
--- a/modules/installation-special-config-storage.adoc
+++ b/modules/installation-special-config-storage.adoc
@@ -336,7 +336,7 @@ The `aes-cbc-essiv:sha256` cipher is used if FIPS mode is enabled.
<3> The device that contains the encrypted LUKS2 volume.
If mirroring is enabled, the value will represent a software mirror device, for example `/dev/md126`.
+
-.. List the Clevis plug-ins that are bound to the encrypted device:
+.. List the Clevis plugins that are bound to the encrypted device:
+
[source,terminal]
----
@@ -349,7 +349,7 @@ If mirroring is enabled, the value will represent a software mirror device, for
----
1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' <1>
----
-<1> In the example output, the Tang plug-in is used by the Shamir's Secret Sharing (SSS) Clevis plug-in for the `/dev/sda4` device.
+<1> In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the `/dev/sda4` device.
. If you configured mirroring, verify if it is enabled:
.. From the debug shell, list the software RAID devices on the node:
diff --git a/modules/installation-uninstall-clouds.adoc b/modules/installation-uninstall-clouds.adoc
index abbbc90267..11eb05e84a 100644
--- a/modules/installation-uninstall-clouds.adoc
+++ b/modules/installation-uninstall-clouds.adoc
@@ -49,7 +49,7 @@ endif::gcp[]
cluster.
ifdef::ibm-cloud[]
* You have configured the `ccoctl` binary.
-* You have installed the IBM Cloud CLI and installed or updated the VPC infrastructure service plug-in. For more information see "Prerequisites" in the link:https://cloud.ibm.com/docs/vpc?topic=vpc-infrastructure-cli-plugin-vpc-reference&interface=ui#cli-ref-prereqs[IBM Cloud VPC CLI documentation].
+* You have installed the IBM Cloud CLI and installed or updated the VPC infrastructure service plugin. For more information, see "Prerequisites" in the link:https://cloud.ibm.com/docs/vpc?topic=vpc-infrastructure-cli-plugin-vpc-reference&interface=ui#cli-ref-prereqs[IBM Cloud VPC CLI documentation].
endif::ibm-cloud[]
.Procedure
diff --git a/modules/installation-vsphere-infrastructure.adoc b/modules/installation-vsphere-infrastructure.adoc
index f03f4e411c..e1f518660b 100644
--- a/modules/installation-vsphere-infrastructure.adoc
+++ b/modules/installation-vsphere-infrastructure.adoc
@@ -72,12 +72,12 @@ Installing a cluster on VMware vSphere versions 7.0.0 and 7.0.1 is deprecated. T
|Storage with in-tree drivers
|vSphere 7.0.2 and later
-|This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in {product-title}.
+|This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in {product-title}.
ifndef::vmc[]
|Optional: Networking (NSX-T)
|vSphere 7.0.2 and later
-|vSphere 7.0.2 is required for {product-title}. VMware's NSX Container Plug-in (NCP) is certified with {product-title} 4.6 and NSX-T 3.x+.
+|vSphere 7.0.2 is required for {product-title}. VMware's NSX Container Plugin (NCP) is certified with {product-title} 4.6 and NSX-T 3.x+.
endif::vmc[]
|===
diff --git a/modules/ipi-install-bmc-addressing-for-dell-idrac.adoc b/modules/ipi-install-bmc-addressing-for-dell-idrac.adoc
index d415df80af..16f0899cdc 100644
--- a/modules/ipi-install-bmc-addressing-for-dell-idrac.adoc
+++ b/modules/ipi-install-bmc-addressing-for-dell-idrac.adoc
@@ -80,7 +80,7 @@ platform:
[NOTE]
====
-There is a known issue on Dell iDRAC 9 with firmware version `04.40.00.00` or later for installer-provisioned installations on bare metal deployments. The Virtual Console plug-in defaults to eHTML5, an enhanced version of HTML5, which causes problems with the *InsertVirtualMedia* workflow. Set the plug-in to use HTML5 to avoid this issue. The menu path is *Configuration* -> *Virtual console* -> *Plug-in Type* -> *HTML5* .
+There is a known issue on Dell iDRAC 9 with firmware version `04.40.00.00` or later for installer-provisioned installations on bare metal deployments. The Virtual Console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the *InsertVirtualMedia* workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is *Configuration* -> *Virtual console* -> *Plug-in Type* -> *HTML5* .
Ensure the {product-title} cluster nodes have *AutoAttach* enabled through the iDRAC console. The menu path is: *Configuration* -> *Virtual Media* -> *Attach Mode* -> *AutoAttach* .
@@ -124,7 +124,7 @@ platform:
[NOTE]
====
-There is a known issue on Dell iDRAC 9 with firmware version `04.40.00.00` or later for installer-provisioned installations on bare metal deployments. The Virtual Console plug-in defaults to eHTML5, an enhanced version of HTML5, which causes problems with the *InsertVirtualMedia* workflow. Set the plug-in to use HTML5 to avoid this issue. The menu path is *Configuration* -> *Virtual console* -> *Plug-in Type* -> *HTML5* .
+There is a known issue on Dell iDRAC 9 with firmware version `04.40.00.00` or later for installer-provisioned installations on bare metal deployments. The Virtual Console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the *InsertVirtualMedia* workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is *Configuration* -> *Virtual console* -> *Plug-in Type* -> *HTML5* .
Ensure the {product-title} cluster nodes have *AutoAttach* enabled through the iDRAC console. The menu path is: *Configuration* -> *Virtual Media* -> *Attach Mode* -> *AutoAttach* .
diff --git a/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc b/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc
index a83ad2fca1..1e07b5022a 100644
--- a/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc
+++ b/modules/ipi-install-firmware-requirements-for-installing-with-virtual-media.adoc
@@ -37,5 +37,5 @@ Red Hat does not test every combination of firmware, hardware, or other third-pa
[NOTE]
====
-For Dell servers, ensure the {product-title} cluster nodes have *AutoAttach* enabled through the iDRAC console. The menu path is *Configuration* -> *Virtual Media* -> *Attach Mode* -> *AutoAttach* . With iDRAC 9 firmware version `04.40.00.00` or later, the Virtual Console plug-in defaults to eHTML5, an enhanced version of HTML5, which causes problems with the *InsertVirtualMedia* workflow. Set the plug-in to use HTML5 to avoid this issue. The menu path is *Configuration* -> *Virtual console* -> *Plug-in Type* -> *HTML5* .
+For Dell servers, ensure the {product-title} cluster nodes have *AutoAttach* enabled through the iDRAC console. The menu path is *Configuration* -> *Virtual Media* -> *Attach Mode* -> *AutoAttach* . With iDRAC 9 firmware version `04.40.00.00` or later, the Virtual Console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the *InsertVirtualMedia* workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is *Configuration* -> *Virtual console* -> *Plug-in Type* -> *HTML5* .
====
diff --git a/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc b/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc
index 5cfab8288a..709e4a384a 100644
--- a/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc
+++ b/modules/jt-comparison-of-jenkins-and-openshift-pipelines-concepts.adoc
@@ -9,11 +9,11 @@
You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines.
== Jenkins terminology
-Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plug-ins. Some basic terms in Jenkins are as follows:
+Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows:
* *Pipeline*: Automates the entire process of building, testing, and deploying applications by using link:https://groovy-lang.org/[Groovy] syntax.
* *Node*: A machine capable of either orchestrating or executing a scripted pipeline.
-* *Stage*: A conceptually distinct subset of tasks performed in a pipeline. Plug-ins or user interfaces often use this block to display the status or progress of tasks.
+* *Stage*: A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks.
* **Step**: A single task that specifies the exact action to be taken, either by using a command or a script.
== OpenShift Pipelines terminology
diff --git a/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc b/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc
index 7c377e2fa3..504acca042 100644
--- a/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc
+++ b/modules/jt-comparison-of-jenkins-openshift-pipelines-execution-models.adoc
@@ -14,5 +14,5 @@ Jenkins and OpenShift Pipelines offer similar functions but are different in arc
|Jenkins|OpenShift Pipelines
|Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes.|OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution.
|Containers are launched by the Jenkins controller node through the pipeline.|OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins).
-|Extensibility is achieved by using plug-ins.|Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts.
+|Extensibility is achieved by using plugins.|Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts.
|===
diff --git a/modules/jt-examples-of-common-use-cases.adoc b/modules/jt-examples-of-common-use-cases.adoc
index 49fc14df8f..63a3b2eeac 100644
--- a/modules/jt-examples-of-common-use-cases.adoc
+++ b/modules/jt-examples-of-common-use-cases.adoc
@@ -9,7 +9,7 @@
Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as:
* Compiling, building, and deploying images using Apache Maven
-* Extending the core capabilities by using plug-ins
+* Extending the core capabilities by using plugins
* Reusing shareable libraries and custom scripts
== Running a Maven pipeline in Jenkins and OpenShift Pipelines
@@ -151,12 +151,12 @@ spec:
----
-== Extending the core capabilities of Jenkins and OpenShift Pipelines by using plug-ins
-Jenkins has the advantage of a large ecosystem of numerous plug-ins developed over the years by its extensive user base. You can search and browse the plug-ins in the link:https://plugins.jenkins.io/[Jenkins Plug-in Index].
+== Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins
+Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the link:https://plugins.jenkins.io/[Jenkins Plugin Index].
OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks is available in the link:https://hub.tekton.dev/[Tekton Hub].
-In addition, OpenShift Pipelines incorporates many of the plug-ins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the link:https://plugins.jenkins.io/role-strategy/[Role-based Authorization Strategy] plug-in, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system.
+In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the link:https://plugins.jenkins.io/role-strategy/[Role-based Authorization Strategy] plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system.
== Sharing reusable code in Jenkins and OpenShift Pipelines
Jenkins link:https://www.jenkins.io/doc/book/pipeline/shared-libraries/[shared libraries] provide reusable code for parts of Jenkins pipelines. The libraries are shared between link:https://www.jenkins.io/doc/book/pipeline/jenkinsfile/[Jenkinsfiles] to create highly modular pipelines without code repetition.
diff --git a/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc b/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc
index 7564b66f3c..d7e63e938e 100644
--- a/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc
+++ b/modules/jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks.adoc
@@ -5,11 +5,11 @@
:_content-type: PROCEDURE
[id="jt-migrating-from-jenkins-plugins-to-openshift-pipelines-hub-tasks_{context}"]
-= Migrating from Jenkins plug-ins to Tekton Hub tasks
+= Migrating from Jenkins plugins to Tekton Hub tasks
-You can extend the capability of Jenkins by using link:https://plugins.jenkinsci.org[plug-ins]. To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from link:https://hub.tekton.dev[Tekton Hub].
+You can extend the capability of Jenkins by using link:https://plugins.jenkinsci.org[plugins]. To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from link:https://hub.tekton.dev[Tekton Hub].
-For example, consider the link:https://hub.tekton.dev/tekton/task/git-clone[git-clone] task in Tekton Hub, which corresponds to the link:https://plugins.jenkins.io/git/[git plug-in] for Jenkins.
+For example, consider the link:https://hub.tekton.dev/tekton/task/git-clone[git-clone] task in Tekton Hub, which corresponds to the link:https://plugins.jenkins.io/git/[git plugin] for Jenkins.
.Example: `git-clone` task from Tekton Hub
[source,yaml,subs="attributes+"]
diff --git a/modules/machineset-azure-ultra-disk.adoc b/modules/machineset-azure-ultra-disk.adoc
index be380fc2f6..3e3dfb8ca1 100644
--- a/modules/machineset-azure-ultra-disk.adoc
+++ b/modules/machineset-azure-ultra-disk.adoc
@@ -35,7 +35,7 @@ Data disks do not support the ability to specify disk throughput or disk IOPS. Y
endif::mapi[]
ifdef::pvc[]
-Both the in-tree plug-in and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
+Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC.
endif::pvc[]
ifeval::["{context}" == "creating-machineset-azure"]
diff --git a/modules/master-node-sizing.adoc b/modules/master-node-sizing.adoc
index 0c6f2dc15d..cd00d6c89a 100644
--- a/modules/master-node-sizing.adoc
+++ b/modules/master-node-sizing.adoc
@@ -120,7 +120,7 @@ For all other configurations, you must estimate your total node count and use th
[IMPORTANT]
====
-The recommendations are based on the data points captured on {product-title} clusters with OpenShift SDN as the network plug-in.
+The recommendations are based on the data points captured on {product-title} clusters with OpenShift SDN as the network plugin.
====
[NOTE]
diff --git a/modules/microshift-cni.adoc b/modules/microshift-cni.adoc
index 09b9d764fe..60b6d4158e 100644
--- a/modules/microshift-cni.adoc
+++ b/modules/microshift-cni.adoc
@@ -50,7 +50,7 @@ Networking features not available with {product-title} {product-version}:
This brief overview describes networking components and their operation in {product-title}. The `microshift-networking` RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the `microshift-ovs-init` systemd service.
NetworkManager::
-NetworkManager is required to set up the initial gateway bridge on the {product-title} node. The NetworkManager and `NetworkManager-ovs` RPM packages are installed as dependencies to the `microshift-networking` RPM package, which contains the necessary configuration files. NetworkManager in {product-title} uses the `keyfile` plug-in and is restarted after installation of the `microshift-networking` RPM package.
+NetworkManager is required to set up the initial gateway bridge on the {product-title} node. The NetworkManager and `NetworkManager-ovs` RPM packages are installed as dependencies to the `microshift-networking` RPM package, which contains the necessary configuration files. NetworkManager in {product-title} uses the `keyfile` plugin and is restarted after installation of the `microshift-networking` RPM package.
microshift-ovs-init::
The `microshift-ovs-init.service` is installed by the `microshift-networking` RPM package as a dependent systemd service to microshift.service. It is responsible for setting up the OVS gateway bridge.
diff --git a/modules/microshift-ki-cni-iptables-deleted.adoc b/modules/microshift-ki-cni-iptables-deleted.adoc
index e11e245265..282846bb28 100644
--- a/modules/microshift-ki-cni-iptables-deleted.adoc
+++ b/modules/microshift-ki-cni-iptables-deleted.adoc
@@ -15,7 +15,7 @@ To troubleshoot this issue, delete the ovnkube-master pod to restart the ovnkube
* The OpenShift CLI (`oc`) is installed.
* Access to the cluster as a user with the `cluster-admin` role.
-* A cluster installed on infrastructure configured with the OVN-Kubernetes network plug-in.
+* A cluster installed on infrastructure configured with the OVN-Kubernetes network plugin.
* The KUBECONFIG environment variable is set.
.Procedure
diff --git a/modules/migration-debugging-velero-admission-webhooks-ibm-appconnect.adoc b/modules/migration-debugging-velero-admission-webhooks-ibm-appconnect.adoc
index a17adaf875..644a194a2e 100644
--- a/modules/migration-debugging-velero-admission-webhooks-ibm-appconnect.adoc
+++ b/modules/migration-debugging-velero-admission-webhooks-ibm-appconnect.adoc
@@ -9,7 +9,7 @@ If you experience issues when you use Velero to a restore an IBM AppConnect reso
.Procedure
-. Check if you have any mutating admission plug-ins of `kind: MutatingWebhookConfiguration` in the cluster:
+. Check if you have any mutating admission plugins of `kind: MutatingWebhookConfiguration` in the cluster:
+
[source,terminal]
----
diff --git a/modules/migration-mtc-release-notes-1-6.adoc b/modules/migration-mtc-release-notes-1-6.adoc
index 95221f7b2d..33dae42d22 100644
--- a/modules/migration-mtc-release-notes-1-6.adoc
+++ b/modules/migration-mtc-release-notes-1-6.adoc
@@ -27,6 +27,6 @@ The following features are deprecated:
This release has the following known issues:
* On {product-title} 3.10, the `MigrationController` pod takes too long to restart. The Bugzilla report contains a workaround. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1986796[*BZ#1986796*])
-* `Stage` pods fail during direct volume migration from a classic {product-title} source cluster on IBM Cloud. The IBM block storage plug-in does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1887526[*BZ#1887526*])
+* `Stage` pods fail during direct volume migration from a classic {product-title} source cluster on IBM Cloud. The IBM block storage plugin does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1887526[*BZ#1887526*])
* `MigPlan` custom resource does not display a warning when an AWS gp2 PVC has no available space. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1963927[*BZ#1963927*])
* Block storage for IBM Cloud must be in the same availability zone. See the link:https://cloud.ibm.com/docs/vpc?topic=vpc-block-storage-vpc-faq[IBM FAQ for block storage for virtual private cloud].
diff --git a/modules/migration-writing-ansible-playbook-hook.adoc b/modules/migration-writing-ansible-playbook-hook.adoc
index f84b77a1c1..f2238d0a84 100644
--- a/modules/migration-writing-ansible-playbook-hook.adoc
+++ b/modules/migration-writing-ansible-playbook-hook.adoc
@@ -67,7 +67,7 @@ You can use the `fail` module to produce a non-zero exit status in cases where a
[id="migration-writing-ansible-playbook-hook-environment-variables_{context}"]
== Environment variables
-The `MigPlan` CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the `lookup` plug-in.
+The `MigPlan` CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the `lookup` plugin.
.Example environment variables
[source,yaml]
diff --git a/modules/nbde-using-tang-servers-for-disk-encryption.adoc b/modules/nbde-using-tang-servers-for-disk-encryption.adoc
index bc337da7a3..3e1394dabf 100644
--- a/modules/nbde-using-tang-servers-for-disk-encryption.adoc
+++ b/modules/nbde-using-tang-servers-for-disk-encryption.adoc
@@ -13,7 +13,7 @@ _Tang_ is a server for binding data to network presence. It makes a node contain
_Clevis_ is a pluggable framework for automated decryption that provides automated unlocking of Linux Unified Key Setup-on-disk-format (LUKS) volumes. The Clevis package runs on the node and provides the client side of the feature.
-A _Clevis pin_ is a plug-in into the Clevis framework. There are three pin types:
+A _Clevis pin_ is a plugin into the Clevis framework. There are three pin types:
TPM2:: Binds the disk encryption to the TPM2.
Tang:: Binds the disk encryption to a Tang server to enable NBDE.
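In {product-title}, the Tang pin is typically bound at install time through a Butane config. A minimal sketch, assuming the `boot_device.luks.tang` Butane schema and a Tang server like the one shown in the verification output elsewhere in this document:

[source,yaml]
----
variant: openshift
version: 4.12.0 # assumption; match your cluster version
metadata:
  name: worker-storage
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  luks:
    tang:
    - url: http://tang.example.com:7500
      thumbprint: <tang_thumbprint> # obtain from the Tang server
----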
diff --git a/modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc b/modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc
index e8352d0fa3..524e9480b7 100644
--- a/modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc
+++ b/modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc
@@ -4,9 +4,9 @@
// * post_installation_configuration/node-tasks.adoc
[id="supported-tuned-daemon-plug-ins_{context}"]
-= Supported TuneD daemon plug-ins
+= Supported TuneD daemon plugins
-Excluding the `[main]` section, the following TuneD plug-ins are supported when
+Excluding the `[main]` section, the following TuneD plugins are supported when
using custom profiles defined in the `profile:` section of the Tuned CR:
* audio
@@ -26,8 +26,8 @@ using custom profiles defined in the `profile:` section of the Tuned CR:
* vm
* bootloader
-There is some dynamic tuning functionality provided by some of these plug-ins
-that is not supported. The following TuneD plug-ins are currently not supported:
+There is some dynamic tuning functionality provided by some of these plugins
+that is not supported. The following TuneD plugins are currently not supported:
* script
* systemd
@@ -35,11 +35,11 @@ that is not supported. The following TuneD plug-ins are currently not supported:
[WARNING]
====
-The TuneD bootloader plug-in is currently supported on {op-system-first} 8.x worker nodes. For {op-system-base-full} 7.x worker nodes, the TuneD bootloader plug-in is currently not supported.
+The TuneD bootloader plugin is currently supported on {op-system-first} 8.x worker nodes. For {op-system-base-full} 7.x worker nodes, the TuneD bootloader plugin is currently not supported.
====
See
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/customizing-tuned-profiles_monitoring-and-managing-system-status-and-performance#available-tuned-plug-ins_customizing-tuned-profiles[Available
-TuneD Plug-ins] and
+TuneD Plugins] and
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance[Getting
Started with TuneD] for more information.
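A sketch of a Tuned CR that uses one of the supported plugins (`sysctl`) in a custom profile; the profile name, node label, and sysctl value are illustrative:

[source,yaml]
----
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-custom # hypothetical profile name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: openshift-custom
    data: |
      [main]
      summary=Custom profile using the supported sysctl plugin
      include=openshift-node
      [sysctl]
      vm.dirty_ratio=10
  recommend:
  - match:
    - label: tuned.openshift.io/custom # hypothetical node label
    priority: 20
    profile: openshift-custom
----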
diff --git a/modules/nodes-cluster-enabling-features-about.adoc b/modules/nodes-cluster-enabling-features-about.adoc
index ee486c4025..c98a238cf0 100644
--- a/modules/nodes-cluster-enabling-features-about.adoc
+++ b/modules/nodes-cluster-enabling-features-about.adoc
@@ -15,7 +15,7 @@ You can activate the following feature set by using the `FeatureGate` CR:
The following Technology Preview features are enabled by this feature set:
+
** Microsoft Azure File CSI Driver Operator. Enables the provisioning of persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for Microsoft Azure File Storage.
-** CSI automatic migration. Enables automatic migration for supported in-tree volume plug-ins to their equivalent Container Storage Interface (CSI) drivers. Supported for:
+** CSI automatic migration. Enables automatic migration for supported in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers. Supported for:
*** Amazon Web Services (AWS) Elastic Block Storage (EBS)
*** Google Compute Engine Persistent Disk
*** Azure File
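A minimal sketch of the `FeatureGate` CR that activates this feature set:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade # enables the Technology Preview features listed above
----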
diff --git a/modules/nodes-containers-downward-api-about.adoc b/modules/nodes-containers-downward-api-about.adoc
index ced4a15de3..60f308e18c 100644
--- a/modules/nodes-containers-downward-api-about.adoc
+++ b/modules/nodes-containers-downward-api-about.adoc
@@ -7,7 +7,7 @@
The Downward API contains such information as the pod's name, project, and resource values. Containers can consume
information from the downward API using environment variables or a volume
-plug-in.
+plugin.
Fields within the pod are selected using the `FieldRef` API type. `FieldRef`
has two fields:
diff --git a/modules/nodes-containers-downward-api-container-resources-plugin.adoc b/modules/nodes-containers-downward-api-container-resources-plugin.adoc
index 9897bb5433..2ad3bc3217 100644
--- a/modules/nodes-containers-downward-api-container-resources-plugin.adoc
+++ b/modules/nodes-containers-downward-api-container-resources-plugin.adoc
@@ -4,14 +4,14 @@
:_content-type: PROCEDURE
[id="nodes-containers-downward-api-container-resources-plugin_{context}"]
-= Consuming container resources using a volume plug-in
+= Consuming container resources using a volume plugin
When creating pods, you can use the Downward API to inject information about
-computing resource requests and limits using a volume plug-in.
+computing resource requests and limits using a volume plugin.
.Procedure
-To use the Volume Plug-in:
+To use the volume plugin:
. When creating a pod configuration, use the `spec.volumes.downwardAPI.items`
field to describe the desired resources that correspond to the
diff --git a/modules/nodes-containers-downward-api-container-resources.adoc b/modules/nodes-containers-downward-api-container-resources.adoc
index 5dcc3566e2..904a741cdc 100644
--- a/modules/nodes-containers-downward-api-container-resources.adoc
+++ b/modules/nodes-containers-downward-api-container-resources.adoc
@@ -10,5 +10,5 @@ When creating pods, you can use the Downward API to inject information about
computing resource requests and limits so that image and application authors can
correctly create an image for specific environments.
-You can do this using environment variable or a volume plug-in.
+You can do this by using environment variables or a volume plugin.
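A sketch of the volume-plugin method, using `resourceFieldRef` entries under `spec.volumes.downwardAPI.items`; the pod name, container name, image, and paths are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod # hypothetical
spec:
  containers:
  - name: client
    image: registry.example.com/app:latest # placeholder image
    resources:
      requests:
        cpu: 125m
        memory: 64Mi
      limits:
        cpu: 250m
        memory: 128Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client
          resource: limits.cpu
      - path: "mem_request"
        resourceFieldRef:
          containerName: client
          resource: requests.memory
----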
diff --git a/modules/nodes-containers-downward-api-container-values-plugin.adoc b/modules/nodes-containers-downward-api-container-values-plugin.adoc
index 6afc696bb2..4843dc15bd 100644
--- a/modules/nodes-containers-downward-api-container-values-plugin.adoc
+++ b/modules/nodes-containers-downward-api-container-values-plugin.adoc
@@ -4,9 +4,9 @@
:_content-type: PROCEDURE
[id="nodes-containers-downward-api-container-values-plugin_{context}"]
-= Consuming container values using a volume plug-in
+= Consuming container values using a volume plugin
-You containers can consume API values using a volume plug-in.
+Your containers can consume API values by using a volume plugin.
Containers can consume:
@@ -20,7 +20,7 @@ Containers can consume:
.Procedure
-To use the volume plug-in:
+To use the volume plugin:
. Create a `volume-pod.yaml` file:
+
diff --git a/modules/nodes-containers-downward-api-container-values.adoc b/modules/nodes-containers-downward-api-container-values.adoc
index e6a7faedf5..0e2c16c77f 100644
--- a/modules/nodes-containers-downward-api-container-values.adoc
+++ b/modules/nodes-containers-downward-api-container-values.adoc
@@ -6,7 +6,7 @@
[id="nodes-containers-downward-api-container-values_{context}"]
= Understanding how to consume container values using the downward API
-You containers can consume API values using environment variables or a volume plug-in.
+Your containers can consume API values by using environment variables or a volume plugin.
Depending on the method you choose, containers can consume:
* Pod name
@@ -17,5 +17,5 @@ Depending on the method you choose, containers can consume:
* Pod labels
-Annotations and labels are available using only a volume plug-in.
+Annotations and labels are available using only a volume plugin.
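Because labels and annotations are exposed only through the volume plugin, the corresponding `downwardAPI` volume stanza is a sketch like the following (volume fragment only; mount it into the consuming container):

[source,yaml]
----
# fragment of a pod spec
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "labels"
      fieldRef:
        fieldPath: metadata.labels
    - path: "annotations"
      fieldRef:
        fieldPath: metadata.annotations
----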
diff --git a/modules/nodes-nodes-audit-log-basic.adoc b/modules/nodes-nodes-audit-log-basic.adoc
index 28441b88d4..fb011fcfa2 100644
--- a/modules/nodes-nodes-audit-log-basic.adoc
+++ b/modules/nodes-nodes-audit-log-basic.adoc
@@ -27,7 +27,7 @@ Audit works at the API server level, logging all requests coming to the server.
|`responseObject` |Optional. The API object returned in the response, in JSON format. The `ResponseObject` is recorded after conversion to the external type, and serialized as JSON. This is omitted for non-resource requests and is only logged at response level.
|`requestReceivedTimestamp` |The time that the request reached the API server.
|`stageTimestamp` |The time that the request reached the current audit stage.
-|`annotations` |Optional. An unstructured key value map stored with an audit event that may be set by plug-ins invoked in the request serving chain, including authentication, authorization and admission plug-ins. Note that these annotations are for the audit event, and do not correspond to the `metadata.annotations` of the submitted object. Keys should uniquely identify the informing component to avoid name collisions, for example `podsecuritypolicy.admission.k8s.io/policy`. Values should be short. Annotations are included in the metadata level.
+|`annotations` |Optional. An unstructured key value map stored with an audit event that may be set by plugins invoked in the request serving chain, including authentication, authorization and admission plugins. Note that these annotations are for the audit event, and do not correspond to the `metadata.annotations` of the submitted object. Keys should uniquely identify the informing component to avoid name collisions, for example `podsecuritypolicy.admission.k8s.io/policy`. Values should be short. Annotations are included in the metadata level.
|===
Example output for the Kubernetes API server:
diff --git a/modules/nodes-pods-configuring-run-once.adoc b/modules/nodes-pods-configuring-run-once.adoc
index cfcdfb9f0e..4322ae8bf4 100644
--- a/modules/nodes-pods-configuring-run-once.adoc
+++ b/modules/nodes-pods-configuring-run-once.adoc
@@ -11,12 +11,12 @@ or performing a build. Run-once pods are pods that have a `RestartPolicy` of
`Never` or `OnFailure`.
The cluster administrator can use the `RunOnceDuration` admission control
-plug-in to force a limit on the time that those run-once pods can be active.
+plugin to force a limit on the time that those run-once pods can be active.
Once the time limit expires, the cluster will try to actively terminate those
pods. The main reason to have such a limit is to prevent tasks such as builds to
run for an excessive amount of time.
-The plug-in configuration should include the default active deadline for
+The plugin configuration should include the default active deadline for
run-once pods. This deadline is enforced globally, but can be superseded on
a per-project basis.
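+
+The deadline that the plugin enforces corresponds to the pod-level `activeDeadlineSeconds` field. A minimal run-once pod sketch (image name hypothetical):
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: run-once-example
+spec:
+  restartPolicy: Never
+  activeDeadlineSeconds: 3600 # the cluster terminates the pod after one hour
+  containers:
+  - name: task
+    image: registry.example.com/build-task:latest # hypothetical image
+----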
diff --git a/modules/nodes-pods-plugins-about.adoc b/modules/nodes-pods-plugins-about.adoc
index d2b6ea7bd3..0169494a1e 100644
--- a/modules/nodes-pods-plugins-about.adoc
+++ b/modules/nodes-pods-plugins-about.adoc
@@ -5,22 +5,22 @@
:_content-type: CONCEPT
[id="nodes-pods-plugins-about_{context}"]
-= Understanding device plug-ins
+= Understanding device plugins
-The device plug-in provides a consistent and portable solution to consume hardware
-devices across clusters. The device plug-in provides support for these devices
+The device plugin provides a consistent and portable solution to consume hardware
+devices across clusters. The device plugin provides support for these devices
through an extension mechanism, which makes these devices available to
Containers, provides health checks of these devices, and securely shares them.
[IMPORTANT]
====
-{product-title} supports the device plug-in API, but the device plug-in
+{product-title} supports the device plugin API, but the device plugin
Containers are supported by individual vendors.
====
-A device plug-in is a gRPC service running on the nodes (external to
+A device plugin is a gRPC service running on the nodes (external to
the `kubelet`) that is responsible for managing specific
-hardware resources. Any device plug-in must support following remote procedure
+hardware resources. Any device plugin must support the following remote procedure
calls (RPCs):
[source,golang]
@@ -49,28 +49,29 @@ service DevicePlugin {
----
[discrete]
-=== Example device plug-ins
-* link:https://github.com/GoogleCloudPlatform/Container-engine-accelerators/tree/master/cmd/nvidia_gpu[Nvidia GPU device plug-in for COS-based operating system]
-* link:https://github.com/NVIDIA/k8s-device-plugin[Nvidia official GPU device plug-in]
-* link:https://github.com/vikaschoudhary16/sfc-device-plugin[Solarflare device plug-in]
-* link:https://github.com/kubevirt/kubernetes-device-plugins[KubeVirt device plug-ins: vfio and kvm]
-* link:https://github.com/ibm-s390-cloud/k8s-cex-dev-plugin[Kubernetes device plug-in for IBM Crypto Express (CEX) cards]
+=== Example device plugins
+* link:https://github.com/GoogleCloudPlatform/Container-engine-accelerators/tree/master/cmd/nvidia_gpu[Nvidia GPU device plugin for COS-based operating system]
+* link:https://github.com/NVIDIA/k8s-device-plugin[Nvidia official GPU device plugin]
+* link:https://github.com/vikaschoudhary16/sfc-device-plugin[Solarflare device plugin]
+* link:https://github.com/kubevirt/kubernetes-device-plugins[KubeVirt device plugins: vfio and kvm]
+* link:https://github.com/ibm-s390-cloud/k8s-cex-dev-plugin[Kubernetes device plugin for IBM Crypto Express (CEX) cards]
[NOTE]
====
-For easy device plug-in reference implementation, there is a stub device plug-in
+For a simple device plugin reference implementation, there is a stub device plugin
in the Device Manager code:
*_vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go_*.
====
-== Methods for deploying a device plug-in
+[id="methods-for-deploying-a-device-plugin_{context}"]
+== Methods for deploying a device plugin
-* Daemon sets are the recommended approach for device plug-in deployments.
-* Upon start, the device plug-in will try to create a UNIX domain socket at
+* Daemon sets are the recommended approach for device plugin deployments.
+* Upon start, the device plugin will try to create a UNIX domain socket at
*_/var/lib/kubelet/device-plugin/_* on the node to serve RPCs from Device Manager.
-* Since device plug-ins must manage hardware resources, access to the host
+* Since device plugins must manage hardware resources, access to the host
file system, as well as socket creation, they must be run in a privileged
security context.
* More specific details regarding deployment steps can be found with each device
-plug-in implementation.
+plugin implementation. A minimal daemon set sketch follows.
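+
+A minimal daemon set sketch that follows these constraints (image and names hypothetical):
+
+[source,yaml]
+----
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: example-device-plugin # hypothetical name
+spec:
+  selector:
+    matchLabels:
+      name: example-device-plugin
+  template:
+    metadata:
+      labels:
+        name: example-device-plugin
+    spec:
+      containers:
+      - name: device-plugin
+        image: registry.example.com/device-plugin:latest # hypothetical image
+        securityContext:
+          privileged: true # required for hardware and socket access
+        volumeMounts:
+        - name: device-plugins
+          mountPath: /var/lib/kubelet/device-plugins
+      volumes:
+      - name: device-plugins
+        hostPath:
+          path: /var/lib/kubelet/device-plugins # where the plugin creates its UNIX domain socket
+----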
diff --git a/modules/nodes-pods-plugins-device-mgr.adoc b/modules/nodes-pods-plugins-device-mgr.adoc
index 54bcb57284..33c13b4671 100644
--- a/modules/nodes-pods-plugins-device-mgr.adoc
+++ b/modules/nodes-pods-plugins-device-mgr.adoc
@@ -8,13 +8,13 @@
= Understanding the Device Manager
Device Manager provides a mechanism for advertising specialized node hardware resources
-with the help of plug-ins known as device plug-ins.
+with the help of plugins known as device plugins.
You can advertise specialized hardware without requiring any upstream code changes.
[IMPORTANT]
====
-{product-title} supports the device plug-in API, but the device plug-in
+{product-title} supports the device plugin API, but the device plugin
Containers are supported by individual vendors.
====
@@ -22,25 +22,25 @@ Device Manager advertises devices as *Extended Resources*. User pods can consume
devices, advertised by Device Manager, using the same *Limit/Request* mechanism,
which is used for requesting any other *Extended Resource*.
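+
+For example, a pod might request a device advertised by a device plugin through its resource limits (the resource name is a hypothetical extended resource):
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: device-consumer
+spec:
+  containers:
+  - name: app
+    image: registry.example.com/app:latest # hypothetical image
+    resources:
+      limits:
+        example.com/device: 1 # hypothetical extended resource
+----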
-Upon start, the device plug-in registers itself with Device Manager invoking `Register` on the
+Upon start, the device plugin registers itself with Device Manager by invoking `Register` on the
*_/var/lib/kubelet/device-plugins/kubelet.sock_* and starts a gRPC service at
*_/var/lib/kubelet/device-plugins/.sock_* for serving Device Manager
requests.
Device Manager, while processing a new registration request, invokes
-`ListAndWatch` remote procedure call (RPC) at the device plug-in service. In
-response, Device Manager gets a list of *Device* objects from the plug-in over a
+`ListAndWatch` remote procedure call (RPC) at the device plugin service. In
+response, Device Manager gets a list of *Device* objects from the plugin over a
gRPC stream. Device Manager will keep watching on the stream for new updates
-from the plug-in. On the plug-in side, the plug-in will also keep the stream
+from the plugin. On the plugin side, the plugin will also keep the stream
open and whenever there is a change in the state of any of the devices, a new
device list is sent to the Device Manager over the same streaming connection.
While handling a new pod admission request, Kubelet passes requested `Extended
Resources` to the Device Manager for device allocation. Device Manager checks in
-its database to verify if a corresponding plug-in exists or not. If the plug-in exists
+its database to verify whether a corresponding plugin exists. If the plugin exists
and there are free allocatable devices as well as per local cache, `Allocate`
-RPC is invoked at that particular device plug-in.
+RPC is invoked at that particular device plugin.
-Additionally, device plug-ins can also perform several other device-specific
+Additionally, device plugins can also perform several other device-specific
operations, such as driver installation, device initialization, and device
resets. These functionalities vary from implementation to implementation.
diff --git a/modules/nodes-pods-plugins-install.adoc b/modules/nodes-pods-plugins-install.adoc
index 0016930b0d..37159dc349 100644
--- a/modules/nodes-pods-plugins-install.adoc
+++ b/modules/nodes-pods-plugins-install.adoc
@@ -7,11 +7,11 @@
[id="nodes-pods-plugins-install_{context}"]
= Enabling Device Manager
-Enable Device Manager to implement a device plug-in to advertise specialized
+Enable Device Manager to implement a device plugin to advertise specialized
hardware without any upstream code changes.
Device Manager provides a mechanism for advertising specialized node hardware resources
-with the help of plug-ins known as device plug-ins.
+with the help of plugins known as device plugins.
. Obtain the label associated with the static `MachineConfigPool` CRD for the type of node you want to configure by entering the following command.
Perform one of the following steps:
@@ -78,5 +78,5 @@ kubeletconfig.machineconfiguration.openshift.io/devicemgr created
. Ensure that Device Manager was actually enabled by confirming that
*_/var/lib/kubelet/device-plugins/kubelet.sock_* is created on the node. This is
the UNIX domain socket on which the Device Manager gRPC server listens for new
-plug-in registrations. This sock file is created when the Kubelet is started
+plugin registrations. This sock file is created when the Kubelet is started
only if Device Manager is enabled.
diff --git a/modules/nodes-pods-secrets-about.adoc b/modules/nodes-pods-secrets-about.adoc
index 91b0c595ef..f8a2a00616 100644
--- a/modules/nodes-pods-secrets-about.adoc
+++ b/modules/nodes-pods-secrets-about.adoc
@@ -10,7 +10,7 @@ The `Secret` object type provides a mechanism to hold sensitive information such
as passwords, {product-title} client configuration files,
private source repository credentials, and so on. Secrets decouple sensitive
content from the pods. You can mount secrets into containers using a volume
-plug-in or the system can use secrets to perform actions on behalf of a pod.
+plugin or the system can use secrets to perform actions on behalf of a pod.
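+
+For example, mounting a secret into a container as a volume might look like the following sketch (names hypothetical):
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-example-pod
+spec:
+  containers:
+  - name: app
+    image: registry.example.com/app:latest # hypothetical image
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+      readOnly: true
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: test-secret # hypothetical Secret name
+----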
Key properties include:
diff --git a/modules/nodes-scheduler-node-projects-about.adoc b/modules/nodes-scheduler-node-projects-about.adoc
index 36cb7f8e08..fe0fbb08d4 100644
--- a/modules/nodes-scheduler-node-projects-about.adoc
+++ b/modules/nodes-scheduler-node-projects-about.adoc
@@ -26,8 +26,8 @@ metadata:
This admission controller has the following behavior:
. If the Namespace has an annotation with a key scheduler.alpha.kubernetes.io/node-selector, use its value as the node selector.
-. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the `PodNodeSelector` plug-in configuration file as the node selector.
+. If the namespace lacks such an annotation, use the `clusterDefaultNodeSelector` defined in the `PodNodeSelector` plugin configuration file as the node selector.
. Evaluate the pod's node selector against the namespace node selector for conflicts. Conflicts result in rejection.
-. Evaluate the pod's node selector against the namespace-specific whitelist defined the plug-in configuration file. Conflicts result in rejection.
+. Evaluate the pod's node selector against the namespace-specific whitelist defined in the plugin configuration file. Conflicts result in rejection. A sketch of this configuration file follows.
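+
+A sketch of such a configuration file, assuming the upstream `PodNodeSelector` file format (selector values hypothetical):
+
+[source,yaml]
+----
+podNodeSelectorPluginConfig:
+  clusterDefaultNodeSelector: region=west # used when a namespace has no annotation
+  my-project: region=east # namespace-specific whitelist entry (hypothetical namespace)
+----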
diff --git a/modules/nodes-scheduler-profiles-about.adoc b/modules/nodes-scheduler-profiles-about.adoc
index b98bc0be3a..4bdf6e840b 100644
--- a/modules/nodes-scheduler-profiles-about.adoc
+++ b/modules/nodes-scheduler-profiles-about.adoc
@@ -14,4 +14,4 @@ The following scheduler profiles are available:
`HighNodeUtilization`:: This profile attempts to place as many pods as possible on to as few nodes as possible. This minimizes node count and has high resource usage per node.
-`NoScoring`:: This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plug-ins. This might sacrifice better scheduling decisions for faster ones.
+`NoScoring`:: This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plugins. This might sacrifice better scheduling decisions for faster ones.
diff --git a/modules/nodes-secondary-scheduler-about.adoc b/modules/nodes-secondary-scheduler-about.adoc
index 70ad874f3c..7ba5e5df58 100644
--- a/modules/nodes-secondary-scheduler-about.adoc
+++ b/modules/nodes-secondary-scheduler-about.adoc
@@ -15,4 +15,4 @@ The custom scheduler must have the `/bin/kube-scheduler` binary and be based on
You can use the {secondary-scheduler-operator} to deploy a custom secondary scheduler in {product-title}, but Red Hat does not directly support the functionality of the custom secondary scheduler.
====
-The {secondary-scheduler-operator} creates the default roles and role bindings required by the secondary scheduler. You can specify which scheduling plug-ins to enable or disable by configuring the `KubeSchedulerConfiguration` resource for the secondary scheduler.
+The {secondary-scheduler-operator} creates the default roles and role bindings required by the secondary scheduler. You can specify which scheduling plugins to enable or disable by configuring the `KubeSchedulerConfiguration` resource for the secondary scheduler.
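+
+A sketch of such a configuration, assuming the upstream `KubeSchedulerConfiguration` `v1beta3` API (scheduler and plugin names illustrative):
+
+[source,yaml]
+----
+apiVersion: kubescheduler.config.k8s.io/v1beta3
+kind: KubeSchedulerConfiguration
+profiles:
+- schedulerName: secondary-scheduler # hypothetical scheduler name
+  plugins:
+    score:
+      disabled:
+      - name: NodeResourcesBalancedAllocation # example default score plugin to disable
+----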
diff --git a/modules/nodes-secondary-scheduler-configuring-console.adoc b/modules/nodes-secondary-scheduler-configuring-console.adoc
index 5255542800..bf62a79756 100644
--- a/modules/nodes-secondary-scheduler-configuring-console.adoc
+++ b/modules/nodes-secondary-scheduler-configuring-console.adoc
@@ -47,7 +47,7 @@ data:
<2> The config map must be created in the `openshift-secondary-scheduler-operator` namespace.
<3> The `KubeSchedulerConfiguration` resource for the secondary scheduler. For more information, see link:https://kubernetes.io/docs/reference/config-api/kube-scheduler-config.v1beta3/#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration[`KubeSchedulerConfiguration`] in the Kubernetes API documentation.
<4> The name of the secondary scheduler. Pods that set their `spec.schedulerName` field to this value are scheduled with this secondary scheduler.
-<5> The plug-ins to enable or disable for the secondary scheduler. For a list default scheduling plug-ins, see link:https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins[Scheduling plugins] in the Kubernetes documentation.
+<5> The plugins to enable or disable for the secondary scheduler. For a list of default scheduling plugins, see link:https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins[Scheduling plugins] in the Kubernetes documentation.
.. Click *Create*.
diff --git a/modules/nvidia-gpu-admin-dashboard-installing.adoc b/modules/nvidia-gpu-admin-dashboard-installing.adoc
index 3ee5b9ca70..f613931960 100644
--- a/modules/nvidia-gpu-admin-dashboard-installing.adoc
+++ b/modules/nvidia-gpu-admin-dashboard-installing.adoc
@@ -6,9 +6,9 @@
[id="nvidia-gpu-admin-dashboard-installing_{context}"]
= Installing the NVIDIA GPU administration dashboard
-Install the NVIDIA GPU plug-in by using Helm on the OpenShift Container Platform (OCP) Console to add GPU capabilities.
+Install the NVIDIA GPU plugin by using Helm on the OpenShift Container Platform (OCP) Console to add GPU capabilities.
-The OpenShift Console NVIDIA GPU plug-in works as a remote bundle for the OCP console. To run the OpenShift Console NVIDIA GPU plug-in
+The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the OpenShift Console NVIDIA GPU plugin,
an instance of the OCP console must be running.
@@ -21,7 +21,7 @@ an instance of the OCP console must be running.
.Procedure
-Use the following procedure to install the OpenShift Console NVIDIA GPU plug-in.
+Use the following procedure to install the OpenShift Console NVIDIA GPU plugin.
. Add the Helm repository:
+
diff --git a/modules/nvidia-gpu-admin-dashboard-introduction.adoc b/modules/nvidia-gpu-admin-dashboard-introduction.adoc
index 7eb9b3b6db..8c3cb76a9f 100644
--- a/modules/nvidia-gpu-admin-dashboard-introduction.adoc
+++ b/modules/nvidia-gpu-admin-dashboard-introduction.adoc
@@ -6,9 +6,9 @@
[id="nvidia-gpu-admin-dashboard-introduction_{context}"]
= Introduction
-The OpenShift Console NVIDIA GPU plug-in is a dedicated administration dashboard for NVIDIA GPU usage visualization
+The OpenShift Console NVIDIA GPU plugin is a dedicated administration dashboard for NVIDIA GPU usage visualization
in the OpenShift Container Platform (OCP) Console. The visualizations in the administration dashboard provide guidance on how to
best optimize GPU resources in clusters, such as when a GPU is under- or over-utilized.
-The OpenShift Console NVIDIA GPU plug-in works as a remote bundle for the OCP console.
-To run the plug-in the OCP console must be running.
+The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console.
+To run the plugin, the OCP console must be running.
diff --git a/modules/nvidia-gpu-admin-dashboard-using.adoc b/modules/nvidia-gpu-admin-dashboard-using.adoc
index 8199a4fdf4..24179938c6 100644
--- a/modules/nvidia-gpu-admin-dashboard-using.adoc
+++ b/modules/nvidia-gpu-admin-dashboard-using.adoc
@@ -6,7 +6,7 @@
[id="nvidia-gpu-admin-dashboard-using_{context}"]
= Using the NVIDIA GPU administration dashboard
-After deploying the OpenShift Console NVIDIA GPU plug-in, log in to the OpenShift Container Platform web console using your login credentials to access the *Administrator* perspective.
+After deploying the OpenShift Console NVIDIA GPU plugin, log in to the OpenShift Container Platform web console using your login credentials to access the *Administrator* perspective.
To view the changes, you need to refresh the console to see the **GPUs** tab under **Compute**.
diff --git a/modules/nw-about-multicast.adoc b/modules/nw-about-multicast.adoc
index 59f9285e28..1f4298e4b6 100644
--- a/modules/nw-about-multicast.adoc
+++ b/modules/nw-about-multicast.adoc
@@ -24,15 +24,15 @@ At this time, multicast is best used for low-bandwidth coordination or service
discovery and not a high-bandwidth solution.
====
-Multicast traffic between {product-title} pods is disabled by default. If you are using the {sdn} network plug-in, you can enable multicast on a per-project basis.
+Multicast traffic between {product-title} pods is disabled by default. If you are using the {sdn} network plugin, you can enable multicast on a per-project basis.
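+
+With OVN-Kubernetes, for example, the per-project switch is a namespace annotation (a sketch; OpenShift SDN uses an equivalent annotation on the `NetNamespace` object):
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: my-project # hypothetical project name
+  annotations:
+    k8s.ovn.org/multicast-enabled: "true"
+----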
ifdef::openshift-sdn[]
-When using the OpenShift SDN network plug-in in `networkpolicy` isolation mode:
+When using the OpenShift SDN network plugin in `networkpolicy` isolation mode:
* Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of `NetworkPolicy` objects. Pods might be able to communicate over multicast even when they cannot communicate over unicast.
* Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are `NetworkPolicy` objects that allow communication between the projects.
-When using the OpenShift SDN network plug-in in `multitenant` isolation mode:
+When using the OpenShift SDN network plugin in `multitenant` isolation mode:
* Multicast packets sent by a pod will be delivered to all other pods in the
project.
diff --git a/modules/nw-cfg-tuning-interface-cni.adoc b/modules/nw-cfg-tuning-interface-cni.adoc
index 98c656bece..a74479ff4e 100644
--- a/modules/nw-cfg-tuning-interface-cni.adoc
+++ b/modules/nw-cfg-tuning-interface-cni.adoc
@@ -23,7 +23,7 @@ spec:
"cniVersion": "0.4.0", <3>
"name": "", <4>
"plugins": [{
- "type": "" <5>
+ "type": "" <5>
},
{
"type": "tuning", <6>
@@ -38,8 +38,8 @@ spec:
<2> Specifies the namespace that the object is associated with.
<3> Specifies the CNI specification version.
<4> Specifies the name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
-<5> Specifies the name of the main CNI plug-in to configure.
-<6> Specifies the name of the CNI meta plug-in.
+<5> Specifies the name of the main CNI plugin to configure.
+<6> Specifies the name of the CNI meta plugin.
<7> Specifies the sysctl to set.
+
An example yaml file is shown here:
diff --git a/modules/nw-cluster-mtu-change-about.adoc b/modules/nw-cluster-mtu-change-about.adoc
index e00600e062..0520651c6e 100644
--- a/modules/nw-cluster-mtu-change-about.adoc
+++ b/modules/nw-cluster-mtu-change-about.adoc
@@ -13,7 +13,7 @@ You might want to change the MTU of the cluster network for several reasons:
* The MTU detected during cluster installation is not correct for your infrastructure
* Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance
-You can change the cluster MTU for only the OVN-Kubernetes and OpenShift SDN cluster network plug-ins.
+You can change the cluster MTU for only the OVN-Kubernetes and OpenShift SDN cluster network plugins.
// https://github.com/openshift/enhancements/blob/master/enhancements/network/allow-mtu-changes.md
[id="service-interruption-considerations_{context}"]
@@ -31,11 +31,11 @@ When you initiate an MTU change on your cluster the following effects might impa
When planning your MTU migration there are two related but distinct MTU values to consider.
* *Hardware MTU*: This MTU value is set based on the specifics of your network infrastructure.
-* *Cluster network MTU*: This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plug-in:
+* *Cluster network MTU*: This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin:
** *OVN-Kubernetes*: `100` bytes
** *OpenShift SDN*: `50` bytes
-If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plug-in from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of `9001`, and some have an MTU of `1500`, you must set this value to `1400`.
+If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of `9001`, and some have an MTU of `1500`, you must set this value to `1400`.
[id="how-the-migration-process-works_{context}"]
== How the migration process works
@@ -60,7 +60,7 @@ Set the following values in the Cluster Network Operator configuration:
- The `mtu.machine.to` must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line.
- The `mtu.network.from` field must equal the `network.status.clusterNetworkMTU` field, which is the current MTU of the cluster network.
-- The `mtu.network.to` field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plug-in. For OVN-Kubernetes, the overhead is `100` bytes and for OpenShift SDN the overhead is `50` bytes.
+- The `mtu.network.to` field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. For OVN-Kubernetes, the overhead is `100` bytes and for OpenShift SDN the overhead is `50` bytes.
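+
+Assembled, the migration stanza described by these fields might look like the following sketch (values illustrative, using the OVN-Kubernetes overhead of `100` bytes):
+
+[source,yaml]
+----
+spec:
+  migration:
+    mtu:
+      network:
+        from: 1400 # current cluster network MTU
+        to: 9000   # target cluster network MTU
+      machine:
+        to: 9100   # hardware MTU; 9100 - 100 bytes of overlay overhead = 9000
+----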
If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the `mtu.network.to` field.
@@ -73,7 +73,7 @@ If the values provided are valid, the CNO writes out a new temporary configurati
- Changing the MTU through boot parameters
|N/A
-|Set the `mtu` value in the CNO configuration for the network plug-in and set `spec.migration` to `null`.
+|Set the `mtu` value in the CNO configuration for the network plugin and set `spec.migration` to `null`.
|
*Machine Config Operator (MCO)*: Performs a rolling reboot of each node in the cluster with the new MTU configuration.
diff --git a/modules/nw-cluster-mtu-change.adoc b/modules/nw-cluster-mtu-change.adoc
index 8c542bcb43..ab86f3eedf 100644
--- a/modules/nw-cluster-mtu-change.adoc
+++ b/modules/nw-cluster-mtu-change.adoc
@@ -14,7 +14,7 @@ The following procedure describes how to change the cluster MTU by using either
* You installed the OpenShift CLI (`oc`).
* You are logged in to the cluster with a user with `cluster-admin` privileges.
-* You identified the target MTU for your cluster. The correct MTU varies depending on the network plug-in that your cluster uses:
+* You identified the target MTU for your cluster. The correct MTU varies depending on the network plugin that your cluster uses:
** *OVN-Kubernetes*: The cluster MTU must be set to `100` less than the lowest hardware MTU value in your cluster.
** *OpenShift SDN*: The cluster MTU must be set to `50` less than the lowest hardware MTU value in your cluster.
@@ -65,7 +65,7 @@ where:
... Find the primary network interface:
-**** If you are using the OpenShift SDN network plug-in, enter the following command:
+**** If you are using the OpenShift SDN network plugin, enter the following command:
+
[source,terminal]
----
@@ -78,7 +78,7 @@ where:
``:: Specifies the name of a node in your cluster.
--
-**** If you are using the OVN-Kubernetes network plug-in, enter the following command:
+**** If you are using the OVN-Kubernetes network plugin, enter the following command:
+
[source,terminal]
----
@@ -118,7 +118,7 @@ ovs-port-phys0 332ef950-b2e5-4991-a0dc-3158977c35ca ovs-port ens4
----
+
--
-For the OVN-Kubernetes network plug-in, two or three connection manager profiles are returned.
+For the OVN-Kubernetes network plugin, two or three connection manager profiles are returned.
* If the previous command returns only two profiles, then you must use a default NetworkManager connection configuration as a template.
* If the previous command returns three profiles, use the profile that is not named `ovs-if-phys0` or `ovs-port-phys0` as a template for the following modifications.
@@ -194,7 +194,7 @@ nm-generated=true
**** Set the following values:
***** `802-3-ethernet.mtu`: Specify the MTU for the primary network interface of the system.
***** `connection.interface-name`: Optional: Specify the network interface name that this configuration applies to.
-***** `connection.autoconnect-priority`: Optional: Consider specifying an integer priority value above `0` to ensure this profile is used over other profiles for the same interface. If you are using the OVN-Kubernetes network plug-in, this value must be less than `100`.
+***** `connection.autoconnect-priority`: Optional: Consider specifying an integer priority value above `0` to ensure this profile is used over other profiles for the same interface. If you are using the OVN-Kubernetes network plugin, this value must be less than `100`.
**** Remove the `connection.uuid` field.
**** Change the following values:
***** `connection.id`: Optional: Specify a different NetworkManager connection profile name.
@@ -409,7 +409,7 @@ If the machine config is successfully deployed, the previous output contains the
The machine config must not contain the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
. To finalize the MTU migration, enter one of the following commands:
-** If you are using the OVN-Kubernetes network plug-in:
+** If you are using the OVN-Kubernetes network plugin:
+
[source,terminal]
+
@@ -424,7 +424,7 @@ where:
``:: Specifies the new cluster network MTU that you specified with ``.
--
-** If you are using the OpenShift SDN network plug-in:
+** If you are using the OpenShift SDN network plugin:
+
[source,terminal]
----
diff --git a/modules/nw-cluster-network-operator.adoc b/modules/nw-cluster-network-operator.adoc
index e811df25c9..0b2433d775 100644
--- a/modules/nw-cluster-network-operator.adoc
+++ b/modules/nw-cluster-network-operator.adoc
@@ -6,7 +6,7 @@
= Cluster Network Operator
The Cluster Network Operator implements the `network` API from the `operator.openshift.io` API group.
-The Operator deploys the OVN-Kubernetes network plug-in, or the network provider plug-in that you selected during cluster installation, by using a daemon set.
+The Operator deploys the OVN-Kubernetes network plugin, or the network provider plugin that you selected during cluster installation, by using a daemon set.
.Procedure
diff --git a/modules/nw-configure-sysctl-interface-sriov-network-bonded.adoc b/modules/nw-configure-sysctl-interface-sriov-network-bonded.adoc
index 868a166c50..7649496032 100644
--- a/modules/nw-configure-sysctl-interface-sriov-network-bonded.adoc
+++ b/modules/nw-configure-sysctl-interface-sriov-network-bonded.adoc
@@ -13,7 +13,7 @@ You can set interface specific `sysctl` settings on a bonded interface created f
Do not edit `NetworkAttachmentDefinition` custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network.
====
-To change specific interface-level network `sysctl` settings create the `SriovNetwork` custom resource (CR) with the Container Network Interface (CNI) tuning plug-in by using the following procedure.
+To change specific interface-level network `sysctl` settings, create the `SriovNetwork` custom resource (CR) with the Container Network Interface (CNI) tuning plugin by using the following procedure.
.Prerequisites
@@ -110,7 +110,7 @@ For `balance-rr` or `balance-xor` modes, you must set the `trust` mode to `on` f
<4> The `failover` attribute is mandatory for active-backup mode.
<5> The `linksInContainer=true` flag informs the Bond CNI that the interfaces required are to be found inside the container. By default Bond CNI looks for these interfaces on the host which does not work for integration with SRIOV and Multus.
<6> The `links` section defines which interfaces will be used to create the bond. By default, Multus names the attached interfaces as: "net", plus a consecutive number, starting with one.
-<7> A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition. In this pod example IP addresses are configured manually, so in this case `ipam` is set to static.
+<7> A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition. In this pod example, IP addresses are configured manually, so `ipam` is set to `static`.
<8> Add additional capabilities to the device. For example, set the `type` field to `tuning`. Specify the interface-level network `sysctl` you want to set in the sysctl field. This example sets all interface-level network `sysctl` settings that can be set.
. Create the bond network attachment resource:
diff --git a/modules/nw-configure-sysctl-interface-sriov-network.adoc b/modules/nw-configure-sysctl-interface-sriov-network.adoc
index cdcebbed38..0ca2aee4cb 100644
--- a/modules/nw-configure-sysctl-interface-sriov-network.adoc
+++ b/modules/nw-configure-sysctl-interface-sriov-network.adoc
@@ -15,7 +15,7 @@ The SR-IOV Network Operator manages additional network definitions. When you spe
Do not edit `NetworkAttachmentDefinition` custom resources that the SR-IOV Network Operator manages. Doing so might disrupt network traffic on your additional network.
====
-To change the interface-level network `net.ipv4.conf.IFNAME.accept_redirects` `sysctl` settings, create an additional SR-IOV network with the Container Network Interface (CNI) tuning plug-in.
+To change the interface-level network `net.ipv4.conf.IFNAME.accept_redirects` `sysctl` settings, create an additional SR-IOV network with the Container Network Interface (CNI) tuning plugin.
.Prerequisites
@@ -53,7 +53,7 @@ spec:
<2> The namespace where the SR-IOV Network Operator is installed.
<3> The value for the `spec.resourceName` parameter from the `SriovNetworkNodePolicy` object that defines the SR-IOV hardware for this additional network.
<4> The target namespace for the `SriovNetwork` object. Only pods in the target namespace can attach to the additional network.
-<5> A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<5> A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
<6> Optional: Set capabilities for the additional network. You can specify `"{ "ips": true }"` to enable IP address support or `"{ "mac": true }"` to enable MAC address support.
<7> Optional: The metaPlugins parameter is used to add additional capabilities to the device. In this use case set the `type` field to `tuning`. Specify the interface-level network `sysctl` you want to set in the `sysctl` field.
diff --git a/modules/nw-dns-loglevel.adoc b/modules/nw-dns-loglevel.adoc
index 2b736c412f..b6c6690e20 100644
--- a/modules/nw-dns-loglevel.adoc
+++ b/modules/nw-dns-loglevel.adoc
@@ -9,7 +9,7 @@ You can configure the CoreDNS log level to determine the amount of detail in log
[NOTE]
====
-The errors plug-in is always enabled. The following `logLevel` settings report different error responses:
+The errors plugin is always enabled. The following `logLevel` settings report different error responses:
* `logLevel`: `Normal` enables the "errors" class: `log . { class error }`.
diff --git a/modules/nw-dual-stack-convert-back-single-stack.adoc b/modules/nw-dual-stack-convert-back-single-stack.adoc
index 1c829875cc..85061f3b5d 100644
--- a/modules/nw-dual-stack-convert-back-single-stack.adoc
+++ b/modules/nw-dual-stack-convert-back-single-stack.adoc
@@ -8,7 +8,7 @@ As a cluster administrator, you can convert your dual-stack cluster network to a
* You installed the OpenShift CLI (`oc`).
* You are logged in to the cluster with a user with `cluster-admin` privileges.
-* Your cluster uses the OVN-Kubernetes network plug-in.
+* Your cluster uses the OVN-Kubernetes network plugin.
* The cluster nodes have IPv6 addresses.
* You have enabled dual-stack networking.
diff --git a/modules/nw-dual-stack-convert.adoc b/modules/nw-dual-stack-convert.adoc
index eef18f9d9a..d409d8bca5 100644
--- a/modules/nw-dual-stack-convert.adoc
+++ b/modules/nw-dual-stack-convert.adoc
@@ -13,7 +13,7 @@ After converting to dual-stack networking only newly created pods are assigned I
* You installed the OpenShift CLI (`oc`).
* You are logged in to the cluster with a user with `cluster-admin` privileges.
-* Your cluster uses the OVN-Kubernetes network plug-in.
+* Your cluster uses the OVN-Kubernetes network plugin.
* The cluster nodes have IPv6 addresses.
* You have configured an IPv6-enabled router based on your infrastructure.
diff --git a/modules/nw-egress-ips-about.adoc b/modules/nw-egress-ips-about.adoc
index f3f046c8da..00d163fd86 100644
--- a/modules/nw-egress-ips-about.adoc
+++ b/modules/nw-egress-ips-about.adoc
@@ -76,7 +76,7 @@ The annotation value is an array with a single object with fields that provide t
* `ifaddr`: Specifies the subnet mask for one or both IP address families.
* `capacity`: Specifies the IP address capacity for the node. On AWS, the IP address capacity is provided per IP address family. On Azure and GCP, the IP address capacity includes both IPv4 and IPv6 addresses.
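+
+As a sketch, such a node annotation might look like the following (the annotation key and all values are illustrative assumptions):
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Node
+metadata:
+  name: worker-0 # hypothetical node name
+  annotations:
+    # assumed annotation key; values are illustrative only
+    cloud.network.openshift.io/egress-ipconfig: '[{"ifaddr":{"ipv4":"10.0.128.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]'
+----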
-Automatic attachment and detachment of egress IP addresses for traffic between nodes are available. This allows for traffic from many pods in namespaces to have a consistent source IP address to locations outside of the cluster. This also supports OpenShift SDN and OVN-Kubernetes, which is the default networking plug-in in Red Hat OpenShift Networking in {product-title} {product-version}.
+Automatic attachment and detachment of egress IP addresses for traffic between nodes is available. This gives traffic from many pods in a namespace a consistent source IP address for locations outside of the cluster. Both OpenShift SDN and OVN-Kubernetes, the default networking plugin in Red Hat OpenShift Networking in {product-title} {product-version}, support this feature.
[NOTE]
====
@@ -142,13 +142,13 @@ ifdef::openshift-sdn[]
[id="nw-egress-ips-limitations_{context}"]
== Limitations
-The following limitations apply when using egress IP addresses with the OpenShift SDN network plug-in:
+The following limitations apply when using egress IP addresses with the OpenShift SDN network plugin:
- You cannot use manually assigned and automatically assigned egress IP addresses on the same nodes.
- If you manually assign egress IP addresses from an IP address range, you must not make that range available for automatic IP assignment.
- You cannot share egress IP addresses across multiple namespaces using the OpenShift SDN egress IP address implementation.
-If you need to share IP addresses across namespaces, the OVN-Kubernetes network plug-in egress IP address implementation allows you to span IP addresses across multiple namespaces.
+If you need to share IP addresses across namespaces, the OVN-Kubernetes network plugin egress IP address implementation allows you to span IP addresses across multiple namespaces.
[NOTE]
====
diff --git a/modules/nw-egress-router-about.adoc b/modules/nw-egress-router-about.adoc
index 91d4c3e570..2962968c29 100644
--- a/modules/nw-egress-router-about.adoc
+++ b/modules/nw-egress-router-about.adoc
@@ -50,7 +50,7 @@ endif::openshift-sdn[]
ifdef::ovn[]
[NOTE]
====
-The egress router CNI plug-in supports redirect mode only. This is a difference with the egress router implementation that you can deploy with OpenShift SDN. Unlike the egress router for OpenShift SDN, the egress router CNI plug-in does not support HTTP proxy mode or DNS proxy mode.
+The egress router CNI plugin supports redirect mode only. Unlike the egress router implementation that you can deploy with OpenShift SDN, the egress router CNI plugin does not support HTTP proxy mode or DNS proxy mode.
====
endif::ovn[]
@@ -68,7 +68,7 @@ If only some of the nodes in your cluster are capable of claiming the specified
endif::openshift-sdn[]
ifdef::ovn[]
-The egress router implementation uses the egress router Container Network Interface (CNI) plug-in. The plug-in adds a secondary network interface to a pod.
+The egress router implementation uses the egress router Container Network Interface (CNI) plugin. The plugin adds a secondary network interface to a pod.
An egress router is a pod that has two network interfaces. For example, the pod can have `eth0` and `net1` network interfaces. The `eth0` interface is on the cluster network and the pod continues to use the interface for ordinary cluster-related network traffic. The `net1` interface is on a secondary network and has an IP address and gateway for that network. Other pods in the {product-title} cluster can access the egress router service and the service enables the pods to access external services. The egress router acts as a bridge between pods and an external system.
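+
+A redirect-mode egress router definition might look like the following sketch (shape assumed from the `EgressRouter` custom resource; addresses and rules illustrative):
+
+[source,yaml]
+----
+apiVersion: network.operator.openshift.io/v1
+kind: EgressRouter
+metadata:
+  name: egress-router-redirect
+spec:
+  mode: Redirect
+  networkInterface:
+    macvlan:
+      mode: Bridge
+  addresses:
+  - ip: 192.168.12.99/24 # IP address on the secondary network
+    gateway: 192.168.12.1
+  redirect:
+    redirectRules:
+    - destinationIP: 10.0.0.99 # external destination for redirected traffic
+      port: 80
+      protocol: TCP
+----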
diff --git a/modules/nw-egressnetworkpolicy-about.adoc b/modules/nw-egressnetworkpolicy-about.adoc
index 6d2fe7173d..699798b52e 100644
--- a/modules/nw-egressnetworkpolicy-about.adoc
+++ b/modules/nw-egressnetworkpolicy-about.adoc
@@ -49,7 +49,7 @@ endif::ovn[]
If your egress firewall includes a deny rule for `0.0.0.0/0`, access to your {product-title} API servers is blocked. You must include the IP address range that the API servers listen on in your egress firewall rules.
ifdef::ovn[]
-If you use the OVN-Kubernetes network plug-in, you must include the built-in join network `100.64.0.0/16` to allow access when using node ports together with an egress firewall. If you changed this join network during cluster installation, use the value that you specified instead of `100.64.0.0/16`.
+If you use the OVN-Kubernetes network plugin, you must include the built-in join network `100.64.0.0/16` to allow access when using node ports together with an egress firewall. If you changed this join network during cluster installation, use the value that you specified instead of `100.64.0.0/16`.
endif::ovn[]
The following example illustrates the order of the egress firewall rules necessary to ensure API server access:
@@ -104,14 +104,14 @@ An egress firewall has the following limitations:
ifdef::ovn[]
* A maximum of one {kind} object with a maximum of 8,000 rules can be defined per project.
-* If you are using the OVN-Kubernetes network plug-in with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
+* If you are using the OVN-Kubernetes network plugin with shared gateway mode in Red Hat OpenShift Networking, return ingress replies are affected by egress firewall rules. If the egress firewall rules drop the ingress reply destination IP, the traffic is dropped.
endif::ovn[]
ifdef::openshift-sdn[]
* A maximum of one {kind} object with a maximum of 1,000 rules can be defined per project.
* The `default` project cannot use an egress firewall.
-* When using the OpenShift SDN network plug-in in multitenant mode, the following limitations apply:
+* When using the OpenShift SDN network plugin in multitenant mode, the following limitations apply:
- Global projects cannot use an egress firewall. You can make a project global by using the `oc adm pod-network make-projects-global` command.
diff --git a/modules/nw-egressnetworkpolicy-create.adoc b/modules/nw-egressnetworkpolicy-create.adoc
index e32990f28a..c4b614187a 100644
--- a/modules/nw-egressnetworkpolicy-create.adoc
+++ b/modules/nw-egressnetworkpolicy-create.adoc
@@ -27,7 +27,7 @@ If the project already has an {kind} object defined, you must edit the existing
.Prerequisites
-* A cluster that uses the {cni} network plug-in.
+* A cluster that uses the {cni} network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
diff --git a/modules/nw-egressnetworkpolicy-delete.adoc b/modules/nw-egressnetworkpolicy-delete.adoc
index 4fa0519ce3..06c8f09efd 100644
--- a/modules/nw-egressnetworkpolicy-delete.adoc
+++ b/modules/nw-egressnetworkpolicy-delete.adoc
@@ -22,7 +22,7 @@ As a cluster administrator, you can remove an egress firewall from a project.
.Prerequisites
-* A cluster using the {cni} network plug-in.
+* A cluster using the {cni} network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
diff --git a/modules/nw-egressnetworkpolicy-edit.adoc b/modules/nw-egressnetworkpolicy-edit.adoc
index 75f80f334f..592d57846e 100644
--- a/modules/nw-egressnetworkpolicy-edit.adoc
+++ b/modules/nw-egressnetworkpolicy-edit.adoc
@@ -22,7 +22,7 @@ As a cluster administrator, you can update the egress firewall for a project.
.Prerequisites
-* A cluster using the {cni} network plug-in.
+* A cluster using the {cni} network plugin.
* Install the OpenShift CLI (`oc`).
* You must log in to the cluster as a cluster administrator.
diff --git a/modules/nw-egressnetworkpolicy-view.adoc b/modules/nw-egressnetworkpolicy-view.adoc
index fbbdcf61e7..7b7605f034 100644
--- a/modules/nw-egressnetworkpolicy-view.adoc
+++ b/modules/nw-egressnetworkpolicy-view.adoc
@@ -22,7 +22,7 @@ You can view an {kind} object in your cluster.
.Prerequisites
-* A cluster using the {cni} network plug-in.
+* A cluster using the {cni} network plugin.
* Install the OpenShift Command-line Interface (CLI), commonly known as `oc`.
* You must log in to the cluster.
diff --git a/modules/nw-high-performance-multicast.adoc b/modules/nw-high-performance-multicast.adoc
index 3cfc143547..8bdf59f48d 100644
--- a/modules/nw-high-performance-multicast.adoc
+++ b/modules/nw-high-performance-multicast.adoc
@@ -5,7 +5,7 @@
[id="nw-high-performance-multicast_{context}"]
= High performance multicast
-The OpenShift SDN network plug-in supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications.
+The OpenShift SDN network plugin supports multicast between pods on the default network. This is best used for low-bandwidth coordination or service discovery, and not high-bandwidth applications.
For applications such as streaming media, like Internet Protocol television (IPTV) and multipoint videoconferencing, you can utilize Single Root I/O Virtualization (SR-IOV) hardware to provide near-native performance.
When using additional SR-IOV interfaces for multicast:
diff --git a/modules/nw-modifying-operator-install-config.adoc b/modules/nw-modifying-operator-install-config.adoc
index 4546bd590d..5f05a3de15 100644
--- a/modules/nw-modifying-operator-install-config.adoc
+++ b/modules/nw-modifying-operator-install-config.adoc
@@ -30,7 +30,7 @@ endif::[]
[id="modifying-nwoperator-config-startup_{context}"]
= Specifying advanced network configuration
-You can use advanced network configuration for your network plug-in to integrate your cluster into your existing network environment.
+You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment.
You can specify advanced network configuration only before you install the cluster.
[IMPORTANT]
diff --git a/modules/nw-multus-advanced-annotations.adoc b/modules/nw-multus-advanced-annotations.adoc
index 4d6c5c8e71..bf1595c01c 100644
--- a/modules/nw-multus-advanced-annotations.adoc
+++ b/modules/nw-multus-advanced-annotations.adoc
@@ -123,12 +123,12 @@ creating. The name must be unique within the specified `namespace`.
<2> Specify the namespace to create the network attachment in. If
you do not specify a value, then the `default` namespace is used.
-<3> Specify the CNI plug-in configuration in JSON format, which
+<3> Specify the CNI plugin configuration in JSON format, which
is based on the following template.
-The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plug-in:
+The following object describes the configuration parameters for using a static MAC address and IP address with the macvlan CNI plugin:
-.macvlan CNI plug-in JSON configuration object using static IP and MAC address
+.macvlan CNI plugin JSON configuration object using static IP and MAC address
[source,json]
----
{
@@ -151,13 +151,13 @@ The following object describes the configuration parameters for utilizing static
<1> Specifies the name for the additional network attachment to create. The name must be unique within the specified `namespace`.
-<2> Specifies an array of CNI plug-in configurations. The first object specifies a macvlan plug-in configuration and the second object specifies a tuning plug-in configuration.
+<2> Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration.
-<3> Specifies that a request is made to enable the static IP address functionality of the CNI plug-in runtime configuration capabilities.
+<3> Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities.
-<4> Specifies the interface that the macvlan plug-in uses.
+<4> Specifies the interface that the macvlan plugin uses.
-<5> Specifies that a request is made to enable the static MAC address functionality of a CNI plug-in.
+<5> Specifies that a request is made to enable the static MAC address functionality of a CNI plugin.
The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod.
@@ -168,7 +168,7 @@ Edit the pod with:
$ oc edit pod
----
-.macvlan CNI plug-in JSON configuration object using static IP and MAC address
+.macvlan CNI plugin JSON configuration object using static IP and MAC address
[source,yaml]
----
diff --git a/modules/nw-multus-bridge-object.adoc b/modules/nw-multus-bridge-object.adoc
index d4f512742a..4d264bbf28 100644
--- a/modules/nw-multus-bridge-object.adoc
+++ b/modules/nw-multus-bridge-object.adoc
@@ -6,9 +6,9 @@
= Configuration for a bridge additional network
The following object describes the configuration parameters for the bridge CNI
-plug-in:
+plugin:
-.Bridge CNI plug-in JSON configuration object
+.Bridge CNI plugin JSON configuration object
[cols=".^2,.^2,.^6",options="header"]
|====
|Field|Type|Description
@@ -31,7 +31,7 @@ plug-in:
|`ipam`
|`object`
-|The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition.
+|The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
|`ipMasq`
|`boolean`
diff --git a/modules/nw-multus-host-device-object.adoc b/modules/nw-multus-host-device-object.adoc
index e6eb19ea44..3d7c1d8969 100644
--- a/modules/nw-multus-host-device-object.adoc
+++ b/modules/nw-multus-host-device-object.adoc
@@ -11,10 +11,10 @@ Specify your network device by setting only one of the
following parameters: `device`, `hwaddr`, `kernelpath`, or `pciBusID`.
====
-The following object describes the configuration parameters for the host-device CNI plug-in:
+The following object describes the configuration parameters for the host-device CNI plugin:
// containernetworking/plugins/.../host-device.go#L50
-.Host device CNI plug-in JSON configuration object
+.Host device CNI plugin JSON configuration object
[cols=".^2,.^2,.^6",options="header"]
|====
|Field|Type|Description
@@ -29,7 +29,7 @@ The following object describes the configuration parameters for the host-device
|`type`
|`string`
-|The name of the CNI plug-in to configure: `host-device`.
+|The name of the CNI plugin to configure: `host-device`.
|`device`
|`string`
diff --git a/modules/nw-multus-ipam-object.adoc b/modules/nw-multus-ipam-object.adoc
index 52f9c41ab6..b6d21ba540 100644
--- a/modules/nw-multus-ipam-object.adoc
+++ b/modules/nw-multus-ipam-object.adoc
@@ -6,7 +6,7 @@
// Because the Cluster Network Operator abstracts the configuration for
// Macvlan, including IPAM configuration, this must be provided as YAML
-// for the Macvlan CNI plug-in only. In the future other Multus plug-ins
+// for the Macvlan CNI plugin only. In the future other Multus plugins
// might be managed the same way by the CNO.
ifeval::["{context}" == "configuring-sriov-net-attach"]
@@ -17,13 +17,13 @@ endif::[]
[id="nw-multus-ipam-object_{context}"]
= Configuration of IP address assignment for an additional network
-The IP address management (IPAM) Container Network Interface (CNI) plug-in provides IP addresses for other CNI plug-ins.
+The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins.
You can use the following IP address assignment types:
- Static assignment.
- Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network.
-- Dynamic assignment through the Whereabouts IPAM CNI plug-in.
+- Dynamic assignment through the Whereabouts IPAM CNI plugin.
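+
+For example, a minimal Whereabouts `ipam` stanza might look like the following sketch (range illustrative):
+
+[source,yaml]
+----
+ipam:
+  type: whereabouts
+  range: 192.0.2.192/27 # IP addresses are assigned from this range
+----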
////
IMPORTANT: If you set the `type` parameter to the `DHCP` value, you cannot set
@@ -192,7 +192,7 @@ spec:
[id="nw-multus-whereabouts_{context}"]
== Dynamic IP address assignment configuration with Whereabouts
-The Whereabouts CNI plug-in allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.
+The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server.
The following table describes the configuration for dynamic IP address assignment with Whereabouts:
diff --git a/modules/nw-multus-ipvlan-object.adoc b/modules/nw-multus-ipvlan-object.adoc
index 84a5fe088c..0f56400090 100644
--- a/modules/nw-multus-ipvlan-object.adoc
+++ b/modules/nw-multus-ipvlan-object.adoc
@@ -9,9 +9,9 @@
= Configuration for an IPVLAN additional network
The following object describes the configuration parameters for the IPVLAN CNI
-plug-in:
+plugin:
-.IPVLAN CNI plug-in JSON configuration object
+.IPVLAN CNI plugin JSON configuration object
[cols=".^2,.^2,.^6",options="header"]
|====
|Field|Type|Description
@@ -26,7 +26,7 @@ plug-in:
|`type`
|`string`
-|The name of the CNI plug-in to configure: `ipvlan`.
+|The name of the CNI plugin to configure: `ipvlan`.
|`mode`
|`string`
@@ -42,7 +42,7 @@ plug-in:
|`ipam`
|`object`
-|The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition.
+|The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
Do not specify `dhcp`. Configuring IPVLAN with DHCP is not supported because IPVLAN interfaces share the MAC address with the host interface.
diff --git a/modules/nw-multus-macvlan-object.adoc b/modules/nw-multus-macvlan-object.adoc
index 44668a6523..edb830b0b0 100644
--- a/modules/nw-multus-macvlan-object.adoc
+++ b/modules/nw-multus-macvlan-object.adoc
@@ -6,9 +6,9 @@
= Configuration for a MACVLAN additional network
The following object describes the configuration parameters for the macvlan CNI
-plug-in:
+plugin:
-.MACVLAN CNI plug-in JSON configuration object
+.MACVLAN CNI plugin JSON configuration object
[cols=".^2,.^2,.^6",options="header"]
|====
|Field|Type|Description
@@ -23,7 +23,7 @@ plug-in:
|`type`
|`string`
-|The name of the CNI plug-in to configure: `macvlan`.
+|The name of the CNI plugin to configure: `macvlan`.
|`mode`
|`string`
@@ -39,7 +39,7 @@ plug-in:
|`ipam`
|`object`
-|The configuration object for the IPAM CNI plug-in. The plug-in manages IP address assignment for the attachment definition.
+|The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition.
|====
diff --git a/modules/nw-network-config.adoc b/modules/nw-network-config.adoc
index 5073bd0d10..39e913bed2 100644
--- a/modules/nw-network-config.adoc
+++ b/modules/nw-network-config.adoc
@@ -39,4 +39,4 @@ The CIDR range `172.17.0.0/16` is reserved by libVirt. You cannot use this range
Phase 2:: After creating the manifest files by running `openshift-install create manifests`, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
-You cannot override the values specified in phase 1 in the `install-config.yaml` file during phase 2. However, you can further customize the network plug-in during phase 2.
+You cannot override the values specified in phase 1 in the `install-config.yaml` file during phase 2. However, you can further customize the network plugin during phase 2.
diff --git a/modules/nw-networkpolicy-about.adoc b/modules/nw-networkpolicy-about.adoc
index 44b6f9c3ec..0d7b88488a 100644
--- a/modules/nw-networkpolicy-about.adoc
+++ b/modules/nw-networkpolicy-about.adoc
@@ -7,7 +7,7 @@
[id="nw-networkpolicy-about_{context}"]
= About network policy
-In a cluster using a network plug-in that supports Kubernetes network policy, network isolation is controlled entirely by `NetworkPolicy` objects.
+In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by `NetworkPolicy` objects.
In {product-title} {product-version}, OpenShift SDN supports using network policy in its default network isolation mode.
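+
+For example, a minimal policy that selects all pods in its namespace and allows no ingress traffic:
+
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: deny-by-default
+spec:
+  podSelector: {} # selects all pods in the namespace
+  ingress: []     # no rules, so all ingress traffic is denied
+----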
[WARNING]
diff --git a/modules/nw-networkpolicy-allow-application-all-namespaces.adoc b/modules/nw-networkpolicy-allow-application-all-namespaces.adoc
index 1c61eb99c7..0e8181e186 100644
--- a/modules/nw-networkpolicy-allow-application-all-namespaces.adoc
+++ b/modules/nw-networkpolicy-allow-application-all-namespaces.adoc
@@ -26,7 +26,7 @@ Follow this procedure to configure a policy that allows traffic from all pods in
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-allow-application-particular-namespace.adoc b/modules/nw-networkpolicy-allow-application-particular-namespace.adoc
index 1f9f3ad476..72ca4ed291 100644
--- a/modules/nw-networkpolicy-allow-application-particular-namespace.adoc
+++ b/modules/nw-networkpolicy-allow-application-particular-namespace.adoc
@@ -28,7 +28,7 @@ Follow this procedure to configure a policy that allows traffic to a pod with th
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-allow-external-clients.adoc b/modules/nw-networkpolicy-allow-external-clients.adoc
index 5830781957..1b69f0d6ea 100644
--- a/modules/nw-networkpolicy-allow-external-clients.adoc
+++ b/modules/nw-networkpolicy-allow-external-clients.adoc
@@ -27,7 +27,7 @@ Follow this procedure to configure a policy that allows external service from th
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-audit-concept.adoc b/modules/nw-networkpolicy-audit-concept.adoc
index 49fcc84b36..5b133c2627 100644
--- a/modules/nw-networkpolicy-audit-concept.adoc
+++ b/modules/nw-networkpolicy-audit-concept.adoc
@@ -1,7 +1,7 @@
[id="nw-networkpolicy-audit-concept_{context}"]
= Audit logging
-The OVN-Kubernetes network plug-in uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events.
+The OVN-Kubernetes network plugin uses Open Virtual Network (OVN) ACLs to manage egress firewalls and network policies. Audit logging exposes allow and deny ACL events.
You can configure the destination for audit logs, such as a syslog server or a UNIX domain socket.
Regardless of any additional configuration, an audit log is always saved to `/var/log/ovn/acl-audit-log.log` on each OVN-Kubernetes pod in the cluster.
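+One way to tune audit logging, assuming the `policyAuditConfig` fields that are available in your cluster version, is a `Network` custom resource fragment similar to this sketch:
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+  defaultNetwork:
+    ovnKubernetesConfig:
+      policyAuditConfig:
+        destination: "null" <1>
+        maxFileSize: 50
+        rateLimit: 20
+        syslogFacility: local0
+----
+<1> Replace with a syslog or UNIX domain socket destination to forward logs off the node.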
diff --git a/modules/nw-networkpolicy-create-cli.adoc b/modules/nw-networkpolicy-create-cli.adoc
index 9810654607..91202ad4ad 100644
--- a/modules/nw-networkpolicy-create-cli.adoc
+++ b/modules/nw-networkpolicy-create-cli.adoc
@@ -30,7 +30,7 @@ endif::multi[]
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-delete-cli.adoc b/modules/nw-networkpolicy-delete-cli.adoc
index 374d83eeac..c082bf78a5 100644
--- a/modules/nw-networkpolicy-delete-cli.adoc
+++ b/modules/nw-networkpolicy-delete-cli.adoc
@@ -29,7 +29,7 @@ endif::multi[]
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-deny-all-allowed.adoc b/modules/nw-networkpolicy-deny-all-allowed.adoc
index 8cb82e89a1..6dee79f875 100644
--- a/modules/nw-networkpolicy-deny-all-allowed.adoc
+++ b/modules/nw-networkpolicy-deny-all-allowed.adoc
@@ -25,7 +25,7 @@ If you log in with a user with the `cluster-admin` role, then you can create a n
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-edit.adoc b/modules/nw-networkpolicy-edit.adoc
index c816103462..85d3c0397a 100644
--- a/modules/nw-networkpolicy-edit.adoc
+++ b/modules/nw-networkpolicy-edit.adoc
@@ -28,7 +28,7 @@ endif::multi[]
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-multitenant-isolation.adoc b/modules/nw-networkpolicy-multitenant-isolation.adoc
index 87bc27d284..a32e080e93 100644
--- a/modules/nw-networkpolicy-multitenant-isolation.adoc
+++ b/modules/nw-networkpolicy-multitenant-isolation.adoc
@@ -16,7 +16,7 @@ project namespaces.
.Prerequisites
-* Your cluster uses a network plug-in that supports `NetworkPolicy` objects, such as
+* Your cluster uses a network plugin that supports `NetworkPolicy` objects, such as
ifndef::ovn[]
the OpenShift SDN network provider with `mode: NetworkPolicy` set.
endif::ovn[]
diff --git a/modules/nw-networkpolicy-optimize.adoc b/modules/nw-networkpolicy-optimize.adoc
index 6d3fbcef74..1d707fab5f 100644
--- a/modules/nw-networkpolicy-optimize.adoc
+++ b/modules/nw-networkpolicy-optimize.adoc
@@ -9,7 +9,7 @@ Use a network policy to isolate pods that are differentiated from one another by
[NOTE]
====
-The guidelines for efficient use of network policy rules applies to only the OpenShift SDN network plug-in.
+The guidelines for efficient use of network policy rules apply only to the OpenShift SDN network plugin.
====
It is inefficient to apply `NetworkPolicy` objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a `podSelector`.
diff --git a/modules/nw-operator-cr.adoc b/modules/nw-operator-cr.adoc
index 3e04ad9107..da8c363eb9 100644
--- a/modules/nw-operator-cr.adoc
+++ b/modules/nw-operator-cr.adoc
@@ -48,7 +48,7 @@ The CNO configuration inherits the following fields during cluster installation
`clusterNetwork`:: IP address pools from which pod IP addresses are allocated.
`serviceNetwork`:: IP address pool for services.
-`defaultNetwork.type`:: Cluster network plug-in, such as OpenShift SDN or OVN-Kubernetes.
+`defaultNetwork.type`:: Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
// For the post installation assembly, no further content is provided.
ifdef::post-install-network-configuration,operator[]
@@ -58,7 +58,7 @@ After cluster installation, you cannot modify the fields listed in the previous
====
endif::[]
ifndef::post-install-network-configuration[]
-You can specify the cluster network plug-in configuration for your cluster by setting the fields for the `defaultNetwork` object in the CNO object named `cluster`.
+You can specify the cluster network plugin configuration for your cluster by setting the fields for the `defaultNetwork` object in the CNO object named `cluster`.
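+For example, you can review the current values of the CNO object named `cluster` with a command similar to:
+[source,terminal]
+----
+$ oc get networks.operator.openshift.io cluster -o yaml
+----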
[id="nw-operator-cr-cno-object_{context}"]
== Cluster Network Operator configuration object
@@ -98,7 +98,7 @@ endif::operator[]
|`spec.serviceNetwork`
|`array`
-|A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plug-ins support only a single IP address block for the service network. For example:
+|A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:
[source,yaml]
----
@@ -116,13 +116,13 @@ endif::operator[]
|`spec.defaultNetwork`
|`object`
-|Configures the network plug-in for the cluster network.
+|Configures the network plugin for the cluster network.
|`spec.kubeProxyConfig`
|`object`
|
The fields for this object specify the kube-proxy configuration.
-If you are using the OVN-Kubernetes cluster network plug-in, the kube-proxy configuration has no effect.
+If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
|====
@@ -139,27 +139,27 @@ The values for the `defaultNetwork` object are defined in the following table:
|`type`
|`string`
-|Either `OpenShiftSDN` or `OVNKubernetes`. The {openshift-networking} network plug-in is selected during installation. This value cannot be changed after cluster installation.
+|Either `OpenShiftSDN` or `OVNKubernetes`. The {openshift-networking} network plugin is selected during installation. This value cannot be changed after cluster installation.
[NOTE]
====
-{product-title} uses the OVN-Kubernetes network plug-in by default.
+{product-title} uses the OVN-Kubernetes network plugin by default.
====
|`openshiftSDNConfig`
|`object`
-|This object is only valid for the OpenShift SDN network plug-in.
+|This object is only valid for the OpenShift SDN network plugin.
|`ovnKubernetesConfig`
|`object`
-|This object is only valid for the OVN-Kubernetes network plug-in.
+|This object is only valid for the OVN-Kubernetes network plugin.
|====
[discrete]
[id="nw-operator-configuration-parameters-for-openshift-sdn_{context}"]
-==== Configuration for the OpenShift SDN network plug-in
+==== Configuration for the OpenShift SDN network plugin
-The following table describes the configuration fields for the OpenShift SDN network plug-in:
+The following table describes the configuration fields for the OpenShift SDN network plugin:
.`openshiftSDNConfig` object
[cols=".^2,.^2,.^6a",options="header"]
@@ -213,7 +213,7 @@ endif::operator[]
ifdef::operator[]
[NOTE]
====
-You can only change the configuration for your cluster network plug-in during cluster installation.
+You can only change the configuration for your cluster network plugin during cluster installation.
====
endif::operator[]
@@ -230,9 +230,9 @@ defaultNetwork:
[discrete]
[id="nw-operator-configuration-parameters-for-ovn-sdn_{context}"]
-==== Configuration for the OVN-Kubernetes network plug-in
+==== Configuration for the OVN-Kubernetes network plugin
-The following table describes the configuration fields for the OVN-Kubernetes network plug-in:
+The following table describes the configuration fields for the OVN-Kubernetes network plugin:
.`ovnKubernetesConfig` object
[cols=".^2,.^2,.^6a",options="header"]
@@ -308,7 +308,7 @@ This field cannot be changed after installation.
ifdef::ibm-cloud[]
[NOTE]
====
-IPsec for the OVN-Kubernetes network plug-in is not supported when installing a cluster on IBM Cloud.
+IPsec for the OVN-Kubernetes network plugin is not supported when installing a cluster on IBM Cloud.
====
endif::ibm-cloud[]
@@ -364,7 +364,7 @@ If you set this field to `true`, you do not receive the performance benefits of
ifdef::operator[]
[NOTE]
====
-You can only change the configuration for your cluster network plug-in during cluster installation, except for the `gatewayConfig` field that can be changed at runtime as a post-installation activity.
+You can only change the configuration for your cluster network plugin during cluster installation, except for the `gatewayConfig` field that can be changed at runtime as a post-installation activity.
====
endif::operator[]
diff --git a/modules/nw-ovn-kuberentes-limitations.adoc b/modules/nw-ovn-kuberentes-limitations.adoc
index feaabfde81..eefb88c72f 100644
--- a/modules/nw-ovn-kuberentes-limitations.adoc
+++ b/modules/nw-ovn-kuberentes-limitations.adoc
@@ -5,7 +5,7 @@
[id="nw-ovn-kubernetes-limitations_{context}"]
= OVN-Kubernetes limitations
-The OVN-Kubernetes network plug-in has the following limitations:
+The OVN-Kubernetes network plugin has the following limitations:
* The `sessionAffinityConfig.clientIP.timeoutSeconds` service setting has no effect in an OpenShift OVN environment, but it does in an OpenShift SDN environment. This incompatibility can make it difficult for users to migrate from OpenShift SDN to OVN.
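+For context, this setting is part of the standard Kubernetes `Service` API, as in the following sketch:
+[source,yaml]
+----
+apiVersion: v1
+kind: Service
+metadata:
+  name: example
+spec:
+  selector:
+    app: example
+  ports:
+  - port: 80
+  sessionAffinity: ClientIP
+  sessionAffinityConfig:
+    clientIP:
+      timeoutSeconds: 10800
+----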
diff --git a/modules/nw-ovn-kubernetes-features.adoc b/modules/nw-ovn-kubernetes-features.adoc
index 83fc04ecbc..4bbb90536f 100644
--- a/modules/nw-ovn-kubernetes-features.adoc
+++ b/modules/nw-ovn-kubernetes-features.adoc
@@ -5,7 +5,7 @@
[id="nw-ovn-kubernetes-features_{context}"]
= OVN-Kubernetes features
-The OVN-Kubernetes network plug-in implements the following features:
+The OVN-Kubernetes network plugin implements the following features:
// OVN (Open Virtual Network) is consistent with upstream usage.
diff --git a/modules/nw-ovn-kubernetes-matrix.adoc b/modules/nw-ovn-kubernetes-matrix.adoc
index f3ace4706b..22acdb3d29 100644
--- a/modules/nw-ovn-kubernetes-matrix.adoc
+++ b/modules/nw-ovn-kubernetes-matrix.adoc
@@ -4,9 +4,9 @@
:_content-type: REFERENCE
[id="nw-ovn-kubernetes-matrix_{context}"]
-= Supported network plug-in feature matrix
+= Supported network plugin feature matrix
-{openshift-networking} offers two options for the network plug-in, OpenShift SDN and OVN-Kubernetes, for the network plug-in. The following table summarizes the current feature support for both network plug-ins:
+{openshift-networking} offers two network plugin options, OpenShift SDN and OVN-Kubernetes. The following table summarizes the current feature support for both network plugins:
.Default CNI network plugin feature comparison
[cols="50%,25%,25%",options="header"]
diff --git a/modules/nw-ovn-kubernetes-metrics.adoc b/modules/nw-ovn-kubernetes-metrics.adoc
index 4da3e91746..50555328ef 100644
--- a/modules/nw-ovn-kubernetes-metrics.adoc
+++ b/modules/nw-ovn-kubernetes-metrics.adoc
@@ -5,7 +5,7 @@
[id="nw-ovn-kubernetes-metrics_{context}"]
= Exposed metrics for OVN-Kubernetes
-The OVN-Kubernetes network plug-in exposes certain metrics for use by the Prometheus-based {product-title} cluster monitoring stack.
+The OVN-Kubernetes network plugin exposes certain metrics for use by the Prometheus-based {product-title} cluster monitoring stack.
// openshift/ovn-kubernetes => go-controller/pkg/metrics/master.go
diff --git a/modules/nw-ovn-kubernetes-migration-about.adoc b/modules/nw-ovn-kubernetes-migration-about.adoc
index 0264029418..ae2e65b9de 100644
--- a/modules/nw-ovn-kubernetes-migration-about.adoc
+++ b/modules/nw-ovn-kubernetes-migration-about.adoc
@@ -3,11 +3,11 @@
// * networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc
[id="nw-ovn-kubernetes-migration-about_{context}"]
-= Migration to the OVN-Kubernetes network plug-in
+= Migration to the OVN-Kubernetes network plugin
-Migrating to the OVN-Kubernetes network plug-in is a manual process that includes some downtime during which your cluster is unreachable. Although a rollback procedure is provided, the migration is intended to be a one-way process.
+Migrating to the OVN-Kubernetes network plugin is a manual process that includes some downtime during which your cluster is unreachable. Although a rollback procedure is provided, the migration is intended to be a one-way process.
-A migration to the OVN-Kubernetes network plug-in is supported on the following platforms:
+A migration to the OVN-Kubernetes network plugin is supported on the following platforms:
* Bare metal hardware
* Amazon Web Services (AWS)
@@ -19,15 +19,15 @@ A migration to the OVN-Kubernetes network plug-in is supported on the following
* VMware vSphere
[id="considerations-migrating-ovn-kubernetes-network-provider_{context}"]
-== Considerations for migrating to the OVN-Kubernetes network plug-in
+== Considerations for migrating to the OVN-Kubernetes network plugin
-If you have more than 150 nodes in your {product-title} cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plug-in.
+If you have more than 150 nodes in your {product-title} cluster, then open a support case for consultation on your migration to the OVN-Kubernetes network plugin.
The subnets assigned to nodes and the IP addresses assigned to individual pods are not preserved during the migration.
-While the OVN-Kubernetes network plug-in implements many of the capabilities present in the OpenShift SDN network plug-in, the configuration is not the same.
+While the OVN-Kubernetes network plugin implements many of the capabilities present in the OpenShift SDN network plugin, the configuration is not the same.
-* If your cluster uses any of the following OpenShift SDN network plug-in capabilities, you must manually configure the same capability in the OVN-Kubernetes network plug-in:
+* If your cluster uses any of the following OpenShift SDN network plugin capabilities, you must manually configure the same capability in the OVN-Kubernetes network plugin:
+
--
* Namespace isolation
@@ -36,7 +36,7 @@ While the OVN-Kubernetes network plug-in implements many of the capabilities pre
* If your cluster or surrounding network uses any part of the `100.64.0.0/16` address range, you must choose another unused IP range by specifying the `v4InternalSubnet` spec under the `spec.defaultNetwork.ovnKubernetesConfig` object definition. OVN-Kubernetes uses the IP range `100.64.0.0/16` internally by default.
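+For example, selecting an alternative internal subnet might look like the following fragment of the `Network` operator configuration, where the range shown is a placeholder:
+[source,yaml]
+----
+spec:
+  defaultNetwork:
+    ovnKubernetesConfig:
+      v4InternalSubnet: 100.68.0.0/16
+----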
-The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN network plug-ins.
+The following sections highlight the differences in configuration between the aforementioned capabilities in OVN-Kubernetes and OpenShift SDN network plugins.
[discrete]
[id="namespace-isolation_{context}"]
@@ -46,7 +46,7 @@ OVN-Kubernetes supports only the network policy isolation mode.
[IMPORTANT]
====
-If your cluster uses OpenShift SDN configured in either the multitenant or subnet isolation modes, you cannot migrate to the OVN-Kubernetes network plug-in.
+If your cluster uses OpenShift SDN configured in either the multitenant or subnet isolation modes, you cannot migrate to the OVN-Kubernetes network plugin.
====
[discrete]
@@ -163,7 +163,7 @@ CNO:: Performs the following actions:
--
* Destroys the OpenShift SDN control plane pods.
* Deploys the OVN-Kubernetes control plane pods.
-* Updates the Multus objects to reflect the new network plug-in.
+* Updates the Multus objects to reflect the new network plugin.
--
|
@@ -196,7 +196,7 @@ CNO:: Performs the following actions:
--
* Destroys the OVN-Kubernetes control plane pods.
* Deploys the OpenShift SDN control plane pods.
-* Updates the Multus objects to reflect the new network plug-in.
+* Updates the Multus objects to reflect the new network plugin.
--
|
diff --git a/modules/nw-ovn-kubernetes-migration.adoc b/modules/nw-ovn-kubernetes-migration.adoc
index 10aa76e165..173e58140e 100644
--- a/modules/nw-ovn-kubernetes-migration.adoc
+++ b/modules/nw-ovn-kubernetes-migration.adoc
@@ -6,7 +6,7 @@
[id="nw-ovn-kubernetes-migration_{context}"]
= Migrating to the OVN-Kubernetes network plugin
-As a cluster administrator, you can change the network plug-in for your cluster to OVN-Kubernetes.
+As a cluster administrator, you can change the network plugin for your cluster to OVN-Kubernetes.
During the migration, you must reboot every node in your cluster.
[IMPORTANT]
@@ -17,7 +17,7 @@ Perform the migration only when an interruption in service is acceptable.
.Prerequisites
-* A cluster configured with the OpenShift SDN CNI network plug-in in the network policy isolation mode.
+* A cluster configured with the OpenShift SDN CNI network plugin in the network policy isolation mode.
* Install the OpenShift CLI (`oc`).
* Access to the cluster as a user with the `cluster-admin` role.
* A recent backup of the etcd database is available.
@@ -224,7 +224,7 @@ where `pod` is the name of a machine config daemon pod.
... Resolve any errors in the logs shown by the output from the previous command.
-. To start the migration, configure the OVN-Kubernetes network plug-in by using one of the following commands:
+. To start the migration, configure the OVN-Kubernetes network plugin by using one of the following commands:
** To specify the network provider without changing the cluster network IP address block, enter the following command:
+
diff --git a/modules/nw-ovn-kubernetes-rollback.adoc b/modules/nw-ovn-kubernetes-rollback.adoc
index 78faa0eff2..7c27c1494e 100644
--- a/modules/nw-ovn-kubernetes-rollback.adoc
+++ b/modules/nw-ovn-kubernetes-rollback.adoc
@@ -6,7 +6,7 @@
[id="nw-ovn-kubernetes-rollback_{context}"]
= Rolling back the network plugin to OpenShift SDN
-As a cluster administrator, you can rollback your cluster to the OpenShift SDN Container Network Interface (CNI) network plug-in.
+As a cluster administrator, you can roll back your cluster to the OpenShift SDN Container Network Interface (CNI) network plugin.
During the rollback, you must reboot every node in your cluster.
[IMPORTANT]
@@ -40,7 +40,7 @@ $ oc patch MachineConfigPool worker --type='merge' --patch \
'{ "spec":{ "paused" :true } }'
----
-. To start the migration, set the network plug-in back to OpenShift SDN by entering the following commands:
+. To start the migration, set the network plugin back to OpenShift SDN by entering the following commands:
+
[source,terminal]
----
diff --git a/modules/nw-sriov-add-pod-runtimeconfig.adoc b/modules/nw-sriov-add-pod-runtimeconfig.adoc
index f44f960dca..e470d09165 100644
--- a/modules/nw-sriov-add-pod-runtimeconfig.adoc
+++ b/modules/nw-sriov-add-pod-runtimeconfig.adoc
@@ -36,7 +36,7 @@ spec:
<1> Replace `` with a name for the object. The SR-IOV Network Operator creates a `NetworkAttachmentDefinition` object with the same name.
<2> Specify the namespace where the SR-IOV Network Operator is installed.
<3> Replace `` with the namespace where the `NetworkAttachmentDefinition` object is created.
-<4> Specify static type for the ipam CNI plug-in as a YAML block scalar.
+<4> Specify static type for the ipam CNI plugin as a YAML block scalar.
<5> Specify `mac` and `ips` `capabilities` to `true`.
<6> Replace `` with the value for the `spec.resourceName` parameter from the `SriovNetworkNodePolicy` object that defines the SR-IOV hardware for this additional network.
diff --git a/modules/nw-sriov-cfg-bond-interface-with-virtual-functions.adoc b/modules/nw-sriov-cfg-bond-interface-with-virtual-functions.adoc
index c1f804fee5..a994b2682e 100644
--- a/modules/nw-sriov-cfg-bond-interface-with-virtual-functions.adoc
+++ b/modules/nw-sriov-cfg-bond-interface-with-virtual-functions.adoc
@@ -9,7 +9,7 @@ Bonding enables multiple network interfaces to be aggregated into a single logic
Bond-CNI can be created by using Single Root I/O Virtualization (SR-IOV) virtual functions and placing them in the container network namespace.
-{product-title} only supports Bond-CNI using SR-IOV virtual functions. The SR-IOV Network Operator provides the SR-IOV CNI plug-in needed to manage the virtual functions. Other CNIs or types of interfaces are not supported.
+{product-title} only supports Bond-CNI using SR-IOV virtual functions. The SR-IOV Network Operator provides the SR-IOV CNI plugin needed to manage the virtual functions. Other CNIs or types of interfaces are not supported.
.Prerequisites
diff --git a/modules/nw-sriov-configuring-device.adoc b/modules/nw-sriov-configuring-device.adoc
index 9114a26dfe..1d88803c24 100644
--- a/modules/nw-sriov-configuring-device.adoc
+++ b/modules/nw-sriov-configuring-device.adoc
@@ -62,10 +62,10 @@ spec:
----
<1> Specify a name for the CR object.
<2> Specify the namespace where the SR-IOV Operator is installed.
-<3> Specify the resource name of the SR-IOV device plug-in. You can create multiple `SriovNetworkNodePolicy` objects for a resource name.
+<3> Specify the resource name of the SR-IOV device plugin. You can create multiple `SriovNetworkNodePolicy` objects for a resource name.
<4> Specify the node selector to select which nodes are configured.
Only SR-IOV network devices on selected nodes are configured. The SR-IOV
-Container Network Interface (CNI) plug-in and device plug-in are deployed only on selected nodes.
+Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
<5> Optional: Specify an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`.
<6> Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
<7> Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
diff --git a/modules/nw-sriov-dpdk-example-intel.adoc b/modules/nw-sriov-dpdk-example-intel.adoc
index 10ee6f6fe1..d5702e22b1 100644
--- a/modules/nw-sriov-dpdk-example-intel.adoc
+++ b/modules/nw-sriov-dpdk-example-intel.adoc
@@ -72,7 +72,7 @@ spec:
vlan:
resourceName: intelnics
----
-<1> Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<1> Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
+
[NOTE]
=====
diff --git a/modules/nw-sriov-dpdk-example-mellanox.adoc b/modules/nw-sriov-dpdk-example-mellanox.adoc
index 2047381482..5ef863bbe4 100644
--- a/modules/nw-sriov-dpdk-example-mellanox.adoc
+++ b/modules/nw-sriov-dpdk-example-mellanox.adoc
@@ -77,7 +77,7 @@ spec:
vlan:
resourceName: mlxnics
----
-<1> Specify a configuration object for the IP Address Management (IPAM) Container Network Interface (CNI) plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<1> Specify a configuration object for the IP Address Management (IPAM) Container Network Interface (CNI) plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
+
[NOTE]
=====
diff --git a/modules/nw-sriov-ibnetwork-object.adoc b/modules/nw-sriov-ibnetwork-object.adoc
index 1ea652ef10..14127e07c5 100644
--- a/modules/nw-sriov-ibnetwork-object.adoc
+++ b/modules/nw-sriov-ibnetwork-object.adoc
@@ -32,7 +32,7 @@ spec:
<4> The target namespace for the `SriovIBNetwork` object. Only pods in the target namespace can attach to the network device.
-<5> Optional: A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<5> Optional: A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
<6> Optional: The link state of the virtual function (VF). Allowed values are `enable`, `disable`, and `auto`.
diff --git a/modules/nw-sriov-interface-level-sysctl-basic-node-policy.adoc b/modules/nw-sriov-interface-level-sysctl-basic-node-policy.adoc
index a5566bb723..a93feb2806 100644
--- a/modules/nw-sriov-interface-level-sysctl-basic-node-policy.adoc
+++ b/modules/nw-sriov-interface-level-sysctl-basic-node-policy.adoc
@@ -42,8 +42,8 @@ spec:
+
<1> The name for the custom resource object.
<2> The namespace where the SR-IOV Network Operator is installed.
-<3> The resource name of the SR-IOV network device plug-in. You can create multiple SR-IOV network node policies for a resource name.
-<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plug-in and device plug-in are deployed on selected nodes only.
+<3> The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name.
+<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only.
<5> Optional: The priority is an integer value between `0` and `99`. A smaller value receives higher priority. For example, a priority of `10` is a higher priority than `99`. The default value is `99`.
<6> The number of the virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
<7> The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally.
diff --git a/modules/nw-sriov-interface-level-sysctl-bonded-node-policy.adoc b/modules/nw-sriov-interface-level-sysctl-bonded-node-policy.adoc
index c80a5b3333..a2609fe3ea 100644
--- a/modules/nw-sriov-interface-level-sysctl-bonded-node-policy.adoc
+++ b/modules/nw-sriov-interface-level-sysctl-bonded-node-policy.adoc
@@ -42,8 +42,8 @@ spec:
+
<1> The name for the custom resource object.
<2> The namespace where the SR-IOV Network Operator is installed.
-<3> The resource name of the SR-IOV network device plug-in. You can create multiple SR-IOV network node policies for a resource name.
-<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plug-in and device plug-in are deployed on selected nodes only.
+<3> The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name.
+<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only.
<5> Optional: The priority is an integer value between `0` and `99`. A smaller value receives higher priority. For example, a priority of `10` is a higher priority than `99`. The default value is `99`.
<6> The number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `128`.
<7> The NIC selector identifies the device for the Operator to configure. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally.
diff --git a/modules/nw-sriov-network-attachment.adoc b/modules/nw-sriov-network-attachment.adoc
index 654befbe92..e54e7a2a42 100644
--- a/modules/nw-sriov-network-attachment.adoc
+++ b/modules/nw-sriov-network-attachment.adoc
@@ -163,7 +163,7 @@ You must enclose the value you specify in quotes or the CR is rejected by the SR
<12> Optional: Replace `` with the capabilities to configure for this network.
ifdef::ocp-sriov-net[]
You can specify `"{ "ips": true }"` to enable IP address support or `"{ "mac": true }"` to enable MAC address support.
-<13> A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<13> A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
endif::ocp-sriov-net[]
[start=2]
diff --git a/modules/nw-sriov-network-object.adoc b/modules/nw-sriov-network-object.adoc
index bfc184c4cc..95479a491d 100644
--- a/modules/nw-sriov-network-object.adoc
+++ b/modules/nw-sriov-network-object.adoc
@@ -61,7 +61,7 @@ You must enclose the value you specify in quotes or the object is rejected by th
====
+
ifdef::ocp-sriov-net[]
-<7> A configuration object for the IPAM CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<7> A configuration object for the IPAM CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
<8> Optional: The link state of the virtual function (VF). Allowed values are `enable`, `disable`, and `auto`.
<9> Optional: A maximum transmission rate, in Mbps, for the VF.
<10> Optional: A minimum transmission rate, in Mbps, for the VF. This value must be less than or equal to the maximum transmission rate.
diff --git a/modules/nw-sriov-networknodepolicy-object.adoc b/modules/nw-sriov-networknodepolicy-object.adoc
index d09acdc3ad..89d670417a 100644
--- a/modules/nw-sriov-networknodepolicy-object.adoc
+++ b/modules/nw-sriov-networknodepolicy-object.adoc
@@ -40,9 +40,9 @@ spec:
<2> The namespace where the SR-IOV Network Operator is installed.
-<3> The resource name of the SR-IOV network device plug-in. You can create multiple SR-IOV network node policies for a resource name.
+<3> The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name.
-<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plug-in and device plug-in are deployed on selected nodes only.
+<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only.
<5> Optional: The priority is an integer value between `0` and `99`. A smaller value receives higher priority. For example, a priority of `10` is a higher priority than `99`. The default value is `99`.
diff --git a/modules/nw-sriov-rdma-example-mellanox.adoc b/modules/nw-sriov-rdma-example-mellanox.adoc
index 16221d3295..0f4c149f5f 100644
--- a/modules/nw-sriov-rdma-example-mellanox.adoc
+++ b/modules/nw-sriov-rdma-example-mellanox.adoc
@@ -79,7 +79,7 @@ spec:
vlan:
resourceName: mlxnics
----
-<1> Specify a configuration object for the ipam CNI plug-in as a YAML block scalar. The plug-in manages IP address assignment for the attachment definition.
+<1> Specify a configuration object for the ipam CNI plugin as a YAML block scalar. The plugin manages IP address assignment for the attachment definition.
+
[NOTE]
=====
diff --git a/modules/oadp-configuring-velero-plugins.adoc b/modules/oadp-configuring-velero-plugins.adoc
index dc6320af24..0bbf58a055 100644
--- a/modules/oadp-configuring-velero-plugins.adoc
+++ b/modules/oadp-configuring-velero-plugins.adoc
@@ -4,31 +4,31 @@
:_content-type: CONCEPT
[id="oadp-configuring-velero-plugins_{context}"]
-= About OADP Velero plug-ins
+= About OADP Velero plugins
-You can configure two types of plug-ins when you install Velero:
+You can configure two types of plugins when you install Velero:
-* Default cloud provider plug-ins
-* Custom plug-ins
+* Default cloud provider plugins
+* Custom plugins
-Both types of plug-in are optional, but most users configure at least one cloud provider plug-in.
+Both types of plugin are optional, but most users configure at least one cloud provider plugin.
-== Default Velero cloud provider plug-ins
+== Default Velero cloud provider plugins
-You can install any of the following default Velero cloud provider plug-ins when you configure the `oadp_v1alpha1_dpa.yaml` file during deployment:
+You can install any of the following default Velero cloud provider plugins when you configure the `oadp_v1alpha1_dpa.yaml` file during deployment:
* `aws` (Amazon Web Services)
* `gcp` (Google Cloud Platform)
* `azure` (Microsoft Azure)
-* `openshift` (OpenShift Velero plug-in)
+* `openshift` (OpenShift Velero plugin)
* `csi` (Container Storage Interface)
* `kubevirt` (KubeVirt)
-You specify the desired default plug-ins in the `oadp_v1alpha1_dpa.yaml` file during deployment.
+You specify the desired default plugins in the `oadp_v1alpha1_dpa.yaml` file during deployment.
.Example file
-The following `.yaml` file installs the `openshift`, `aws`, `azure`, and `gcp` plug-ins:
+The following `.yaml` file installs the `openshift`, `aws`, `azure`, and `gcp` plugins:
[source,yaml]
----
@@ -46,15 +46,15 @@ The following `.yaml` file installs the `openshift`, `aws`, `azure`, and `gcp` p
- gcp
----
-== Custom Velero plug-ins
+== Custom Velero plugins
-You can install a custom Velero plug-in by specifying the plug-in `image` and `name` when you configure the `oadp_v1alpha1_dpa.yaml` file during deployment.
+You can install a custom Velero plugin by specifying the plugin `image` and `name` when you configure the `oadp_v1alpha1_dpa.yaml` file during deployment.
-You specify the desired custom plug-ins in the `oadp_v1alpha1_dpa.yaml` file during deployment.
+You specify the desired custom plugins in the `oadp_v1alpha1_dpa.yaml` file during deployment.
.Example file
-The following `.yaml` file installs the default `openshift`, `azure`, and `gcp` plug-ins and a custom plug-in that has the name `custom-plugin-example` and the image `quay.io/example-repo/custom-velero-plugin`:
+The following `.yaml` file installs the default `openshift`, `azure`, and `gcp` plugins and a custom plugin that has the name `custom-plugin-example` and the image `quay.io/example-repo/custom-velero-plugin`:
[source,yaml]
----
diff --git a/modules/oadp-creating-default-secret.adoc b/modules/oadp-creating-default-secret.adoc
index 7ebd99f60f..b50694a4de 100644
--- a/modules/oadp-creating-default-secret.adoc
+++ b/modules/oadp-creating-default-secret.adoc
@@ -16,7 +16,7 @@ ifdef::installing-oadp-aws,installing-oadp-azure,installing-oadp-gcp,installing-
The default name of the `Secret` is `{credentials}`.
endif::[]
ifdef::installing-oadp-ocs[]
-The default name of the `Secret` is `{credentials}`, unless your backup storage provider has a default plug-in, such as `aws`, `azure`, or `gcp`. In that case, the default name is specified in the provider-specific OADP installation procedure.
+The default name of the `Secret` is `{credentials}`, unless your backup storage provider has a default plugin, such as `aws`, `azure`, or `gcp`. In that case, the default name is specified in the provider-specific OADP installation procedure.
endif::[]
[NOTE]
diff --git a/modules/oadp-enabling-csi-dpa.adoc b/modules/oadp-enabling-csi-dpa.adoc
index e989984c4d..cd5a6746c0 100644
--- a/modules/oadp-enabling-csi-dpa.adoc
+++ b/modules/oadp-enabling-csi-dpa.adoc
@@ -34,5 +34,5 @@ spec:
featureFlags:
- EnableCSI <2>
----
-<1> Add the `csi` default plug-in.
+<1> Add the `csi` default plugin.
<2> Add the `EnableCSI` feature flag.
diff --git a/modules/oadp-installing-dpa.adoc b/modules/oadp-installing-dpa.adoc
index dcb65023db..deee3fe9a8 100644
--- a/modules/oadp-installing-dpa.adoc
+++ b/modules/oadp-installing-dpa.adoc
@@ -80,7 +80,7 @@ spec:
region: <8>
profile: "default"
----
-<1> The `openshift` plug-in is mandatory.
+<1> The `openshift` plugin is mandatory.
<2> Set to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has `Restic` pods running. You configure Restic for backups by adding `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<3> Specify the node selector to be supplied to Restic podSpec.
<4> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
@@ -132,7 +132,7 @@ spec:
name: default
provider: {provider}
----
-<1> The `openshift` plug-in is mandatory.
+<1> The `openshift` plugin is mandatory.
<2> Set to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has `Restic` pods running. You configure Restic for backups by adding `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<3> Specify the node selector to be supplied to Restic podSpec.
<4> Specify the Azure resource group.
@@ -180,7 +180,7 @@ spec:
project:
snapshotLocation: us-west1 <8>
----
-<1> The `openshift` plug-in is mandatory.
+<1> The `openshift` plugin is mandatory.
<2> Set to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has `Restic` pods running. You configure Restic for backups by adding `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<3> Specify the node selector to be supplied to Restic podSpec.
<4> If you do not specify this value, the default name, `{credentials}`, is used. If you specify a custom name, the custom name is used for the backup location.
@@ -225,7 +225,7 @@ spec:
bucket: <6>
prefix: <7>
----
-<1> The `openshift` plug-in is mandatory.
+<1> The `openshift` plugin is mandatory.
<2> Set to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has `Restic` pods running. You configure Restic for backups by adding `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<3> Specify the node selector to be supplied to Restic podSpec.
<4> Specify the URL of the S3 endpoint.
@@ -265,14 +265,14 @@ spec:
bucket: <9>
prefix: <10>
----
-<1> Optional: The `kubevirt` plug-in is used with {VirtProductName}.
-<2> Specify the default plug-in for the backup provider, for example, `gcp`, if appropriate.
-<3> Specify the `csi` default plug-in if you use CSI snapshots to back up PVs. The `csi` plug-in uses the link:https://{velero-domain}/docs/main/csi/[Velero CSI beta snapshot APIs]. You do not need to configure a snapshot location.
-<4> The `openshift` plug-in is mandatory.
+<1> Optional: The `kubevirt` plugin is used with {VirtProductName}.
+<2> Specify the default plugin for the backup provider, for example, `gcp`, if appropriate.
+<3> Specify the `csi` default plugin if you use CSI snapshots to back up PVs. The `csi` plugin uses the link:https://{velero-domain}/docs/main/csi/[Velero CSI beta snapshot APIs]. You do not need to configure a snapshot location.
+<4> The `openshift` plugin is mandatory.
<5> Set to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has `Restic` pods running. You configure Restic for backups by adding `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<6> Specify the node selector to be supplied to Restic podSpec.
<7> Specify the backup provider.
-<8> If you use a default plug-in for the backup provider, you must specify the correct default name for the `Secret`, for example, `cloud-credentials-gcp`. If you specify a custom name, the custom name is used for the backup location. If you do not specify a `Secret` name, the default name is used.
+<8> If you use a default plugin for the backup provider, you must specify the correct default name for the `Secret`, for example, `cloud-credentials-gcp`. If you specify a custom name, the custom name is used for the backup location. If you do not specify a `Secret` name, the default name is used.
<9> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<10> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
endif::[]
@@ -308,14 +308,14 @@ spec:
bucket: <9>
prefix: <10>
----
-<1> The `kubevirt` plug-in is mandatory for {VirtProductName}.
-<2> Specify the plug-in for the backup provider, for example, `gcp`, if it exists.
-<3> The `csi` plug-in is mandatory for backing up PVs with CSI snapshots. The `csi` plug-in uses the link:https://{velero-domain}/docs/main/csi/[Velero CSI beta snapshot APIs]. You do not need to configure a snapshot location.
-<4> The `openshift` plug-in is mandatory.
+<1> The `kubevirt` plugin is mandatory for {VirtProductName}.
+<2> Specify the plugin for the backup provider, for example, `gcp`, if it exists.
+<3> The `csi` plugin is mandatory for backing up PVs with CSI snapshots. The `csi` plugin uses the link:https://{velero-domain}/docs/main/csi/[Velero CSI beta snapshot APIs]. You do not need to configure a snapshot location.
+<4> The `openshift` plugin is mandatory.
<5> Set to `false` if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has `Restic` pods running. You configure Restic for backups by adding `spec.defaultVolumesToRestic: true` to the `Backup` CR.
<6> Specify the node selector to be supplied to Restic podSpec.
<7> Specify the backup provider.
-<8> If you use a default plug-in for the backup provider, you must specify the correct default name for the `Secret`, for example, `cloud-credentials-gcp`. If you specify a custom name, the custom name is used for the backup location. If you do not specify a `Secret` name, the default name is used.
+<8> If you use a default plugin for the backup provider, you must specify the correct default name for the `Secret`, for example, `cloud-credentials-gcp`. If you specify a custom name, the custom name is used for the backup location. If you do not specify a `Secret` name, the default name is used.
<9> Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
<10> Specify a prefix for Velero backups, for example, `velero`, if the bucket is used for multiple purposes.
endif::[]
diff --git a/modules/oadp-plugins.adoc b/modules/oadp-plugins.adoc
index c0ff8195f1..0fbed2e6f5 100644
--- a/modules/oadp-plugins.adoc
+++ b/modules/oadp-plugins.adoc
@@ -4,16 +4,16 @@
:_content-type: CONCEPT
[id="oadp-plugins_{context}"]
-= OADP plug-ins
+= OADP plugins
-The OpenShift API for Data Protection (OADP) provides default Velero plug-ins that are integrated with storage providers to support backup and snapshot operations. You can create link:https://{velero-domain}/docs/v{velero-version}/custom-plugins/[custom plug-ins] based on the Velero plug-ins.
+The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create link:https://{velero-domain}/docs/v{velero-version}/custom-plugins/[custom plugins] based on the Velero plugins.
-OADP also provides plug-ins for {product-title} resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.
+OADP also provides plugins for {product-title} resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.
[cols="3", options="header"]
-.OADP plug-ins
+.OADP plugins
|===
-|OADP plug-in |Function |Storage location
+|OADP plugin |Function |Storage location
.2+|`aws` |Backs up and restores Kubernetes objects. |AWS S3
|Backs up and restores volumes with snapshots. |AWS EBS
@@ -34,5 +34,5 @@ OADP also provides plug-ins for {product-title} resource backups, OpenShift Virt
--
1. Mandatory.
2. Virtual machine disks are backed up with CSI snapshots or Restic.
-3. The `csi` plug-in uses the link:https://velero.io/docs/main/csi/[Velero CSI beta snapshot API].
+3. The `csi` plugin uses the link:https://velero.io/docs/main/csi/[Velero CSI beta snapshot API].
--
diff --git a/modules/oadp-using-data-mover-for-csi-snapshots.adoc b/modules/oadp-using-data-mover-for-csi-snapshots.adoc
index f4612dff05..34a9be8907 100644
--- a/modules/oadp-using-data-mover-for-csi-snapshots.adoc
+++ b/modules/oadp-using-data-mover-for-csi-snapshots.adoc
@@ -58,7 +58,7 @@ stringData:
By default, the Operator looks for a secret named `dm-credential`. If you are using a different name, you need to specify the name through a Data Protection Application (DPA) CR using `dpa.spec.features.dataMover.credentialName`.
====
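+For example, pointing the Operator at a differently named secret is a DPA fragment like the following sketch, where the secret name is a placeholder:
+[source,yaml]
+----
+spec:
+  features:
+    dataMover:
+      credentialName: my-dm-credential
+----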
-. Create a DPA CR similar to the following example. The default plug-ins include CSI.
+. Create a DPA CR similar to the following example. The default plugins include CSI.
+
.Example Data Protection Application (DPA) CR
[source,yaml]
@@ -177,7 +177,7 @@ If the status of the `VolumeSnapshotBackup` CR becomes `Failed`, refer to the Ve
. You can restore a volume snapshot by performing the following steps:
-.. Delete the application namespace and the `volumeSnapshotContent` that was created by the Velero CSI plug-in.
+.. Delete the application namespace and the `volumeSnapshotContent` that was created by the Velero CSI plugin.
.. Create a `Restore` CR and set `restorePVs` to `true`.
+
diff --git a/modules/oc-compliance-fetching-raw-results.adoc b/modules/oc-compliance-fetching-raw-results.adoc
index 1be47f505d..7fa44f9e00 100644
--- a/modules/oc-compliance-fetching-raw-results.adoc
+++ b/modules/oc-compliance-fetching-raw-results.adoc
@@ -10,7 +10,7 @@ When a compliance scan finishes, the results of the individual checks are listed
.Procedure
-* Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the `oc-compliance` plug-in, you can use a single command:
+* Fetching the results from the PV with the Compliance Operator is a four-step process. However, with the `oc-compliance` plugin, you can use a single command:
+
[source,terminal]
----
diff --git a/modules/oc-compliance-installing.adoc b/modules/oc-compliance-installing.adoc
index c7b99ace73..589a1e69e1 100644
--- a/modules/oc-compliance-installing.adoc
+++ b/modules/oc-compliance-installing.adoc
@@ -4,7 +4,7 @@
:_content-type: PROCEDURE
[id="installing-oc-compliance_{context}"]
-= Installing the oc-compliance plug-in
+= Installing the oc-compliance plugin
.Procedure
diff --git a/modules/oc-compliance-rerunning-scans.adoc b/modules/oc-compliance-rerunning-scans.adoc
index 6d3246b170..96472183b5 100644
--- a/modules/oc-compliance-rerunning-scans.adoc
+++ b/modules/oc-compliance-rerunning-scans.adoc
@@ -10,7 +10,7 @@ Although it is possible to run scans as scheduled jobs, you must often re-run a
.Procedure
-* Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the `oc-compliance` plug-in you can rerun a scan with a single command. Enter the following command to rerun the scans for the `ScanSettingBinding` object named `my-binding`:
+* Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the `oc-compliance` plugin you can rerun a scan with a single command. Enter the following command to rerun the scans for the `ScanSettingBinding` object named `my-binding`:
+
[source,terminal]
----
diff --git a/modules/oc-mirror-about.adoc b/modules/oc-mirror-about.adoc
index a3644dc66c..9ac4659126 100644
--- a/modules/oc-mirror-about.adoc
+++ b/modules/oc-mirror-about.adoc
@@ -4,9 +4,9 @@
:_content-type: CONCEPT
[id="installation-oc-mirror-about_{context}"]
-= About the oc-mirror plug-in
+= About the oc-mirror plugin
-You can use the oc-mirror OpenShift CLI (`oc`) plug-in to mirror all required {product-title} content and other images to your mirror registry by using a single tool. It provides the following features:
+You can use the oc-mirror OpenShift CLI (`oc`) plugin to mirror all required {product-title} content and other images to your mirror registry by using a single tool. It provides the following features:
* Provides a centralized method to mirror {product-title} releases, Operators, helm charts, and other images.
* Maintains update paths for {product-title} and Operators.
@@ -15,11 +15,11 @@ You can use the oc-mirror OpenShift CLI (`oc`) plug-in to mirror all required {p
* Prunes images from the target mirror registry that were excluded from the image set configuration since the previous execution.
* Optionally generates supporting artifacts for OpenShift Update Service (OSUS) usage.
-When using the oc-mirror plug-in, you specify which content to mirror in an image set configuration file. In this YAML file, you can fine-tune the configuration to only include the {product-title} releases and Operators that your cluster needs. This reduces the amount of data that you need to download and transfer. The oc-mirror plug-in can also mirror arbitrary helm charts and additional container images to assist users in seamlessly synchronizing their workloads onto mirror registries.
+When using the oc-mirror plugin, you specify which content to mirror in an image set configuration file. In this YAML file, you can fine-tune the configuration to only include the {product-title} releases and Operators that your cluster needs. This reduces the amount of data that you need to download and transfer. The oc-mirror plugin can also mirror arbitrary helm charts and additional container images to assist users in seamlessly synchronizing their workloads onto mirror registries.
-The first time you run the oc-mirror plug-in, it populates your mirror registry with the required content to perform your disconnected cluster installation. In order for your disconnected cluster to continue receiving updates, you must keep your mirror registry updated. To update your mirror registry, you run the oc-mirror plug-in using the same configuration as the first time you ran it. The oc-mirror plug-in references the metadata from the storage backend and only downloads what has been released since the last time you ran the tool. This provides update paths for {product-title} and Operators and performs dependency resolution as required.
+The first time you run the oc-mirror plugin, it populates your mirror registry with the required content to perform your disconnected cluster installation. In order for your disconnected cluster to continue receiving updates, you must keep your mirror registry updated. To update your mirror registry, you run the oc-mirror plugin using the same configuration as the first time you ran it. The oc-mirror plugin references the metadata from the storage backend and only downloads what has been released since the last time you ran the tool. This provides update paths for {product-title} and Operators and performs dependency resolution as required.
[IMPORTANT]
====
-When using the oc-mirror CLI plug-in to populate a mirror registry, any further updates to the mirror registry must be made using the oc-mirror tool.
+When using the oc-mirror CLI plugin to populate a mirror registry, any further updates to the mirror registry must be made using the oc-mirror tool.
====
diff --git a/modules/oc-mirror-creating-image-set-config.adoc b/modules/oc-mirror-creating-image-set-config.adoc
index b070f039c6..0118329f16 100644
--- a/modules/oc-mirror-creating-image-set-config.adoc
+++ b/modules/oc-mirror-creating-image-set-config.adoc
@@ -6,13 +6,13 @@
[id="oc-mirror-creating-image-set-config_{context}"]
= Creating the image set configuration
-Before you can use the oc-mirror plug-in to mirror image sets, you must create an image set configuration file. This image set configuration file defines which {product-title} releases, Operators, and other images to mirror, along with other configuration settings for the oc-mirror plug-in.
+Before you can use the oc-mirror plugin to mirror image sets, you must create an image set configuration file. This image set configuration file defines which {product-title} releases, Operators, and other images to mirror, along with other configuration settings for the oc-mirror plugin.
-You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a registry that supports link:https://docs.docker.com/registry/spec/manifest-v2-2[Docker v2-2]. The oc-mirror plug-in stores metadata in this storage backend during image set creation.
+You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a registry that supports link:https://docs.docker.com/registry/spec/manifest-v2-2[Docker v2-2]. The oc-mirror plugin stores metadata in this storage backend during image set creation.
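+For orientation, a minimal image set configuration that uses a registry storage backend might look like the following sketch; the API version reflects the `v1alpha2` schema and all host names are placeholders:
+[source,yaml]
+----
+kind: ImageSetConfiguration
+apiVersion: mirror.openshift.io/v1alpha2
+storageConfig:
+  registry:
+    imageURL: mirror.example.com:5000/oc-mirror-metadata
+mirror:
+  platform:
+    channels:
+    - name: stable-4.11
+----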
[IMPORTANT]
====
-Do not delete or modify the metadata that is generated by the oc-mirror plug-in. You must use the same storage backend every time you run the oc-mirror plug-in for the same mirror registry.
+Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry.
====
.Prerequisites
diff --git a/modules/oc-mirror-differential-updates.adoc b/modules/oc-mirror-differential-updates.adoc
index 4dc3a5cce4..2106bbac14 100644
--- a/modules/oc-mirror-differential-updates.adoc
+++ b/modules/oc-mirror-differential-updates.adoc
@@ -6,18 +6,18 @@
[id="oc-mirror-differential-updates_{context}"]
= Updating your mirror registry content
-After you publish the initial image set to the mirror registry, you can use the oc-mirror plug-in to keep your disconnected clusters updated.
+After you publish the initial image set to the mirror registry, you can use the oc-mirror plugin to keep your disconnected clusters updated.
Depending on your image set configuration, oc-mirror automatically detects newer releases of {product-title} and your selected Operators that have been published after you completed the initial mirror. It is recommended to run oc-mirror at regular intervals, for example in a nightly cron job, to receive product and security updates on a timely basis.
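+For example, a nightly cron entry could take a form similar to the following sketch, where the configuration path and registry are placeholders:
+[source,terminal]
+----
+0 1 * * * oc mirror --config=/home/mirror/imageset-config.yaml docker://mirror.example.com:5000
+----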
.Prerequisites
-* You have used the oc-mirror plug-in to mirror the initial image set to your mirror registry.
-* You have access to the storage backend that was used for the initial execution of the oc-mirror plug-in.
+* You have used the oc-mirror plugin to mirror the initial image set to your mirror registry.
+* You have access to the storage backend that was used for the initial execution of the oc-mirror plugin.
+
[NOTE]
====
-You must use the same storage backend as the initial execution of oc-mirror for the same mirror registry. Do not delete or modify the metadata image that is generated by the oc-mirror plug-in.
+You must use the same storage backend as the initial execution of oc-mirror for the same mirror registry. Do not delete or modify the metadata image that is generated by the oc-mirror plugin.
====
.Procedure
@@ -29,7 +29,7 @@ You must use the same storage backend as the initial execution of oc-mirror for
[IMPORTANT]
====
* You must provide the same storage backend so that only a differential image set is created and mirrored.
-* If you specified a top-level namespace for the mirror registry during the initial image set creation, then you must use this same namespace every time you run the oc-mirror plug-in for the same mirror registry.
+* If you specified a top-level namespace for the mirror registry during the initial image set creation, then you must use this same namespace every time you run the oc-mirror plugin for the same mirror registry.
====
. Install the `ImageContentSourcePolicy` and `CatalogSource` resources into the cluster.
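+
+For example, assuming the default workspace layout, you can apply the generated manifests from the results directory, whose name includes a timestamp:
+
+[source,terminal]
+----
+$ oc apply -f ./oc-mirror-workspace/results-1639608409/
+----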
diff --git a/modules/oc-mirror-disk-to-mirror.adoc b/modules/oc-mirror-disk-to-mirror.adoc
index 31ff7c2115..d0c616619a 100644
--- a/modules/oc-mirror-disk-to-mirror.adoc
+++ b/modules/oc-mirror-disk-to-mirror.adoc
@@ -6,12 +6,12 @@
[id="oc-mirror-disk-to-mirror_{context}"]
= Mirroring from disk to mirror
-You can use the oc-mirror plug-in to mirror the contents of a generated image set to the target mirror registry.
+You can use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry.
.Prerequisites
* You have installed the OpenShift CLI (`oc`) in the disconnected environment.
-* You have installed the `oc-mirror` CLI plug-in in the disconnected environment.
+* You have installed the `oc-mirror` CLI plugin in the disconnected environment.
* You have generated the image set file by using the `oc mirror` command.
* You have transferred the image set file to the disconnected environment.
// TODO: Confirm prereq about not needing a cluster, but need pull secret misc
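+
+For reference, publishing a transferred image set archive to the target registry might look similar to the following sketch; the archive name and registry host are placeholders:
+
+[source,terminal]
+----
+$ oc mirror --from=./mirror_seq1_000000.tar docker://registry.example.com:5000
+----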
diff --git a/modules/oc-mirror-dry-run.adoc b/modules/oc-mirror-dry-run.adoc
index 3453f23bd7..aa49407a63 100644
--- a/modules/oc-mirror-dry-run.adoc
+++ b/modules/oc-mirror-dry-run.adoc
@@ -12,7 +12,7 @@ You can use oc-mirror to perform a dry run, without actually mirroring any image
* You have access to the internet to obtain the necessary container images.
* You have installed the OpenShift CLI (`oc`).
-* You have installed the `oc-mirror` CLI plug-in.
+* You have installed the `oc-mirror` CLI plugin.
* You have created the image set configuration file.
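+
+For reference, a dry run that validates the image set configuration without mirroring any images might look similar to the following sketch; the configuration path and registry host are placeholders:
+
+[source,terminal]
+----
+$ oc mirror --config=./imageset-config.yaml docker://registry.example.com:5000 --dry-run
+----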
.Procedure
diff --git a/modules/oc-mirror-imageset-config-params.adoc b/modules/oc-mirror-imageset-config-params.adoc
index cab2f2cb08..5f4d000873 100644
--- a/modules/oc-mirror-imageset-config-params.adoc
+++ b/modules/oc-mirror-imageset-config-params.adoc
@@ -6,7 +6,7 @@
[id="oc-mirror-imageset-config-params_{context}"]
= Image set configuration parameters
-The oc-mirror plug-in requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the `ImageSetConfiguration` resource.
+The oc-mirror plugin requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the `ImageSetConfiguration` resource.
// TODO: Consider adding examples for the general "Object" params
@@ -48,7 +48,7 @@ additionalImages:
|Array of strings. For example: `docker.io/library/alpine`
|`mirror.helm`
-|The helm configuration of the image set. Note that the oc-mirror plug-in supports only helm charts that do not require user input when rendered.
+|The helm configuration of the image set. Note that the oc-mirror plugin supports only Helm charts that do not require user input when rendered.
|Object
|`mirror.helm.local`
diff --git a/modules/oc-mirror-installing-plugin.adoc b/modules/oc-mirror-installing-plugin.adoc
index 8797370044..95095386f3 100644
--- a/modules/oc-mirror-installing-plugin.adoc
+++ b/modules/oc-mirror-installing-plugin.adoc
@@ -4,9 +4,9 @@
:_content-type: PROCEDURE
[id="installation-oc-mirror-installing-plugin_{context}"]
-= Installing the oc-mirror OpenShift CLI plug-in
+= Installing the oc-mirror OpenShift CLI plugin
-To use the oc-mirror OpenShift CLI plug-in to mirror registry images, you must install the plug-in. If you are mirroring image sets in a fully disconnected environment, ensure that you install the oc-mirror plug-in on the host with internet access and the host in the disconnected environment with access to the mirror registry.
+To use the oc-mirror OpenShift CLI plugin to mirror registry images, you must install the plugin. If you are mirroring image sets in a fully disconnected environment, ensure that you install the oc-mirror plugin on both the host with internet access and the host in the disconnected environment that has access to the mirror registry.
.Prerequisites
@@ -14,7 +14,7 @@ To use the oc-mirror OpenShift CLI plug-in to mirror registry images, you must i
.Procedure
-. Download the oc-mirror CLI plug-in.
+. Download the oc-mirror CLI plugin.
.. Navigate to the link:https://console.redhat.com/openshift/downloads[Downloads] page of the {cluster-manager-url}.
@@ -27,7 +27,7 @@ To use the oc-mirror OpenShift CLI plug-in to mirror registry images, you must i
$ tar xvzf oc-mirror.tar.gz
----
-. If necessary, update the plug-in file to be executable:
+. If necessary, update the plugin file to be executable:
+
[source,terminal]
----
@@ -39,7 +39,7 @@ $ chmod +x oc-mirror
Do not rename the `oc-mirror` file.
====
-. Install the oc-mirror CLI plug-in by placing the file in your `PATH`, for example, `/usr/local/bin`:
+. Install the oc-mirror CLI plugin by placing the file in your `PATH`, for example, `/usr/local/bin`:
+
[source,terminal]
----
@@ -48,7 +48,7 @@ $ sudo mv oc-mirror /usr/local/bin/.
.Verification
-* Run `oc mirror help` to verify that the plug-in was successfully installed:
+* Run `oc mirror help` to verify that the plugin was successfully installed:
+
[source,terminal]
----
diff --git a/modules/oc-mirror-mirror-to-disk.adoc b/modules/oc-mirror-mirror-to-disk.adoc
index 8b6e2111cf..51fec08637 100644
--- a/modules/oc-mirror-mirror-to-disk.adoc
+++ b/modules/oc-mirror-mirror-to-disk.adoc
@@ -6,27 +6,27 @@
[id="oc-mirror-mirror-to-disk_{context}"]
= Mirroring from mirror to disk
-You can use the oc-mirror plug-in to generate an image set and save the contents to disk. The generated image set can then be transferred to the disconnected environment and mirrored to the target registry.
+You can use the oc-mirror plugin to generate an image set and save the contents to disk. The generated image set can then be transferred to the disconnected environment and mirrored to the target registry.
[IMPORTANT]
====
-Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundreds of gigabytes of data to disk.
+Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundred gigabytes of data to disk.
-The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plug-in again, the generated image set is often smaller.
+The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plugin again, the generated image set is often smaller.
====
-You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a docker v2 registry. The oc-mirror plug-in stores metadata in this storage backend during image set creation.
+You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation.
[IMPORTANT]
====
-Do not delete or modify the metadata that is generated by the oc-mirror plug-in. You must use the same storage backend every time you run the oc-mirror plug-in for the same mirror registry.
+Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry.
====
.Prerequisites
* You have access to the internet to obtain the necessary container images.
* You have installed the OpenShift CLI (`oc`).
-* You have installed the `oc-mirror` CLI plug-in.
+* You have installed the `oc-mirror` CLI plugin.
* You have created the image set configuration file.
// TODO: Don't need a running cluster, but need some pull secrets. Sync w/ team on this
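+
+For reference, generating an image set archive on disk might look similar to the following sketch; the configuration path and output directory are placeholders:
+
+[source,terminal]
+----
+$ oc mirror --config=./imageset-config.yaml file://./mirror-output
+----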
diff --git a/modules/oc-mirror-mirror-to-mirror.adoc b/modules/oc-mirror-mirror-to-mirror.adoc
index e5fa3fdefc..65515751b7 100644
--- a/modules/oc-mirror-mirror-to-mirror.adoc
+++ b/modules/oc-mirror-mirror-to-mirror.adoc
@@ -6,20 +6,20 @@
[id="oc-mirror-mirror-to-mirror_{context}"]
= Mirroring from mirror to mirror
-You can use the oc-mirror plug-in to mirror an image set directly to a target mirror registry that is accessible during image set creation.
+You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation.
-You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plug-in stores metadata in this storage backend during image set creation.
+You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation.
[IMPORTANT]
====
-Do not delete or modify the metadata that is generated by the oc-mirror plug-in. You must use the same storage backend every time you run the oc-mirror plug-in for the same mirror registry.
+Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry.
====
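+
+For reference, a direct mirror-to-mirror run might look similar to the following sketch; the configuration path and registry host are placeholders:
+
+[source,terminal]
+----
+$ oc mirror --config=./imageset-config.yaml docker://registry.example.com:5000
+----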
.Prerequisites
* You have access to the internet to obtain the necessary container images.
* You have installed the OpenShift CLI (`oc`).
-* You have installed the `oc-mirror` CLI plug-in.
+* You have installed the `oc-mirror` CLI plugin.
* You have created the image set configuration file.
.Procedure
diff --git a/modules/oc-mirror-oci-format.adoc b/modules/oc-mirror-oci-format.adoc
index 3e7f469ee6..abae4aaa48 100644
--- a/modules/oc-mirror-oci-format.adoc
+++ b/modules/oc-mirror-oci-format.adoc
@@ -6,16 +6,16 @@
[id="oc-mirror-oci-format_{context}"]
= Mirroring file-based catalog Operator images in OCI format
-You can use the oc-mirror plug-in to mirror Operators in the Open Container Initiative (OCI) image format, instead of Docker v2 format. You can copy Operator images to a file-based catalog on disk in OCI format. Then you can copy local OCI images to your target mirror registry.
+You can use the oc-mirror plugin to mirror Operators in the Open Container Initiative (OCI) image format, instead of Docker v2 format. You can copy Operator images to a file-based catalog on disk in OCI format. Then you can copy local OCI images to your target mirror registry.
-:FeatureName: Using the oc-mirror plug-in to mirror Operator images in OCI format
+:FeatureName: Using the oc-mirror plugin to mirror Operator images in OCI format
include::snippets/technology-preview.adoc[]
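+
+As an assumption for illustration only, because this Technology Preview invocation is subject to change, an OCI mirroring run might look similar to the following sketch; the flags, catalog path, and configuration path are illustrative:
+
+[source,terminal]
+----
+$ oc mirror --config=./imageset-config.yaml --use-oci-feature \
+    --oci-feature-action=copy oci://my-oci-catalog
+----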
.Prerequisites
* You have access to the internet to obtain the necessary container images.
* You have installed the OpenShift CLI (`oc`).
-* You have installed the `oc-mirror` CLI plug-in.
+* You have installed the `oc-mirror` CLI plugin.
.Procedure
diff --git a/modules/oc-mirror-support.adoc b/modules/oc-mirror-support.adoc
index 8d4e896b6f..e8ba194be6 100644
--- a/modules/oc-mirror-support.adoc
+++ b/modules/oc-mirror-support.adoc
@@ -6,6 +6,6 @@
[id="oc-mirror-support_{context}"]
= oc-mirror compatibility and support
-The oc-mirror plug-in supports mirroring {product-title} payload images and Operator catalogs for {product-title} versions 4.9 and later.
+The oc-mirror plugin supports mirroring {product-title} payload images and Operator catalogs for {product-title} versions 4.9 and later.
-Use the latest available version of the oc-mirror plug-in regardless of which versions of {product-title} you need to mirror.
+Use the latest available version of the oc-mirror plugin regardless of which versions of {product-title} you need to mirror.
diff --git a/modules/oc-mirror-updating-registry-about.adoc b/modules/oc-mirror-updating-registry-about.adoc
index b1b063c8aa..d06e245177 100644
--- a/modules/oc-mirror-updating-registry-about.adoc
+++ b/modules/oc-mirror-updating-registry-about.adoc
@@ -6,7 +6,7 @@
[id="oc-mirror-updating-registry-about_{context}"]
= About updating your mirror registry content
-When you run the oc-mirror plug-in again, it generates an image set that only contains new and updated images since the previous execution. Because it only pulls in the differences since the previous image set was created, the generated image set is often smaller and faster to process than the initial image set.
+When you run the oc-mirror plugin again, it generates an image set that only contains new and updated images since the previous execution. Because it only pulls in the differences since the previous image set was created, the generated image set is often smaller and faster to process than the initial image set.
[IMPORTANT]
====
diff --git a/modules/odc-customizing-user-perspectives.adoc b/modules/odc-customizing-user-perspectives.adoc
index ca3ef0f4da..4171ca8dce 100644
--- a/modules/odc-customizing-user-perspectives.adoc
+++ b/modules/odc-customizing-user-perspectives.adoc
@@ -6,7 +6,7 @@
[id="odc-customizing-user-perspectives_{context}"]
= Customizing user perspectives
-The {product-title} web console provides two perspectives by default, *Administrator* and *Developer*. You might have more perspectives available depending on installed console plug-ins. As a cluster administrator, you can show or hide a perspective for all users or for a specific user role. Customizing perspectives ensures that users can view only the perspectives that are applicable to their role and tasks. For example, you can hide the *Administrator* perspective from unprivileged users so that they cannot manage cluster resources, users, and projects. Similarly, you can show the *Developer* perspective to users with the developer role so that they can create, deploy, and monitor applications.
+The {product-title} web console provides two perspectives by default, *Administrator* and *Developer*. You might have more perspectives available depending on installed console plugins. As a cluster administrator, you can show or hide a perspective for all users or for a specific user role. Customizing perspectives ensures that users can view only the perspectives that are applicable to their role and tasks. For example, you can hide the *Administrator* perspective from unprivileged users so that they cannot manage cluster resources, users, and projects. Similarly, you can show the *Developer* perspective to users with the developer role so that they can create, deploy, and monitor applications.
You can also customize the perspective visibility for users based on role-based access control (RBAC). For example, if you customize a perspective for monitoring purposes, which requires specific permissions, you can define that the perspective is visible only to users with required permissions.
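+
+As an illustrative sketch, and assuming a console operator version that supports perspective customization, hiding the *Administrator* perspective from users who cannot list namespaces might look similar to the following:
+
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: Console
+metadata:
+  name: cluster
+spec:
+  customization:
+    perspectives:
+    - id: admin
+      visibility:
+        state: AccessReview
+        accessReview:
+          required:
+          - resource: namespaces
+            verb: list
+----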
diff --git a/modules/op-release-notes-1-5.adoc b/modules/op-release-notes-1-5.adoc
index a4ba2bd484..d7d8c247ff 100644
--- a/modules/op-release-notes-1-5.adoc
+++ b/modules/op-release-notes-1-5.adoc
@@ -121,7 +121,7 @@ spec:
-* Support for optional workspaces are added to the `start` command.
+* Support for optional workspaces is added to the `start` command.
-* If the plug-ins are not present in the `plugins` directory, they are searched in the current path.
+* If the plugins are not present in the `plugins` directory, they are searched for in the current path.
-* The `tkn start [task | clustertask | pipeline]` command starts interactively and ask for the `params` value, even when you specify the default parameters are specified. To stop the interactive prompts, pass the `--use-param-defaults` flag at the time of invoking the command. For example:
+* The `tkn start [task | clustertask | pipeline]` command starts interactively and asks for the `params` value, even when default parameters are specified. To stop the interactive prompts, pass the `--use-param-defaults` flag when invoking the command. For example:
+
diff --git a/modules/op-release-notes-1-8.adoc b/modules/op-release-notes-1-8.adoc
index 31f06489ed..7b8cbab5f1 100644
--- a/modules/op-release-notes-1-8.adoc
+++ b/modules/op-release-notes-1-8.adoc
@@ -107,7 +107,7 @@ Because this feature is available by default, you no longer need to set the `pip
// link:https://github.com/tektoncd/cli/pull/1540[(#1540)]
// Chmouel Boudjnah @chmouel
-* This update adds a list of available plug-ins to the output of the `tkn --help` command.
+* This update adds a list of available plugins to the output of the `tkn --help` command.
// link:https://github.com/tektoncd/cli/pull/1535[(#1535)]
// Chmouel Boudjnah @chmouel
diff --git a/modules/openshift-architecture-common-terms.adoc b/modules/openshift-architecture-common-terms.adoc
index 4ff18512f7..161f9ae80b 100644
--- a/modules/openshift-architecture-common-terms.adoc
+++ b/modules/openshift-architecture-common-terms.adoc
@@ -11,8 +11,8 @@ This glossary defines common terms that are used in the architecture content.
access policies::
-A set of roles that dictate how users, applications, and entities within a cluster interacts with one another. An access policy increases cluster security.
+A set of roles that dictate how users, applications, and entities within a cluster interact with one another. An access policy increases cluster security.
-admission plug-ins::
-Admission plug-ins enforce security policies, resource limitations, or configuration requirements.
+admission plugins::
+Admission plugins enforce security policies, resource limitations, or configuration requirements.
authentication::
To control access to an {product-title} cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an {product-title} cluster, you must authenticate to the {product-title} API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the {product-title} API.
diff --git a/modules/openshift-storage-common-terms.adoc b/modules/openshift-storage-common-terms.adoc
index d2c843101d..7f8d14b539 100644
--- a/modules/openshift-storage-common-terms.adoc
+++ b/modules/openshift-storage-common-terms.adoc
@@ -30,7 +30,7 @@ Pods and containers can require temporary or transient local storage for their o
Fiber channel:: A networking technology that is used to transfer data among data centers, computer servers, switches and storage.
-FlexVolume:: FlexVolume is an out-of-tree plug-in interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plug-in path on each node and in some cases the control plane nodes.
+FlexVolume:: FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes.
fsGroup:: The fsGroup defines a file system group ID of a pod.
diff --git a/modules/optimizing-mtu-networking.adoc b/modules/optimizing-mtu-networking.adoc
index 77c52a7b62..4815108239 100644
--- a/modules/optimizing-mtu-networking.adoc
+++ b/modules/optimizing-mtu-networking.adoc
@@ -9,7 +9,7 @@ There are two important maximum transmission units (MTUs): the network interface
The NIC MTU is only configured at the time of {product-title} installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value.
-The network plug-in overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, set this to `1450`. On a jumbo frame ethernet network, set this to `8950`.
+The network plugin overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, set this to `1450`. On a jumbo frame ethernet network, set this to `8950`.
For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum.
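+
+For example, on an OVN-Kubernetes cluster with a jumbo frame network, a 9000 byte NIC MTU leaves 8900 bytes after the 100 byte Geneve overhead. A sketch of the corresponding cluster network configuration, assuming the MTU is set through the Network operator, might look similar to the following:
+
+[source,yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: Network
+metadata:
+  name: cluster
+spec:
+  defaultNetwork:
+    type: OVNKubernetes
+    ovnKubernetesConfig:
+      mtu: 8900
+----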
diff --git a/modules/osdk-ansible-inside-operator-local.adoc b/modules/osdk-ansible-inside-operator-local.adoc
index 17a47588ec..c58e99fb54 100644
--- a/modules/osdk-ansible-inside-operator-local.adoc
+++ b/modules/osdk-ansible-inside-operator-local.adoc
@@ -16,7 +16,7 @@ You can customize the roles path by setting the environment variable `ANSIBLE_RO
.Prerequisites
- link:https://ansible-runner.readthedocs.io/en/latest/install.html[Ansible Runner] v2.0.2+
-- link:https://github.com/ansible/ansible-runner-http[Ansible Runner HTTP Event Emitter plug-in] v1.0.0+
+- link:https://github.com/ansible/ansible-runner-http[Ansible Runner HTTP Event Emitter plugin] v1.0.0+
- Performed the previous steps for testing the Kubernetes Collection locally
.Procedure
diff --git a/modules/osdk-ansible-k8s-install.adoc b/modules/osdk-ansible-k8s-install.adoc
index 5b42035386..febae014ba 100644
--- a/modules/osdk-ansible-k8s-install.adoc
+++ b/modules/osdk-ansible-k8s-install.adoc
@@ -33,7 +33,7 @@ $ pip3 install openshift
$ ansible-galaxy collection install community.kubernetes
----
-* If you have already initialized your Operator, you might have a `requirements.yml` file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the `community.kubernetes` collection as well as the `operator_sdk.util` collection, which provides modules and plug-ins for Operator-specific fuctions.
+* If you have already initialized your Operator, you might have a `requirements.yml` file at the top level of your project. This file specifies Ansible dependencies that must be installed for your Operator to function. By default, this file installs the `community.kubernetes` collection as well as the `operator_sdk.util` collection, which provides modules and plugins for Operator-specific functions.
+
To install the dependent modules from the `requirements.yml` file:
+
diff --git a/modules/osdk-cli-ref-init.adoc b/modules/osdk-cli-ref-init.adoc
index 0baa6be427..07ae73d2a3 100644
--- a/modules/osdk-cli-ref-init.adoc
+++ b/modules/osdk-cli-ref-init.adoc
@@ -6,7 +6,7 @@
[id="osdk-cli-ref-init_{context}"]
= init
-The `operator-sdk init` command initializes an Operator project and generates, or _scaffolds_, a default project directory layout for the given plug-in.
+The `operator-sdk init` command initializes an Operator project and generates, or _scaffolds_, a default project directory layout for the given plugin.
This command writes the following files:
@@ -28,7 +28,7 @@ This command writes the following files:
|Help output for the `init` command.
|`--plugins` (string)
-|Name and optionally version of the plug-in to initialize the project with. Available plug-ins are `ansible.sdk.operatorframework.io/v1`, `go.kubebuilder.io/v2`, `go.kubebuilder.io/v3`, and `helm.sdk.operatorframework.io/v1`.
+|Name and optionally version of the plugin to initialize the project with. Available plugins are `ansible.sdk.operatorframework.io/v1`, `go.kubebuilder.io/v2`, `go.kubebuilder.io/v3`, and `helm.sdk.operatorframework.io/v1`.
|`--project-version`
|Project version. Available values are `2` and `3-alpha`, which is the default.
diff --git a/modules/osdk-common-prereqs.adoc b/modules/osdk-common-prereqs.adoc
index f2fce28f7e..6dd9645813 100644
--- a/modules/osdk-common-prereqs.adoc
+++ b/modules/osdk-common-prereqs.adoc
@@ -41,7 +41,7 @@ endif::[]
ifdef::ansible[]
* link:https://docs.ansible.com/ansible/2.9/index.html[Ansible] v2.9.0
* link:https://ansible-runner.readthedocs.io/en/latest/install.html[Ansible Runner] v2.0.2+
-* link:https://github.com/ansible/ansible-runner-http[Ansible Runner HTTP Event Emitter plug-in] v1.0.0+
+* link:https://github.com/ansible/ansible-runner-http[Ansible Runner HTTP Event Emitter plugin] v1.0.0+
* link:https://www.python.org/downloads/[Python] 3.8.6+
* link:https://pypi.org/project/openshift/[OpenShift Python client] v0.12.0+
endif::[]
diff --git a/modules/osdk-create-project.adoc b/modules/osdk-create-project.adoc
index e457e2069f..07c7e01105 100644
--- a/modules/osdk-create-project.adoc
+++ b/modules/osdk-create-project.adoc
@@ -58,13 +58,13 @@ endif::[]
. Run the `operator-sdk init` command
ifdef::ansible[]
-with the `ansible` plug-in
+with the `ansible` plugin
endif::[]
ifdef::helm[]
-with the `helm` plug-in
+with the `helm` plugin
endif::[]
ifdef::java[]
-with the `quarkus` plug-in
+with the `quarkus` plugin
endif::[]
to initialize the project:
+
@@ -78,7 +78,7 @@ $ operator-sdk init \
+
[NOTE]
====
-The `operator-sdk init` command uses the Go plug-in by default.
+The `operator-sdk init` command uses the Go plugin by default.
====
+
The `operator-sdk init` command generates a `go.mod` file to be used with link:https://golang.org/ref/mod[Go modules]. The `--repo` flag is required when creating a project outside of `$GOPATH/src/`, because generated files require a valid module path.
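+
+For example, an invocation outside of `$GOPATH/src/` might look similar to the following sketch; the domain and repository are placeholders:
+
+[source,terminal]
+----
+$ operator-sdk init \
+    --domain=example.com \
+    --repo=github.com/example-inc/memcached-operator
+----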
@@ -102,7 +102,7 @@ $ operator-sdk init \
+
[NOTE]
====
-By default, the `helm` plug-in initializes a project using a boilerplate Helm chart. You can use additional flags, such as the `--helm-chart` flag, to initialize a project using an existing Helm chart.
+By default, the `helm` plugin initializes a project using a boilerplate Helm chart. You can use additional flags, such as the `--helm-chart` flag, to initialize a project using an existing Helm chart.
====
+
The `init` command creates the `nginx-operator` project specifically for watching a resource with API version `example.com/v1` and kind `Nginx`.
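+
+For example, initializing the project from an existing local chart instead of the boilerplate might look similar to the following sketch; the group, version, kind, and chart path are placeholders:
+
+[source,terminal]
+----
+$ operator-sdk init \
+    --plugins=helm.sdk.operatorframework.io/v1 \
+    --group=demo --version=v1 --kind=Nginx \
+    --helm-chart=./path/to/existing-chart
+----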
diff --git a/modules/osdk-csv-manual-annotations.adoc b/modules/osdk-csv-manual-annotations.adoc
index cb4479b92f..52dfe7db4a 100644
--- a/modules/osdk-csv-manual-annotations.adoc
+++ b/modules/osdk-csv-manual-annotations.adoc
@@ -27,9 +27,9 @@ The following table lists Operator metadata annotations that can be manually def
|Infrastructure features supported by the Operator. Users can view and filter by these features when discovering Operators through OperatorHub in the web console. Valid, case-sensitive values:
- `disconnected`: Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. All related images required for mirroring are listed by the Operator.
-- `cnf`: Operator provides a Cloud-native Network Functions (CNF) Kubernetes plug-in.
-- `cni`: Operator provides a Container Network Interface (CNI) Kubernetes plug-in.
-- `csi`: Operator provides a Container Storage Interface (CSI) Kubernetes plug-in.
+- `cnf`: Operator provides a Cloud-native Network Functions (CNF) Kubernetes plugin.
+- `cni`: Operator provides a Container Network Interface (CNI) Kubernetes plugin.
+- `csi`: Operator provides a Container Storage Interface (CSI) Kubernetes plugin.
- `fips`: Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode.
[IMPORTANT]
diff --git a/modules/osdk-hh-project-layout.adoc b/modules/osdk-hh-project-layout.adoc
index 218b930fa0..2130efc2cd 100644
--- a/modules/osdk-hh-project-layout.adoc
+++ b/modules/osdk-hh-project-layout.adoc
@@ -20,13 +20,13 @@ The Hybrid Helm Operator scaffolding is customized to be compatible with both He
|Build file with helper targets to help you work with your project.
|`PROJECT`
-|YAML file containing metadata information for the Operator. Represents the project's configuration and is used to track useful information for the CLI and plug-ins.
+|YAML file containing metadata information for the Operator. Represents the project's configuration and is used to track useful information for the CLI and plugins.
|`bin/`
|Contains useful binaries such as the `manager` which is used to run your project locally and the `kustomize` utility used for the project configuration.
|`config/`
-|Contains configuration files, including all link:https://kustomize.io/[Kustomize] manifests, to launch your Operator project on a cluster. Plug-ins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory.
+|Contains configuration files, including all link:https://kustomize.io/[Kustomize] manifests, to launch your Operator project on a cluster. Plugins might use it to provide functionality. For example, for the Operator SDK to help create your Operator bundle, the CLI looks up the CRDs and CRs which are scaffolded in this directory.
`config/crd/`:: Contains custom resource definitions (CRDs).
@@ -57,7 +57,7 @@ The Hybrid Helm Operator scaffolding is customized to be compatible with both He
|Main program of the Operator. Instantiates a new manager that registers all custom resource definitions (CRDs) in the `apis/` directory and starts all controllers in the `controllers/` directory.
|`helm-charts/`
-|Contains the Helm charts which can be specified using the `create api` command with the Helm plug-in.
+|Contains the Helm charts which can be specified using the `create api` command with the Helm plugin.
|`watches.yaml`
|Contains group/version/kind (GVK) and Helm chart location. Used to configure the Helm watches.
diff --git a/modules/osdk-java-create-api-controller.adoc b/modules/osdk-java-create-api-controller.adoc
index 21d3d9770d..4f81a1c510 100644
--- a/modules/osdk-java-create-api-controller.adoc
+++ b/modules/osdk-java-create-api-controller.adoc
@@ -20,7 +20,7 @@ $ operator-sdk create api \
--version=v1 \ <3>
--kind=Memcached <4>
----
-<1> Set the plug-in flag to `quarkus`.
+<1> Set the plugin flag to `quarkus`.
<2> Set the group flag to `cache`.
<3> Set the version flag to `v1`.
<4> Set the kind flag to `Memcached`.
diff --git a/modules/osdk-quickstart.adoc b/modules/osdk-quickstart.adoc
index d7ad32c83d..1fdae611be 100644
--- a/modules/osdk-quickstart.adoc
+++ b/modules/osdk-quickstart.adoc
@@ -59,13 +59,13 @@ $ cd {app}-operator
.. Run the `operator-sdk init` command
ifdef::ansible[]
-with the `ansible` plug-in
+with the `ansible` plugin
endif::[]
ifdef::helm[]
-with the `helm` plug-in
+with the `helm` plugin
endif::[]
ifdef::java[]
-with the `quarkus` plug-in
+with the `quarkus` plugin
endif::[]
to initialize the project:
+
@@ -77,7 +77,7 @@ $ operator-sdk init \
--repo=github.com/example-inc/{app}-operator
----
+
-The command uses the Go plug-in by default.
+The command uses the Go plugin by default.
endif::[]
ifdef::ansible[]
----
diff --git a/modules/osdk-scorecard-config.adoc b/modules/osdk-scorecard-config.adoc
index 9d6a50fece..c7a6a208e4 100644
--- a/modules/osdk-scorecard-config.adoc
+++ b/modules/osdk-scorecard-config.adoc
@@ -7,7 +7,7 @@
[id="osdk-scorecard-config_{context}"]
= Scorecard configuration
-The scorecard tool uses a configuration that allows you to configure internal plug-ins, as well as several global configuration options. Tests are driven by a configuration file named `config.yaml`, which is generated by the `make bundle` command, located in your `bundle/` directory:
+The scorecard tool uses a configuration that allows you to configure internal plugins, as well as several global configuration options. Tests are driven by a configuration file named `config.yaml`, which is generated by the `make bundle` command, located in your `bundle/` directory:
[source,terminal]
----
diff --git a/modules/osdk-updating-projects.adoc b/modules/osdk-updating-projects.adoc
index 437bebd86f..4b92a87843 100644
--- a/modules/osdk-updating-projects.adoc
+++ b/modules/osdk-updating-projects.adoc
@@ -250,7 +250,7 @@ require (
endif::golang,hybrid[]
ifdef::hybrid[]
-. Edit your `go.mod` file to update the Helm Operator plug-ins:
+. Edit your `go.mod` file to update the Helm Operator plugins:
+
[source,golang]
----
@@ -285,7 +285,7 @@ var err error
cfg, err = testEnv.Start()
----
-. If you use the Kubernetes declarative plug-in, update your Dockerfile with the following changes:
+. If you use the Kubernetes declarative plugin, update your Dockerfile with the following changes:
.. Add the following changes below the line that begins `COPY controllers/ controllers/`:
+
diff --git a/modules/ossm-extensions-migrating-to-wasmplugin.adoc b/modules/ossm-extensions-migrating-to-wasmplugin.adoc
index a783fd59fb..909eacc14d 100644
--- a/modules/ossm-extensions-migrating-to-wasmplugin.adoc
+++ b/modules/ossm-extensions-migrating-to-wasmplugin.adoc
@@ -6,7 +6,7 @@ This module included in the following assemblies:
[id="ossm-extensions-migrating-to-wasmplugin_{context}"]
= Migrating to `WasmPlugin` resources
-To upgrade your WebAssembly extensions from the `ServiceMeshExtension` API to the `WasmPlugin` API, you rename your plug-in file.
+To upgrade your WebAssembly extensions from the `ServiceMeshExtension` API to the `WasmPlugin` API, you rename your plugin file.
.Prerequisites
@@ -14,11 +14,11 @@ To upgrade your WebAssembly extensions from the `ServiceMeshExtension` API to th
.Procedure
-. Update your container image. If the plug-in is already in `/plugin.wasm` inside the container, skip to the next step. If not:
+. Update your container image. If the plugin is already in `/plugin.wasm` inside the container, skip to the next step. If not:
-.. Ensure the plug-in file is named `plugin.wasm`. You must name the extension file `plugin.wasm`.
+.. Ensure the plugin file is named `plugin.wasm`. You must name the extension file `plugin.wasm`.
-.. Ensure the plug-in file is located in the root (/) directory. You must store extension files in the root of the container filesystem..
+.. Ensure the plugin file is located in the root (/) directory. You must store extension files in the root of the container filesystem.
.. Rebuild your container image and push it to a container registry.
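+
+For example, rebuilding and pushing the image with podman might look similar to the following sketch; the registry and tag are placeholders:
+
+[source,terminal]
+----
+$ podman build -t registry.example.com/my-wasm-plugin:v2 .
+$ podman push registry.example.com/my-wasm-plugin:v2
+----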
diff --git a/modules/ossm-extensions-migration-overview.adoc b/modules/ossm-extensions-migration-overview.adoc
index 4c58426a2b..3b4b599b2a 100644
--- a/modules/ossm-extensions-migration-overview.adoc
+++ b/modules/ossm-extensions-migration-overview.adoc
@@ -10,7 +10,7 @@ The `ServiceMeshExtension` API, which was deprecated in {SMProductName} version
The APIs are very similar. The migration consists of two steps:
-. Renaming your plug-in file and updating the module packaging.
+. Renaming your plugin file and updating the module packaging.
. Creating a `WasmPlugin` resource that references the updated container image.
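+
+As an illustrative sketch, a `WasmPlugin` resource that references the repackaged image might look similar to the following; the names, selector, and image reference are placeholders:
+
+[source,yaml]
+----
+apiVersion: extensions.istio.io/v1alpha1
+kind: WasmPlugin
+metadata:
+  name: my-plugin
+  namespace: bookinfo
+spec:
+  selector:
+    matchLabels:
+      app: productpage
+  url: oci://registry.example.com/my-wasm-plugin:v2
+  phase: AUTHZ
+  pluginConfig: {}
+----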
@@ -87,4 +87,4 @@ The new `WasmPlugin` container image format is similar to the `ServiceMeshExtens
* The `ServiceMeshExtension` container format required a metadata file named `manifest.yaml` in the root directory of the container filesystem. The `WasmPlugin` container format does not require a `manifest.yaml` file.
-* The `.wasm` file (the actual plug-in) that previously could have any filename now must be named `plugin.wasm` and must be located in the root directory of the container filesystem.
+* The `.wasm` file (the actual plugin) that previously could have any filename now must be named `plugin.wasm` and must be located in the root directory of the container filesystem.
diff --git a/modules/ossm-extensions-ref-wasmplugin.adoc b/modules/ossm-extensions-ref-wasmplugin.adoc
index 7545b0c7ca..daa69d1e39 100644
--- a/modules/ossm-extensions-ref-wasmplugin.adoc
+++ b/modules/ossm-extensions-ref-wasmplugin.adoc
@@ -61,7 +61,7 @@ spec:
|spec.selector
|WorkloadSelector
-|Criteria used to select the specific set of pods/VMs on which this plug-in configuration should be applied. If omitted, this configuration will be applied to all workload instances in the same namespace. If the `WasmPlugin` field is present in the config root namespace, it will be applied to all applicable workloads in any namespace.
+|Criteria used to select the specific set of pods/VMs on which this plugin configuration should be applied. If omitted, this configuration will be applied to all workload instances in the same namespace. If the `WasmPlugin` field is present in the config root namespace, it will be applied to all applicable workloads in any namespace.
|No
|spec.url
@@ -96,12 +96,12 @@ spec:
|spec.pluginName
|string
-|The plug-in name used in the Envoy configuration. Some Wasm modules might require this value to select the Wasm plug-in to execute.
+|The plugin name used in the Envoy configuration. Some Wasm modules might require this value to select the Wasm plugin to execute.
|No
|spec.pluginConfig
|Struct
-|The configuration that will be passed on to the plug-in.
+|The configuration that will be passed on to the plugin.
|No
|spec.pluginConfig.verificationKey
@@ -137,7 +137,7 @@ The `PullPolicy` object specifies the pull behavior to be applied when fetching
|If an existing version of the image has been pulled before, that will be used. If no version of the image is present locally, we will pull the latest version.
|Always
-|Always pull the latest version of an image when applying this plug-in.
+|Always pull the latest version of an image when applying this plugin.
|===
`Struct` represents a structured data value, consisting of fields which map to dynamically typed values. In some languages, Struct might be supported by a native representation. For example, in scripting languages like JavaScript a struct is represented as an object.
@@ -152,7 +152,7 @@ The `PullPolicy` object specifies the pull behavior to be applied when fetching
|Map of dynamically typed values.
|===
-`PluginPhase` specifies the phase in the filter chain where the plug-in will be injected.
+`PluginPhase` specifies the phase in the filter chain where the plugin will be injected.
.PluginPhase
[options="header"]
@@ -160,14 +160,14 @@ The `PullPolicy` object specifies the pull behavior to be applied when fetching
|===
| Field | Description
|
-|Control plane decides where to insert the plug-in. This will generally be at the end of the filter chain, right before the Router. Do not specify PluginPhase if the plug-in is independent of others.
+|Control plane decides where to insert the plugin. This will generally be at the end of the filter chain, right before the Router. Do not specify PluginPhase if the plugin is independent of others.
|AUTHN
-|Insert plug-in before Istio authentication filters.
+|Insert plugin before Istio authentication filters.
|AUTHZ
-|Insert plug-in before Istio authorization filters and after Istio authentication filters.
+|Insert plugin before Istio authorization filters and after Istio authentication filters.
|STATS
-|Insert plug-in before Istio stats filters and after Istio authorization filters.
+|Insert plugin before Istio stats filters and after Istio authorization filters.
|===
diff --git a/modules/ossm-migrating-to-20.adoc b/modules/ossm-migrating-to-20.adoc
index 40207324b8..23be3fcf3d 100644
--- a/modules/ossm-migrating-to-20.adoc
+++ b/modules/ossm-migrating-to-20.adoc
@@ -111,7 +111,7 @@ The `ServiceMeshControlPlane` resource has been changed for {SMProductName} vers
The architectural units used by previous versions have been replaced by Istiod. In 2.0 the {SMProductShortName} control plane components Mixer, Pilot, Citadel, Galley, and the sidecar injector functionality have been combined into a single component, Istiod.
-Although Mixer is no longer supported as a control plane component, Mixer policy and telemetry plug-ins are now supported through WASM extensions in Istiod. Mixer can be enabled for policy and telemetry if you need to integrate legacy Mixer plug-ins.
+Although Mixer is no longer supported as a control plane component, Mixer policy and telemetry plugins are now supported through WASM extensions in Istiod. Mixer can be enabled for policy and telemetry if you need to integrate legacy Mixer plugins.
Secret Discovery Service (SDS) is used to distribute certificates and keys to sidecars directly from Istiod. In {SMProductName} version 1.1, secrets were generated by Citadel, which were used by the proxies to retrieve their client certificates and keys.
@@ -179,7 +179,7 @@ This resource is replaced by using a `security.istio.io/v1beta1` AuthorizationPo
[id="ossm-migrating-mixer_{context}"]
=== Mixer plugins
-Mixer components are disabled by default in version 2.0. If you rely on Mixer plug-ins for your workload, you must configure your version 2.0 `ServiceMeshControlPlane` to include the Mixer components.
+Mixer components are disabled by default in version 2.0. If you rely on Mixer plugins for your workload, you must configure your version 2.0 `ServiceMeshControlPlane` to include the Mixer components.
To enable the Mixer policy components, add the following snippet to your `ServiceMeshControlPlane`.
@@ -199,7 +199,7 @@ spec:
type: Mixer
----
-Legacy mixer plug-ins can also be migrated to WASM and integrated using the new ServiceMeshExtension (maistra.io/v1alpha1) custom resource.
+Legacy Mixer plugins can also be migrated to WASM and integrated using the new `ServiceMeshExtension` (maistra.io/v1alpha1) custom resource.
Built-in WASM filters included in the upstream Istio distribution are not available in {SMProductName} 2.0.
diff --git a/modules/ossm-multitenant.adoc b/modules/ossm-multitenant.adoc
index 1dc4c0e418..e7722787ae 100644
--- a/modules/ossm-multitenant.adoc
+++ b/modules/ossm-multitenant.adoc
@@ -19,7 +19,7 @@ Every project in the `ServiceMeshMemberRoll` `members` list will have a `RoleBin
{SMProductName} configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how {product-title} software-defined networking (SDN) is configured. See About OpenShift SDN for additional details.
-If the {product-title} cluster is configured to use the SDN plug-in:
+If the {product-title} cluster is configured to use the SDN plugin:
* *`NetworkPolicy`*: {SMProductName} creates a `NetworkPolicy` resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from {SMProductShortName}, this `NetworkPolicy` resource is deleted from the project.
+
diff --git a/modules/ossm-networkpolicy-overview.adoc b/modules/ossm-networkpolicy-overview.adoc
index 3e786cb9c4..ed157b0679 100644
--- a/modules/ossm-networkpolicy-overview.adoc
+++ b/modules/ossm-networkpolicy-overview.adoc
@@ -8,4 +8,4 @@ This module included in the following assemblies:
-{SMProductName} automatically creates and manages a number of `NetworkPolicies` resources in the {SMProductShortName} control plane and application namespaces. This is to ensure that applications and the control plane can communicate with each other.
+{SMProductName} automatically creates and manages a number of `NetworkPolicy` resources in the {SMProductShortName} control plane and application namespaces. This ensures that applications and the control plane can communicate with each other.
-For example, if you have configured your {product-title} cluster to use the SDN plug-in, {SMProductName} creates a `NetworkPolicy` resource in each member project. This enables ingress to all pods in the mesh from the other mesh members and the control plane. This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a `NetworkPolicy` to allow that traffic through. If you remove a namespace from {SMProductShortName}, this `NetworkPolicy` resource is deleted from the project.
+For example, if you have configured your {product-title} cluster to use the SDN plugin, {SMProductName} creates a `NetworkPolicy` resource in each member project. This enables ingress to all pods in the mesh from the other mesh members and the control plane. This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a `NetworkPolicy` to allow that traffic through. If you remove a namespace from {SMProductShortName}, this `NetworkPolicy` resource is deleted from the project.
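+
+As an illustrative sketch, allowing ingress from a specific non-member project might look similar to the following; the namespace names are placeholders:
+
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-ingress-from-non-member
+  namespace: bookinfo
+spec:
+  podSelector: {}
+  ingress:
+  - from:
+    - namespaceSelector:
+        matchLabels:
+          kubernetes.io/metadata.name: my-external-project
+----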
diff --git a/modules/ossm-rn-deprecated-features.adoc b/modules/ossm-rn-deprecated-features.adoc
index daeda3f68a..8d529cb738 100644
--- a/modules/ossm-rn-deprecated-features.adoc
+++ b/modules/ossm-rn-deprecated-features.adoc
@@ -42,7 +42,7 @@ This release marks the end of support for {SMProductShortName} control planes ba
-In Service Mesh 2.1, the Mixer component is removed. Bug fixes and support is provided through the end of the Service Mesh 2.0 life cycle.
+In Service Mesh 2.1, the Mixer component is removed. Bug fixes and support are provided through the end of the Service Mesh 2.0 life cycle.
-Upgrading from a Service Mesh 2.0.x release to 2.1 will not proceed if Mixer plug-ins are enabled. Mixer plug-ins must be ported to WebAssembly Extensions.
+Upgrading from a Service Mesh 2.0.x release to 2.1 will not proceed if Mixer plugins are enabled. Mixer plugins must be ported to WebAssembly Extensions.
== Deprecated features {SMProductName} 2.0
diff --git a/modules/ossm-rn-fixed-issues.adoc b/modules/ossm-rn-fixed-issues.adoc
index 5169889af1..54228031a0 100644
--- a/modules/ossm-rn-fixed-issues.adoc
+++ b/modules/ossm-rn-fixed-issues.adoc
@@ -97,7 +97,7 @@ Workaround: Manually create the `NetworkPolicy` in the namespace.
** Prometheus
** Sidecar injector
-* link:https://issues.redhat.com/browse/MAISTRA-2378[MAISTRA-2378] When the cluster is configured to use OpenShift SDN with `ovs-multitenant` and the mesh contains a large number of namespaces (200+), the {product-title} networking plug-in is unable to configure the namespaces quickly. {SMProductShortName} times out causing namespaces to be continuously dropped from the service mesh and then reenlisted.
+* link:https://issues.redhat.com/browse/MAISTRA-2378[MAISTRA-2378] When the cluster is configured to use OpenShift SDN with `ovs-multitenant` and the mesh contains a large number of namespaces (200+), the {product-title} networking plugin is unable to configure the namespaces quickly. {SMProductShortName} times out causing namespaces to be continuously dropped from the service mesh and then reenlisted.
-* link:https://issues.redhat.com/browse/MAISTRA-2370[MAISTRA-2370] Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the go routine.
+* link:https://issues.redhat.com/browse/MAISTRA-2370[MAISTRA-2370] Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the goroutine.
diff --git a/modules/ossm-rn-new-features.adoc b/modules/ossm-rn-new-features.adoc
index 190c3a71b6..7fb960a6e0 100644
--- a/modules/ossm-rn-new-features.adoc
+++ b/modules/ossm-rn-new-features.adoc
@@ -548,9 +548,9 @@ The OVN-Kubernetes Container Network Interface (CNI) was previously introduced a
=== Service Mesh WebAssembly (WASM) Extensions
-The `ServiceMeshExtensions` Custom Resource Definition (CRD), first introduced in 2.0 as Technology Preview, is now generally available. You can use CRD to build your own plug-ins, but Red Hat does not provide support for the plugins you create.
+The `ServiceMeshExtensions` Custom Resource Definition (CRD), first introduced in 2.0 as Technology Preview, is now generally available. You can use the CRD to build your own plugins, but Red Hat does not provide support for the plugins you create.
-Mixer has been completely removed in Service Mesh 2.1. Upgrading from a Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled. Mixer plug-ins will need to be ported to WebAssembly Extensions.
+Mixer has been completely removed in Service Mesh 2.1. Upgrading from a Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled. Mixer plugins will need to be ported to WebAssembly Extensions.
=== 3scale WebAssembly Adapter (WASM)
diff --git a/modules/ossm-supported-configurations.adoc b/modules/ossm-supported-configurations.adoc
index 09dfd8d16f..569ce75854 100644
--- a/modules/ossm-supported-configurations.adoc
+++ b/modules/ossm-supported-configurations.adoc
@@ -35,7 +35,7 @@ Explicitly unsupported cases include:
* OpenShift-SDN
* OVN-Kubernetes is supported on {product-title} 4.7.32+, {product-title} 4.8.12+, and {product-title} 4.9+.
-* Third-Party Container Network Interface (CNI) plug-ins that have been certified on {product-title} and passed {SMProductShortName} conformance testing. See link:https://access.redhat.com/articles/5436171[Certified OpenShift CNI Plug-ins] for more information.
+* Third-Party Container Network Interface (CNI) plugins that have been certified on {product-title} and passed {SMProductShortName} conformance testing. See link:https://access.redhat.com/articles/5436171[Certified OpenShift CNI Plug-ins] for more information.
[id="ossm-supported-configurations-sm_{context}"]
== Supported configurations for Service Mesh
diff --git a/modules/ossm-threescale-integrate.adoc b/modules/ossm-threescale-integrate.adoc
index db061da452..56693a894d 100644
--- a/modules/ossm-threescale-integrate.adoc
+++ b/modules/ossm-threescale-integrate.adoc
@@ -17,7 +17,7 @@ You can use these examples to configure requests to your services using the 3sca
* Enabling backend cache requires 3scale 2.9 or greater
* {SMProductName} prerequisites
-* Ensure Mixer policy enforcement is enabled. Update Mixer policy enforcement section provides instructions to check the current Mixer policy enforcement status and enable policy enforcement.
+* Ensure Mixer policy enforcement is enabled. The _Update Mixer policy enforcement_ section provides instructions to check the current Mixer policy enforcement status and enable policy enforcement.
-* Mixer policy and telemetry must be enabled if you are using a mixer plug-in.
+* Mixer policy and telemetry must be enabled if you are using a Mixer plugin.
** You will need to properly configure the Service Mesh Control Plane (SMCP) when upgrading.
[NOTE]
diff --git a/modules/ossm-threescale-webassembly-module-examples-for-credentials-use-cases.adoc b/modules/ossm-threescale-webassembly-module-examples-for-credentials-use-cases.adoc
index 780823e820..7f6c5ad3bb 100644
--- a/modules/ossm-threescale-webassembly-module-examples-for-credentials-use-cases.adoc
+++ b/modules/ossm-threescale-webassembly-module-examples-for-credentials-use-cases.adoc
@@ -178,9 +178,9 @@ spec:
http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs
----
-When you apply the `RequestAuthentication`, it configures `Envoy` with a link:https://www.envoyproxy.io/docs/envoy/v1.19.0/api-v3/extensions/filters/http/jwt_authn/v3/config.proto.html[native plug-in] to validate `JWT` tokens. The proxy validates everything before running the module so any requests that fail do not make it to the 3scale WebAssembly module.
+When you apply the `RequestAuthentication`, it configures `Envoy` with a link:https://www.envoyproxy.io/docs/envoy/v1.19.0/api-v3/extensions/filters/http/jwt_authn/v3/config.proto.html[native plugin] to validate `JWT` tokens. The proxy validates everything before running the module so any requests that fail do not make it to the 3scale WebAssembly module.
-When a `JWT` token is validated, the proxy stores its contents in an internal metadata object, with an entry whose key depends on the specific configuration of the plug-in. This use case gives you the ability to look up structure objects with a single entry containing an unknown key name.
+When a `JWT` token is validated, the proxy stores its contents in an internal metadata object, with an entry whose key depends on the specific configuration of the plugin. This use case gives you the ability to look up structure objects with a single entry containing an unknown key name.
The 3scale `app_id` for OIDC matches the OAuth `client_id`. This is found in the `azp` or `aud` fields of `JWT` tokens.
@@ -202,7 +202,7 @@ credentials:
head: 1
----
-The example instructs the module to use the `filter` source type to look up filter metadata for an object from the `Envoy`-specific `JWT` authentication native plug-in. This plug-in includes the `JWT` token as part of a structure object with a single entry and a pre-configured name. Use `0` to specify that you will only access the single entry.
+The example instructs the module to use the `filter` source type to look up filter metadata for an object from the `Envoy`-specific `JWT` authentication native plugin. This plugin includes the `JWT` token as part of a structure object with a single entry and a pre-configured name. Use `0` to specify that you will only access the single entry.
The resulting value is a structure for which you will resolve two fields:
diff --git a/modules/ossm-vs-istio-1x.adoc b/modules/ossm-vs-istio-1x.adoc
index ebfeddd031..1160613290 100644
--- a/modules/ossm-vs-istio-1x.adoc
+++ b/modules/ossm-vs-istio-1x.adoc
@@ -81,9 +81,9 @@ spec:
* Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in {SMProductName}. The Istio implementation depends on a nodeagent container that uses hostPath mounts.
[id="ossm-cni_{context}"]
-== Istio Container Network Interface (CNI) plug-in
+== Istio Container Network Interface (CNI) plugin
-{SMProductName} includes CNI plug-in, which provides you with an alternate way to configure application pod networking. The CNI plug-in replaces the `init-container` network configuration eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges.
+{SMProductName} includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the `init-container` network configuration, eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges.
[id="ossm-routes-gateways_{context}"]
== Routes for Istio Gateways
diff --git a/modules/ossm-vs-istio.adoc b/modules/ossm-vs-istio.adoc
index d052d157f3..491d93d5f4 100644
--- a/modules/ossm-vs-istio.adoc
+++ b/modules/ossm-vs-istio.adoc
@@ -114,9 +114,9 @@ You can deploy virtual machines to OpenShift using OpenShift Virtualization. The
{SMProductName} does not support QUIC-based services.
[id="ossm-cni_{context}"]
-== Istio Container Network Interface (CNI) plug-in
+== Istio Container Network Interface (CNI) plugin
-{SMProductName} includes CNI plug-in, which provides you with an alternate way to configure application pod networking. The CNI plug-in replaces the `init-container` network configuration eliminating the need to grant service accounts and projects access to security context constraints (SCCs) with elevated privileges.
+{SMProductName} includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the `init-container` network configuration, eliminating the need to grant service accounts and projects access to security context constraints (SCCs) with elevated privileges.
[id="ossm-global-mtls_{context}"]
== Global mTLS settings
diff --git a/modules/persistent-storage-csi-about.adoc b/modules/persistent-storage-csi-about.adoc
index 0a51a93eca..40ddc62683 100644
--- a/modules/persistent-storage-csi-about.adoc
+++ b/modules/persistent-storage-csi-about.adoc
@@ -10,6 +10,6 @@
:_content-type: CONCEPT
[id="csi-about_{context}"]
= About CSI
-Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code.
+Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
-CSI Operators give {product-title} users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins.
+CSI Operators give {product-title} users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
diff --git a/modules/persistent-storage-csi-drivers-supported.adoc b/modules/persistent-storage-csi-drivers-supported.adoc
index f752e49558..853f613cd0 100644
--- a/modules/persistent-storage-csi-drivers-supported.adoc
+++ b/modules/persistent-storage-csi-drivers-supported.adoc
@@ -5,7 +5,7 @@
[id="csi-drivers-supported_{context}"]
= CSI drivers supported by {product-title}
-{product-title} installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plug-ins.
+{product-title} installs certain CSI drivers by default, giving users storage options that are not possible with in-tree volume plugins.
To create CSI-provisioned persistent volumes that mount to these supported storage assets, {product-title} installs the necessary CSI driver Operator, the CSI driver, and the required storage class by default. For more details about the default namespace of the Operator and driver, see the documentation for the specific CSI Driver Operator.
diff --git a/modules/persistent-storage-csi-ebs-operator-install.adoc b/modules/persistent-storage-csi-ebs-operator-install.adoc
index 924bbbe394..e56f192e1b 100644
--- a/modules/persistent-storage-csi-ebs-operator-install.adoc
+++ b/modules/persistent-storage-csi-ebs-operator-install.adoc
@@ -5,7 +5,7 @@
[id="persistent-storage-csi-ebs-operator-install_{context}"]
= Installing the AWS Elastic Block Store CSI Driver Operator
-The AWS Elastic Block Store (EBS) Container Storage Interface (CSI) Driver Operator enables the replacement of the existing AWS EBS in-tree storage plug-in.
+The AWS Elastic Block Store (EBS) Container Storage Interface (CSI) Driver Operator enables the replacement of the existing AWS EBS in-tree storage plugin.
[IMPORTANT]
====
diff --git a/modules/persistent-storage-csi-manila-dynamic-provisioning.adoc b/modules/persistent-storage-csi-manila-dynamic-provisioning.adoc
index 26aad83de5..42fc2830b5 100644
--- a/modules/persistent-storage-csi-manila-dynamic-provisioning.adoc
+++ b/modules/persistent-storage-csi-manila-dynamic-provisioning.adoc
@@ -8,7 +8,7 @@
{product-title} installs a storage class for each available Manila share type.
-The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plug-in. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests.
+The YAML files that are created are completely decoupled from Manila and from its Container Storage Interface (CSI) plugin. As an application developer, you can dynamically provision ReadWriteMany (RWX) storage and deploy pods with applications that safely consume the storage using YAML manifests.
You can use the same pod and persistent volume claim (PVC) definitions on-premises that you use with {product-title} on AWS, GCP, Azure, and other platforms, with the exception of the storage class reference in the PVC definition.
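For example, a sketch of a PVC that requests RWX storage; the storage class name `csi-manila-gold` is illustrative and depends on the share types available in your environment:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-rwx-pvc # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-manila-gold # one storage class per Manila share type
----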
diff --git a/modules/persistent-storage-csi-migration-automatic-ga.adoc b/modules/persistent-storage-csi-migration-automatic-ga.adoc
index a74c159b96..0e2c768193 100644
--- a/modules/persistent-storage-csi-migration-automatic-ga.adoc
+++ b/modules/persistent-storage-csi-migration-automatic-ga.adoc
@@ -20,4 +20,4 @@ CSI migration for these volume types is considered generally available (GA), and
For new {product-title} 4.11, and later, installations, the default storage class is the CSI storage class. All volumes provisioned using this storage class are CSI persistent volumes (PVs).
-For clusters upgraded from 4.10, and earlier, to 4.11, and later, the CSI storage class is created, and is set as the default if no default storage class was set prior to the upgrade. In the very unlikely case that there is a storage class with the same name, the existing storage class remains unchanged. Any existing in-tree storage classes remain, and might be necessary for certain features, such as volume expansion to work for existing in-tree PVs. While storage class referencing to the in-tree storage plug-in will continue working, we recommend that you switch the default storage class to the CSI storage class.
+For clusters upgraded from 4.10 and earlier to 4.11 and later, the CSI storage class is created, and is set as the default if no default storage class was set prior to the upgrade. In the very unlikely case that there is a storage class with the same name, the existing storage class remains unchanged. Any existing in-tree storage classes remain, and might be necessary for certain features, such as volume expansion, to work for existing in-tree PVs. While storage classes that reference the in-tree storage plugin continue to work, we recommend that you switch the default storage class to the CSI storage class.
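For example, a sketch of the switch by using the `storageclass.kubernetes.io/is-default-class` annotation; the class names `gp2` and `gp2-csi` are hypothetical:

[source,terminal]
----
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
$ oc patch storageclass gp2-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
----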
diff --git a/modules/persistent-storage-csi-migration-enable.adoc b/modules/persistent-storage-csi-migration-enable.adoc
index 5a197a6e1b..94e1d48bf4 100644
--- a/modules/persistent-storage-csi-migration-enable.adoc
+++ b/modules/persistent-storage-csi-migration-enable.adoc
@@ -12,12 +12,12 @@ If you want to test Container Storage Interface (CSI) migration in development o
* Azure File
-:FeatureName: CSI automatic migration for the preceding in-tree volume plug-ins and CSI driver pairs
+:FeatureName: CSI automatic migration for the preceding in-tree volume plugin and CSI driver pairs
include::snippets/technology-preview.adoc[leveloffset=+1]
After migration, the default storage class remains the in-tree storage class.
-CSI automatic migration will be enabled by default for all storage in-tree plug-ins in a future {product-title} release, so it is highly recommended that you test it now and report any issues.
+CSI automatic migration will be enabled by default for all in-tree storage plugins in a future {product-title} release, so it is highly recommended that you test it now and report any issues.
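As a hedged sketch, migration for a specific pair can be enabled in a development cluster through the `FeatureGate` custom resource; the gate name `CSIMigrationAzureFile` is one example, and enabling a custom feature set cannot be undone and prevents cluster upgrades:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: CustomNoUpgrade # blocks future upgrades; do not use on production clusters
  customNoUpgrade:
    enabled:
    - CSIMigrationAzureFile # example gate name for one in-tree/CSI pair
----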
[NOTE]
====
diff --git a/modules/persistent-storage-csi-migration-overview-support-level.adoc b/modules/persistent-storage-csi-migration-overview-support-level.adoc
index a139f70e0d..4534d68d06 100644
--- a/modules/persistent-storage-csi-migration-overview-support-level.adoc
+++ b/modules/persistent-storage-csi-migration-overview-support-level.adoc
@@ -6,11 +6,11 @@
[id="persistent-storage-csi-migration-overview-support-level_{context}"]
= Automatic migration support level
-Certain in-tree volume plug-ins and their equivalent Container Storage Interface (CSI) driver are supported in Technology Preview (TP) status, whereas others are supported in General Availability (GA) status.
+Certain in-tree volume plugins and their equivalent Container Storage Interface (CSI) drivers are supported in Technology Preview (TP) status, whereas others are supported in General Availability (GA) status.
-The following table provides details about the support level of in-tree volume plug-ins/CSI driver pairs.
+The following table provides details about the support level of in-tree volume plugin/CSI driver pairs.
-.CSI automatic migration In-tree volume plug-ins/CSI driver pair support in {product-title}
+.CSI automatic migration in-tree volume plugin/CSI driver pair support in {product-title}
[cols=",^v,^v,^v width="100%",options="header"]
|===
|CSI drivers |Support level |CSI auto migration enabled automatically?
@@ -32,5 +32,5 @@ a|
[IMPORTANT]
====
-CSI automatic migration will be enabled by default for all storage in-tree plug-ins in a future {product-title} release, so it is highly recommended that you test it now and report any issues.
+CSI automatic migration will be enabled by default for all in-tree storage plugins in a future {product-title} release, so it is highly recommended that you test it now and report any issues.
====
diff --git a/modules/persistent-storage-csi-migration-overview.adoc b/modules/persistent-storage-csi-migration-overview.adoc
index 2753717731..df9e0ce7ed 100644
--- a/modules/persistent-storage-csi-migration-overview.adoc
+++ b/modules/persistent-storage-csi-migration-overview.adoc
@@ -6,7 +6,7 @@
[id="persistent-storage-csi-migration-overview_{context}"]
= Overview
-Volumes that are provisioned by using in-tree storage plug-ins, and that are supported by this feature, are migrated to their counterpart Container Storage Interface (CSI) drivers. This process does not perform any data migration; {product-title} only translates the persistent volume object in memory. As a result, the translated persistent volume object is not stored on disk, nor is its contents changed.
+Volumes that are provisioned by using in-tree storage plugins, and that are supported by this feature, are migrated to their counterpart Container Storage Interface (CSI) drivers. This process does not perform any data migration; {product-title} only translates the persistent volume object in memory. As a result, the translated persistent volume object is not stored on disk, nor are its contents changed.
The following in-tree to CSI drivers are supported:
@@ -32,4 +32,4 @@ a|
CSI automatic migration should be seamless. This feature does not change how you use all existing API objects: for example, `PersistentVolumes`, `PersistentVolumeClaims`, and `StorageClasses`.
-Enabling CSI automatic migration for in-tree persistent volumes (PVs) or persistent volume claims (PVCs) does not enable any new CSI driver features, such as snapshots or expansion, if the original in-tree storage plug-in did not support it.
+Enabling CSI automatic migration for in-tree persistent volumes (PVs) or persistent volume claims (PVCs) does not enable any new CSI driver features, such as snapshots or expansion, if the original in-tree storage plugin did not support it.
diff --git a/modules/persistent-storage-csi-vsphere-install-issues.adoc b/modules/persistent-storage-csi-vsphere-install-issues.adoc
index c1dfcd8dab..b18690a413 100644
--- a/modules/persistent-storage-csi-vsphere-install-issues.adoc
+++ b/modules/persistent-storage-csi-vsphere-install-issues.adoc
@@ -17,7 +17,7 @@ These instructions may not be complete, so consult the vendor or community provi
To uninstall the third-party vSphere CSI Driver:
-. Delete the third-party vSphere CSI Driver (VMware vSphere Container Storage Plug-in) Deployment and Daemonset objects.
+. Delete the third-party vSphere CSI Driver (VMware vSphere Container Storage Plugin) Deployment and DaemonSet objects.
. Delete the configmap and secret objects that were installed previously with the third-party vSphere CSI Driver.
. Delete the third-party vSphere CSI driver `CSIDriver` object:
+
diff --git a/modules/persistent-storage-flexvolume-installing.adoc b/modules/persistent-storage-flexvolume-installing.adoc
index ff2bba10fe..8e0d4f681d 100644
--- a/modules/persistent-storage-flexvolume-installing.adoc
+++ b/modules/persistent-storage-flexvolume-installing.adoc
@@ -58,7 +58,7 @@ To install the FlexVolume driver:
. Ensure that the executable file exists on all nodes in the cluster.
-. Place the executable file at the volume plug-in path:
+. Place the executable file at the volume plugin path:
`/etc/kubernetes/kubelet-plugins/volume/exec/~/`.
For example, to install the FlexVolume driver for the storage `foo`, place the
diff --git a/modules/policy-customer-responsibility.adoc b/modules/policy-customer-responsibility.adoc
index f3572932c5..871ad293a4 100644
--- a/modules/policy-customer-responsibility.adoc
+++ b/modules/policy-customer-responsibility.adoc
@@ -26,7 +26,7 @@ The customer is responsible for the applications, workloads, and data that they
|* Provision clusters with OpenShift components installed so that customers can access the OpenShift and Kubernetes APIs to deploy and manage containerized applications.
* Create clusters with image pull secrets so that customer deployments can pull images from the Red Hat Container Catalog registry.
* Provide access to OpenShift APIs that a customer can use to set up Operators to add community, third-party, and Red Hat services to the cluster.
-* Provide storage classes and plug-ins to support persistent volumes for use with customer applications.
+* Provide storage classes and plugins to support persistent volumes for use with customer applications.
* Provide a container image registry so customers can securely store application container images on the cluster to deploy and manage applications.
|* Maintain responsibility for customer and third-party applications, data, and their complete lifecycle.
* If a customer adds Red Hat, community, third-party, their own, or other services to the cluster by using Operators or external images, the customer is responsible for these services and for working with the appropriate provider (including Red Hat) to troubleshoot any issues.
diff --git a/modules/psap-driver-toolkit.adoc b/modules/psap-driver-toolkit.adoc
index 352a548083..b2939cd506 100644
--- a/modules/psap-driver-toolkit.adoc
+++ b/modules/psap-driver-toolkit.adoc
@@ -39,5 +39,5 @@ The Driver Toolkit also has several tools which are commonly needed to build and
== Purpose
Prior to the Driver Toolkit's existence, you could install kernel packages in a pod or build config on {product-title} using link:https://www.openshift.com/blog/how-to-use-entitled-image-builds-to-build-drivercontainers-with-ubi-on-openshift[entitled builds] or by installing from the kernel RPMs in the host's `machine-os-content`. The Driver Toolkit simplifies the process by removing the entitlement step, and avoids the privileged operation of accessing the machine-os-content in a pod. The Driver Toolkit can also be used by partners who have access to pre-released {product-title} versions to prebuild driver-containers for their hardware devices for future {product-title} releases.
-The Driver Toolkit is also used by the Special Resource Operator (SRO), which is currently available as a community Operator on OperatorHub. SRO supports out-of-tree and third-party kernel drivers and the support software for the underlying operating system. Users can create _recipes_ for SRO to build and deploy a driver container, as well as support software like a device plug-in, or metrics. Recipes can include a build config to build a driver container based on the Driver Toolkit, or SRO can deploy a prebuilt driver container.
+The Driver Toolkit is also used by the Special Resource Operator (SRO), which is currently available as a community Operator on OperatorHub. SRO supports out-of-tree and third-party kernel drivers and the support software for the underlying operating system. Users can create _recipes_ for SRO to build and deploy a driver container, as well as supporting software such as a device plugin or metrics. Recipes can include a build config to build a driver container based on the Driver Toolkit, or SRO can deploy a prebuilt driver container.
diff --git a/modules/psap-special-resource-operator.adoc b/modules/psap-special-resource-operator.adoc
index b65c33d85c..b8f74aa03f 100644
--- a/modules/psap-special-resource-operator.adoc
+++ b/modules/psap-special-resource-operator.adoc
@@ -6,7 +6,7 @@
[id="about-special-resource-operator_{context}"]
= About the Special Resource Operator
-The Special Resource Operator (SRO) helps you manage the deployment of kernel modules and drivers on an existing {product-title} cluster. The SRO can be used for a case as simple as building and loading a single kernel module, or as complex as deploying the driver, device plug-in, and monitoring stack for a hardware accelerator.
+The Special Resource Operator (SRO) helps you manage the deployment of kernel modules and drivers on an existing {product-title} cluster. The SRO can be used for a case as simple as building and loading a single kernel module, or as complex as deploying the driver, device plugin, and monitoring stack for a hardware accelerator.
For loading kernel modules, the SRO is designed around the use of driver containers. Driver containers are increasingly being used in cloud-native environments, especially when run on pure container operating systems, to deliver hardware drivers to the host. Driver containers extend the kernel stack beyond the out-of-the-box software and hardware features of a specific kernel. Driver containers work on various container-capable Linux distributions. With driver containers, the host operating system stays clean and there is no clash between different library versions or binaries on the host.
[NOTE]
diff --git a/modules/rosa-policy-customer-responsibility.adoc b/modules/rosa-policy-customer-responsibility.adoc
index 5473f23493..8b88f44172 100644
--- a/modules/rosa-policy-customer-responsibility.adoc
+++ b/modules/rosa-policy-customer-responsibility.adoc
@@ -26,7 +26,7 @@ The customer is responsible for the applications, workloads, and data that they
|- Provision clusters with OpenShift components installed so that customers can access the OpenShift and Kubernetes APIs to deploy and manage containerized applications.
- Create clusters with image pull secrets so that customer deployments can pull images from the Red Hat Container Catalog registry.
- Provide access to OpenShift APIs that a customer can use to set up Operators to add community, third-party, and Red Hat services to the cluster.
-- Provide storage classes and plug-ins to support persistent volumes for use with customer applications.
+- Provide storage classes and plugins to support persistent volumes for use with customer applications.
- Provide a container image registry so customers can securely store application container images on the cluster to deploy and manage applications.
|- Maintain responsibility for customer and third-party applications, data, and their complete lifecycle.
- If a customer adds Red Hat, community, third-party, their own, or other services to the cluster by using Operators or external images, the customer is responsible for these services and for working with the appropriate provider, including Red Hat, to troubleshoot any issues.
diff --git a/modules/rosa-sdpolicy-networking.adoc b/modules/rosa-sdpolicy-networking.adoc
index f380047ac8..776b85903a 100644
--- a/modules/rosa-sdpolicy-networking.adoc
+++ b/modules/rosa-sdpolicy-networking.adoc
@@ -36,7 +36,7 @@ To use a custom hostname for a route, you must update your DNS provider by creat
== Cluster ingress
Project administrators can add route annotations for many different purposes, including ingress control through IP allow-listing.
-Ingress policies can also be changed by using `NetworkPolicy` objects, which leverage the `ovs-networkpolicy` plug-in. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
+Ingress policies can also be changed by using `NetworkPolicy` objects, which leverage the `ovs-networkpolicy` plugin. This allows full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
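For example, a minimal sketch of a `NetworkPolicy` object that limits ingress to traffic from pods in the same namespace:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace # hypothetical name
spec:
  podSelector: {} # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {} # allows traffic only from pods in this namespace
----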
All cluster ingress traffic will go through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.
diff --git a/modules/security-container-content-external-scanning.adoc b/modules/security-container-content-external-scanning.adoc
index ed77da9025..db71a53014 100644
--- a/modules/security-container-content-external-scanning.adoc
+++ b/modules/security-container-content-external-scanning.adoc
@@ -217,7 +217,7 @@ annotations:
== Integration reference
In most cases, external tools such as vulnerability scanners develop a
-script or plug-in that watches for image updates, performs scanning, and
+script or plugin that watches for image updates, performs scanning, and
annotates the associated image object with the results. Typically this
automation calls the {product-title} {product-version} REST APIs to write the annotation. See
{product-title} REST APIs for general
diff --git a/modules/security-context-constraints-example.adoc b/modules/security-context-constraints-example.adoc
index 75bb76a17d..06a9b2bbde 100644
--- a/modules/security-context-constraints-example.adoc
+++ b/modules/security-context-constraints-example.adoc
@@ -100,7 +100,7 @@ the effective UID depends on the SCC that emits this pod. Because the `restricte
is granted to all authenticated users by default, it will be available to all
users and service accounts and used in most cases. The `restricted-v2` SCC uses
`MustRunAsRange` strategy for constraining and defaulting the possible values of
-the `securityContext.runAsUser` field. The admission plug-in will look for the
+the `securityContext.runAsUser` field. The admission plugin will look for the
`openshift.io/sa.scc.uid-range` annotation on the current project to populate
range fields, as it does not provide this range. In the end, a container will
have `runAsUser` equal to the first value of the range that is
diff --git a/modules/security-deploy-secrets.adoc b/modules/security-deploy-secrets.adoc
index 2c5c668105..0e40af628d 100644
--- a/modules/security-deploy-secrets.adoc
+++ b/modules/security-deploy-secrets.adoc
@@ -9,7 +9,7 @@
The `Secret` object type provides a mechanism to hold sensitive information such
as passwords, {product-title} client configuration files, `dockercfg` files,
and private source repository credentials. Secrets decouple sensitive content
-from pods. You can mount secrets into containers using a volume plug-in or the
+from pods. You can mount secrets into containers using a volume plugin or the
system can use secrets to perform actions on behalf of a pod.
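A minimal sketch of the volume-mount approach in a bare pod, assuming a secret named `test-secret` and a hypothetical image:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: secret-example-pod # hypothetical name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest # hypothetical image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume # keys appear as files under this path
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
----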
For example, to add a secret to your deployment configuration
diff --git a/modules/security-network-multiple-pod.adoc b/modules/security-network-multiple-pod.adoc
index 9aac96f39c..d3d36a15c9 100644
--- a/modules/security-network-multiple-pod.adoc
+++ b/modules/security-network-multiple-pod.adoc
@@ -6,7 +6,7 @@
= Using multiple pod networks
Each running container has only one network interface by default.
-The Multus CNI plug-in lets you create multiple CNI networks, and then
+The Multus CNI plugin lets you create multiple CNI networks, and then
attach any of those networks to your pods. In that way, you can do
things like separate private data onto a more restricted network
and have multiple network interfaces on each node.
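As a sketch, once an additional network is defined, a pod requests an extra interface through the `k8s.v1.cni.cncf.io/networks` annotation; the network name and image here are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: multi-network-pod # hypothetical name
  annotations:
    k8s.v1.cni.cncf.io/networks: restricted-net # name of an existing network attachment definition
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest # hypothetical image
----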
diff --git a/modules/security-platform-admission.adoc b/modules/security-platform-admission.adoc
index 1a1c0fa2a3..376f3bd989 100644
--- a/modules/security-platform-admission.adoc
+++ b/modules/security-platform-admission.adoc
@@ -3,24 +3,24 @@
// * security/container_security/security-platform.adoc
[id="security-platform-admission_{context}"]
-= Protecting control plane with admission plug-ins
+= Protecting the control plane with admission plugins
While RBAC controls access rules between users and groups and available projects,
-_admission plug-ins_ define access to the {product-title} master API.
-Admission plug-ins form a chain of rules that consist of:
+_admission plugins_ define access to the {product-title} master API.
+Admission plugins form a chain of rules that consist of:
-* Default admissions plug-ins: These implement a default set of
+* Default admission plugins: These implement a default set of
policies and resources limits that are applied to components of the {product-title}
control plane.
-* Mutating admission plug-ins: These plug-ins dynamically extend the admission chain.
+* Mutating admission plugins: These plugins dynamically extend the admission chain.
They call out to a webhook server and can both authenticate a request and modify the selected resource.
-* Validating admission plug-ins: These validate requests for a selected resource
+* Validating admission plugins: These validate requests for a selected resource
and can both validate the request and ensure that the resource does not change again.
-API requests go through admissions plug-ins in a chain, with any failure along
-the way causing the request to be rejected. Each admission plug-in is associated with particular resources and only
+API requests go through admission plugins in a chain, with any failure along
+the way causing the request to be rejected. Each admission plugin is associated with particular resources and only
responds to requests for those resources.
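For example, a validating webhook is registered with the API server through a `ValidatingWebhookConfiguration` object. A minimal sketch, with hypothetical names and namespace:

[source,yaml]
----
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy # hypothetical name
webhooks:
- name: validate.example.com # hypothetical webhook name
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["deployments"] # only these resources reach this webhook
  clientConfig:
    service:
      namespace: example-ns # hypothetical namespace
      name: example-webhook # hypothetical service
      path: /validate
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail # a failed webhook call rejects the request
----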
[id="security-deployment-sccs_{context}"]
diff --git a/modules/security-storage-persistent.adoc b/modules/security-storage-persistent.adoc
index d7a8b87041..a576954f56 100644
--- a/modules/security-storage-persistent.adoc
+++ b/modules/security-storage-persistent.adoc
@@ -3,14 +3,14 @@
// * security/container_security/security-storage.adoc
[id="security-network-storage-persistent_{context}"]
-= Persistent volume plug-ins
+= Persistent volume plugins
Containers are useful for both stateless and stateful applications.
Protecting attached storage is a key element of securing stateful services.
Using the Container Storage Interface (CSI), {product-title} can
incorporate storage from any storage back end that supports the CSI interface.
-{product-title} provides plug-ins for multiple types of storage, including:
+{product-title} provides plugins for multiple types of storage, including:
* {rh-storage-first} *
* AWS Elastic Block Stores (EBS) *
@@ -25,7 +25,7 @@ incorporate storage from any storage back end that supports the CSI interface.
* Fibre Channel
* iSCSI
-Plug-ins for those storage types with dynamic provisioning are marked with
+Plugins for those storage types with dynamic provisioning are marked with
an asterisk (*). Data in transit is encrypted via HTTPS for all
{product-title} components communicating with each other.
diff --git a/modules/serverless-build-events-kn.adoc b/modules/serverless-build-events-kn.adoc
index 6fc753747b..e1ce6feb82 100644
--- a/modules/serverless-build-events-kn.adoc
+++ b/modules/serverless-build-events-kn.adoc
@@ -1,6 +1,6 @@
:_content-type: PROCEDURE
[id="serverless-build-events-kn_{context}"]
-= Building events by using the kn-event plug-in
+= Building events by using the kn-event plugin
You can use the builder-like interface of the `kn event build` command to build an event. You can then send that event at a later time or use it in another context.
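A sketch of the builder-like interface, with hypothetical attribute values:

[source,terminal]
----
$ kn event build \
    --type com.example.order.created \
    --id order-123 \
    --field message="Hello" \
    --output json
----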
diff --git a/modules/serverless-kn-config.adoc b/modules/serverless-kn-config.adoc
index 53136e360a..6240a3668e 100644
--- a/modules/serverless-kn-config.adoc
+++ b/modules/serverless-kn-config.adoc
@@ -28,8 +28,8 @@ eventing:
version: v1 <6>
resource: services <7>
----
-<1> Specifies whether the Knative CLI should look for plug-ins in the `PATH` environment variable. This is a boolean configuration option. The default value is `false`.
-<2> Specifies the directory where the Knative CLI will look for plug-ins. The default path depends on the operating system, as described above. This can be any directory that is visible to the user.
+<1> Specifies whether the Knative CLI should look for plugins in the `PATH` environment variable. This is a boolean configuration option. The default value is `false`.
+<2> Specifies the directory where the Knative CLI will look for plugins. The default path depends on the operating system, as described above. This can be any directory that is visible to the user.
<3> The `sink-mappings` spec defines the Kubernetes addressable resource that is used when you use the `--sink` flag with a `kn` CLI command.
<4> The prefix you want to use to describe your sink. `svc` for a service, `channel`, and `broker` are predefined prefixes in `kn`.
<5> The API group of the Kubernetes resource.
diff --git a/modules/serverless-rn-1-18-0.adoc b/modules/serverless-rn-1-18-0.adoc
index 39970f4e8d..ac8f45ab48 100644
--- a/modules/serverless-rn-1-18-0.adoc
+++ b/modules/serverless-rn-1-18-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 0.24.0.
* {ServerlessProductName} now uses Knative (`kn`) CLI 0.24.0.
* {ServerlessProductName} now uses Knative Kafka 0.24.7.
-* The `kn func` CLI plug-in now uses `func` 0.18.0.
+* The `kn func` CLI plugin now uses `func` 0.18.0.
* In the upcoming {ServerlessProductName} 1.19.0 release, the URL scheme of external routes will default to HTTPS for enhanced security.
+
If you do not want this change to apply for your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your `KnativeServing` custom resource (CR):
diff --git a/modules/serverless-rn-1-19-0.adoc b/modules/serverless-rn-1-19-0.adoc
index bb25fe9036..fd1f507129 100644
--- a/modules/serverless-rn-1-19-0.adoc
+++ b/modules/serverless-rn-1-19-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 0.25.
* {ServerlessProductName} now uses Knative (`kn`) CLI 0.25.
* {ServerlessProductName} now uses Knative Kafka 0.25.
-* The `kn func` CLI plug-in now uses `func` 0.19.
+* The `kn func` CLI plugin now uses `func` 0.19.
* The `KafkaBinding` API is deprecated in {ServerlessProductName} 1.19.0 and will be removed in a future release.
diff --git a/modules/serverless-rn-1-20-0.adoc b/modules/serverless-rn-1-20-0.adoc
index 9ffe8aa3dd..d5ce1518a2 100644
--- a/modules/serverless-rn-1-20-0.adoc
+++ b/modules/serverless-rn-1-20-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 0.26.
* {ServerlessProductName} now uses Knative (`kn`) CLI 0.26.
* {ServerlessProductName} now uses Knative Kafka 0.26.
-* The `kn func` CLI plug-in now uses `func` 0.20.
+* The `kn func` CLI plugin now uses `func` 0.20.
* The Kafka broker is now available as a Technology Preview.
+
@@ -25,7 +25,7 @@
The Kafka broker, which is currently in Technology Preview, is not supported on FIPS.
====
-* The `kn event` plug-in is now available as a Technology Preview.
+* The `kn event` plugin is now available as a Technology Preview.
* The `--min-scale` and `--max-scale` flags for the `kn service create` command have been deprecated. Use the `--scale-min` and `--scale-max` flags instead.
diff --git a/modules/serverless-rn-1-21-0.adoc b/modules/serverless-rn-1-21-0.adoc
index ba4af30d5e..38f4573515 100644
--- a/modules/serverless-rn-1-21-0.adoc
+++ b/modules/serverless-rn-1-21-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 1.0.
* {ServerlessProductName} now uses Knative (`kn`) CLI 1.0.
* {ServerlessProductName} now uses Knative Kafka 1.0.
-* The `kn func` CLI plug-in now uses `func` 0.21.
+* The `kn func` CLI plugin now uses `func` 0.21.
* The Kafka sink is now available as a Technology Preview.
* The Knative open source project has begun to deprecate camel-cased configuration keys in favor of using kebab-cased keys consistently. As a result, the `defaultExternalScheme` key, previously mentioned in the {ServerlessProductName} 1.18.0 release notes, is now deprecated and replaced by the `default-external-scheme` key. Usage instructions for the key remain the same.
diff --git a/modules/serverless-rn-1-22-0.adoc b/modules/serverless-rn-1-22-0.adoc
index 5f845238d1..3ada7a94dc 100644
--- a/modules/serverless-rn-1-22-0.adoc
+++ b/modules/serverless-rn-1-22-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 1.1.
* {ServerlessProductName} now uses Knative (`kn`) CLI 1.1.
* {ServerlessProductName} now uses Knative Kafka 1.1.
-* The `kn func` CLI plug-in now uses `func` 0.23.
+* The `kn func` CLI plugin now uses `func` 0.23.
* Init containers support for Knative services is now available as a Technology Preview.
* Persistent volume claim (PVC) support for Knative services is now available as a Technology Preview.
* The `knative-serving`, `knative-serving-ingress`, `knative-eventing` and `knative-kafka` system namespaces now have the `knative.openshift.io/part-of: "openshift-serverless"` label by default.
diff --git a/modules/serverless-rn-1-23-0.adoc b/modules/serverless-rn-1-23-0.adoc
index 5b211bd373..c576b1070d 100644
--- a/modules/serverless-rn-1-23-0.adoc
+++ b/modules/serverless-rn-1-23-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 1.2.
* {ServerlessProductName} now uses Knative (`kn`) CLI 1.2.
* {ServerlessProductName} now uses Knative Kafka 1.2.
-* The `kn func` CLI plug-in now uses `func` 0.24.
+* The `kn func` CLI plugin now uses `func` 0.24.
* It is now possible to use the `kafka.eventing.knative.dev/external.topic` annotation with the Kafka broker. This annotation makes it possible to use an existing externally managed topic instead of the broker creating its own internal topic.
diff --git a/modules/serverless-rn-1-24-0.adoc b/modules/serverless-rn-1-24-0.adoc
index fc9310cf9b..118ff6d5d1 100644
--- a/modules/serverless-rn-1-24-0.adoc
+++ b/modules/serverless-rn-1-24-0.adoc
@@ -16,7 +16,7 @@
* {ServerlessProductName} now uses Kourier 1.3.
* {ServerlessProductName} now uses Knative `kn` CLI 1.3.
* {ServerlessProductName} now uses Knative Kafka 1.3.
-* The `kn func` CLI plug-in now uses `func` 0.24.
+* The `kn func` CLI plugin now uses `func` 0.24.
* Init containers support for Knative services is now generally available (GA).
diff --git a/modules/serverless-rn-1-25-0.adoc b/modules/serverless-rn-1-25-0.adoc
index 638056b17f..b90f2bf4af 100644
--- a/modules/serverless-rn-1-25-0.adoc
+++ b/modules/serverless-rn-1-25-0.adoc
@@ -16,9 +16,9 @@
* {ServerlessProductName} now uses Kourier 1.4.
* {ServerlessProductName} now uses Knative (`kn`) CLI 1.4.
* {ServerlessProductName} now uses Knative Kafka 1.4.
-* The `kn func` CLI plug-in now uses `func` 1.7.0.
+* The `kn func` CLI plugin now uses `func` 1.7.0.
-* Integrated development environment (IDE) plug-ins for creating and deploying functions are now available for link:https://github.com/redhat-developer/vscode-knative[Visual Studio Code] and link:https://github.com/redhat-developer/intellij-knative[IntelliJ].
+* Integrated development environment (IDE) plugins for creating and deploying functions are now available for link:https://github.com/redhat-developer/vscode-knative[Visual Studio Code] and link:https://github.com/redhat-developer/intellij-knative[IntelliJ].
* Knative Kafka broker is now GA. Knative Kafka broker is a highly performant implementation of the Knative broker API, directly targeting Apache Kafka.
+
It is recommended that you use the Knative Kafka broker instead of the MT-Channel-Broker.
diff --git a/modules/serverless-rn-1-26-0.adoc b/modules/serverless-rn-1-26-0.adoc
index 928991236e..4a0ff9480f 100644
--- a/modules/serverless-rn-1-26-0.adoc
+++ b/modules/serverless-rn-1-26-0.adoc
@@ -18,7 +18,7 @@
* {ServerlessProductName} now uses Knative (`kn`) CLI 1.5.
* {ServerlessProductName} now uses Knative Kafka 1.5.
* {ServerlessProductName} now uses Knative Operator 1.3.
-* The `kn func` CLI plug-in now uses `func` 1.8.1.
+* The `kn func` CLI plugin now uses `func` 1.8.1.
* Persistent volume claims (PVCs) are now GA. PVCs provide permanent data storage for your Knative services.
diff --git a/modules/serverless-send-events-kn.adoc b/modules/serverless-send-events-kn.adoc
index 8413fa20f8..472a14cbae 100644
--- a/modules/serverless-send-events-kn.adoc
+++ b/modules/serverless-send-events-kn.adoc
@@ -1,6 +1,6 @@
:_content-type: PROCEDURE
[id="serverless-send-events-kn_{context}"]
-= Sending events by using the kn-event plug-in
+= Sending events by using the kn-event plugin
You can use the `kn event send` command to send an event. The events can be sent either to publicly available addresses or to addressable resources inside a cluster, such as Kubernetes services, as well as Knative services, brokers, and channels. The command uses the same builder-like interface as the `kn event build` command.
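For example, a sketch that sends an event to a Knative service; the event type, field, and service name are hypothetical:

[source,terminal]
----
$ kn event send \
    --type com.example.order.created \
    --field message="Hello" \
    --to Service:serving.knative.dev/v1:event-display
----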
diff --git a/modules/setting-up-vmc-for-vsphere.adoc b/modules/setting-up-vmc-for-vsphere.adoc
index 2cfc91bd12..4cfb4952f9 100644
--- a/modules/setting-up-vmc-for-vsphere.adoc
+++ b/modules/setting-up-vmc-for-vsphere.adoc
@@ -73,7 +73,7 @@ It is recommended to move your vSphere cluster to the VMC `Compute-ResourcePool`
[NOTE]
====
-You cannot use the VMware NSX Container Plug-in for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with {product-title}.
+You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with {product-title}.
However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated {product-title} deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the {product-title} cluster and between the bastion host and the VMC vSphere hosts.
====
diff --git a/modules/storage-persistent-storage-block-volume.adoc b/modules/storage-persistent-storage-block-volume.adoc
index 3c3b16d62c..b57dfd5a0e 100644
--- a/modules/storage-persistent-storage-block-volume.adoc
+++ b/modules/storage-persistent-storage-block-volume.adoc
@@ -21,12 +21,12 @@ PV and PVC specification.
Pods using raw block volumes must be configured to allow privileged containers.
====
-The following table displays which volume plug-ins support block volumes.
+The following table displays which volume plugins support block volumes.
.Block volume support
[cols="1,1,1,1", width="100%",options="header"]
|===
-|Volume Plug-in |Manually provisioned |Dynamically provisioned |Fully supported
+|Volume plugin |Manually provisioned |Dynamically provisioned |Fully supported
|AWS EBS | ✅ | ✅ | ✅
|Azure Disk | ✅ | ✅ | ✅
|Azure File | | |
diff --git a/modules/storage-persistent-storage-nfs-provisioning.adoc b/modules/storage-persistent-storage-nfs-provisioning.adoc
index 8420a4d447..6d4945bd3b 100644
--- a/modules/storage-persistent-storage-nfs-provisioning.adoc
+++ b/modules/storage-persistent-storage-nfs-provisioning.adoc
@@ -36,7 +36,7 @@ pod` commands.
<3> Though this appears to be related to controlling access to the volume,
it is actually used similarly to labels and used to match a PVC to a PV.
Currently, no access rules are enforced based on the `accessModes`.
-<4> The volume type being used, in this case the `nfs` plug-in.
+<4> The volume type being used, in this case the `nfs` plugin.
<5> The path that is exported by the NFS server.
<6> The hostname or IP address of the NFS server.
<7> The reclaim policy for the PV. This defines what happens to a volume
diff --git a/modules/storage-persistent-storage-nfs-reclaiming-resources.adoc b/modules/storage-persistent-storage-nfs-reclaiming-resources.adoc
index f113f60540..34cb6a5a48 100644
--- a/modules/storage-persistent-storage-nfs-reclaiming-resources.adoc
+++ b/modules/storage-persistent-storage-nfs-reclaiming-resources.adoc
@@ -4,7 +4,7 @@
[id="nfs-reclaiming-resources_{context}"]
= Reclaiming resources
-NFS implements the {product-title} `Recyclable` plug-in interface. Automatic
+NFS implements the {product-title} `Recyclable` plugin interface. Automatic
processes handle reclamation tasks based on policies set on each persistent
volume.
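The policy is set on the volume itself; a sketch of an NFS PV that opts in to recycling, with hypothetical server and export values:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-recycle-pv # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs.example.com # hypothetical NFS server
    path: /exports/share # hypothetical export
  persistentVolumeReclaimPolicy: Recycle # volume is scrubbed and reused after release
----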
diff --git a/modules/storage-persistent-storage-nfs-volume-security.adoc b/modules/storage-persistent-storage-nfs-volume-security.adoc
index 2a3209e291..1463273802 100644
--- a/modules/storage-persistent-storage-nfs-volume-security.adoc
+++ b/modules/storage-persistent-storage-nfs-volume-security.adoc
@@ -10,12 +10,12 @@ SELinux considerations. The user is expected to understand the basics of
POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the
-NFS volume plug-in directly in the `volumes` section of their `Pod`
+NFS volume plugin directly in the `volumes` section of their `Pod`
definition.
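A sketch of the direct reference, as a `volumes` fragment of a `Pod` definition with hypothetical server and export values:

[source,yaml]
----
volumes:
- name: nfs-data
  nfs:
    server: nfs.example.com # hypothetical NFS server
    path: /exports/app-data # hypothetical export
----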
The `/etc/exports` file on the NFS server contains the accessible NFS
directories. The target NFS directory has POSIX owner and group IDs. The
-{product-title} NFS plug-in mounts the container's NFS directory with the
+{product-title} NFS plugin mounts the container's NFS directory with the
same POSIX ownership and permissions found on the exported NFS directory.
However, the container is not run with its effective UID equal to the
owner of the NFS mount, which is the desired behavior.
diff --git a/modules/storage-persistent-storage-pv.adoc b/modules/storage-persistent-storage-pv.adoc
index 45380849c3..009b827ed0 100644
--- a/modules/storage-persistent-storage-pv.adoc
+++ b/modules/storage-persistent-storage-pv.adoc
@@ -140,7 +140,7 @@ ifndef::microshift[]
.Supported access modes for PVs
[cols=",^v,^v,^v", width="100%",options="header"]
|===
-|Volume plug-in |ReadWriteOnce ^[1]^ |ReadOnlyMany |ReadWriteMany
+|Volume plugin |ReadWriteOnce ^[1]^ |ReadOnlyMany |ReadWriteMany
ifdef::microshift[]
|Local volume| ✅ | - | -
endif::[]
diff --git a/modules/support-collecting-network-trace.adoc b/modules/support-collecting-network-trace.adoc
index 6b6c37f310..3a0551d561 100644
--- a/modules/support-collecting-network-trace.adoc
+++ b/modules/support-collecting-network-trace.adoc
@@ -52,7 +52,7 @@ $ oc debug node/my-cluster-node
# ip ad
----
-. Start a `toolbox` container, which includes the required binaries and plug-ins to run `sosreport`:
+. Start a `toolbox` container, which includes the required binaries and plugins to run `sosreport`:
+
[source,terminal]
----
diff --git a/modules/support-generating-a-sosreport-archive.adoc b/modules/support-generating-a-sosreport-archive.adoc
index 28efac43fd..f489ba12b3 100644
--- a/modules/support-generating-a-sosreport-archive.adoc
+++ b/modules/support-generating-a-sosreport-archive.adoc
@@ -64,7 +64,7 @@ endif::openshift-dedicated[]
{product-title} {product-version} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as _accessed_. However, if the {product-title} API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@..` instead.
====
-. Start a `toolbox` container, which includes the required binaries and plug-ins to run `sosreport`:
+. Start a `toolbox` container, which includes the required binaries and plugins to run `sosreport`:
+
[source,terminal]
----
@@ -73,17 +73,17 @@ endif::openshift-dedicated[]
+
[NOTE]
====
-If an existing `toolbox` pod is already running, the `toolbox` command outputs `'toolbox-' already exists. Trying to start...`. Remove the running toolbox container with `podman rm toolbox-` and spawn a new toolbox container, to avoid issues with `sosreport` plug-ins.
+If an existing `toolbox` pod is already running, the `toolbox` command outputs `'toolbox-' already exists. Trying to start...`. Remove the running toolbox container with `podman rm toolbox-` and spawn a new toolbox container, to avoid issues with `sosreport` plugins.
====
+
. Collect a `sosreport` archive.
-.. Run the `sosreport` command and enable the `crio.all` and `crio.logs` CRI-O container engine `sosreport` plug-ins:
+.. Run the `sosreport` command and enable the `crio.all` and `crio.logs` CRI-O container engine `sosreport` plugins:
+
[source,terminal]
----
# sosreport -k crio.all=on -k crio.logs=on <1>
----
-<1> `-k` enables you to define `sosreport` plug-in parameters outside of the defaults.
+<1> `-k` enables you to define `sosreport` plugin parameters outside of the defaults.
+
.. Press *Enter* when prompted, to continue.
+
diff --git a/modules/support-starting-an-alternative-image-with-toolbox.adoc b/modules/support-starting-an-alternative-image-with-toolbox.adoc
index 07b821ed4e..a6b489df59 100644
--- a/modules/support-starting-an-alternative-image-with-toolbox.adoc
+++ b/modules/support-starting-an-alternative-image-with-toolbox.adoc
@@ -53,5 +53,5 @@ TOOLBOX_NAME=toolbox-fedora-33 <3>
+
[NOTE]
====
-If an existing `toolbox` pod is already running, the `toolbox` command outputs `'toolbox-' already exists. Trying to start...`. Remove the running toolbox container with `podman rm toolbox-` and spawn a new toolbox container, to avoid issues with `sosreport` plug-ins.
+If an existing `toolbox` pod is already running, the `toolbox` command outputs `'toolbox-' already exists. Trying to start...`. Remove the running toolbox container with `podman rm toolbox-` and spawn a new toolbox container, to avoid issues with `sosreport` plugins.
====
diff --git a/modules/troubleshooting-dynamic-plug-in.adoc b/modules/troubleshooting-dynamic-plug-in.adoc
index 0b38305497..f802b2deb0 100644
--- a/modules/troubleshooting-dynamic-plug-in.adoc
+++ b/modules/troubleshooting-dynamic-plug-in.adoc
@@ -4,42 +4,42 @@
:_content-type: REFERENCE
[id="troubleshooting-dynamic-plug-in_{context}"]
-= Troubleshooting your dynamic plug-in
+= Troubleshooting your dynamic plugin
-Refer to this list of troubleshooting tips if you run into issues loading your plug-in.
+Refer to this list of troubleshooting tips if you run into issues loading your plugin.
-* Verify that you have enabled your plug-in in the console Operator configuration and your plug-in name is the output by running the following command:
+* Verify that you have enabled your plugin in the console Operator configuration and that your plugin name is listed in the output of the following command:
+
[source,terminal]
----
$ oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'
----
-** Verify the enabled plug-ins on the status card of the *Overview* page in the *Administrator perspective*. You will need to refresh your browser if the plug-in was recently enabled.
+** Verify the enabled plugins on the status card of the *Overview* page in the *Administrator perspective*. You will need to refresh your browser if the plugin was recently enabled.
-* Verify your plug-in service is healthy by:
-** Verifying your plug-in pod status is running and your containers are ready.
+* Verify your plugin service is healthy by:
+** Verifying your plugin pod status is running and your containers are ready.
** Verifying the service label selector matches the pod and the target port is correct.
** Curl the `plugin-manifest.json` from the service in a terminal on the console pod or another pod on the cluster.
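+
For example, a hedged check from the console pod or another pod, assuming the service name, namespace, and port of your plugin (placeholders, not real values):
+
[source,terminal]
----
$ curl -k https://<plugin-service-name>.<namespace>.svc.cluster.local:<port>/plugin-manifest.json
----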
-* Verify your `ConsolePlugin` resource name (`consolePlugin.name`) matches the plug-in name used in `package.json`.
+* Verify your `ConsolePlugin` resource name (`consolePlugin.name`) matches the plugin name used in `package.json`.
* Verify your service name, namespace, port, and path are declared correctly in the `ConsolePlugin` resource.
-* Verify your plug-in service uses HTTPS and service serving certificates.
+* Verify your plugin service uses HTTPS and service serving certificates.
* Verify any certificates or connection errors in the console pod logs.
-* Verify the feature flag your plug-in relys on is not disabled.
+* Verify that the feature flag your plugin relies on is not disabled.
-* Verify your plug-in does not have any `consolePlugin.dependencies` in `package.json` that are not met.
-** This can include console version dependencies or dependencies on other plug-ins. Filter the JS console in your browser for your plug-in's name to see messages that are logged.
+* Verify your plugin does not have any `consolePlugin.dependencies` in `package.json` that are not met.
+** This can include console version dependencies or dependencies on other plugins. Filter the JS console in your browser for your plugin's name to see messages that are logged.
* Verify there are no typos in the nav extension perspective or section IDs.
-** Your plug-in may be loaded, but nav items missing if IDs are incorrect. Try navigating to a plug-in page directly by editing the URL.
+** Your plugin might be loaded, but nav items can be missing if IDs are incorrect. Try navigating to a plugin page directly by editing the URL.
-* Verify there are no network policies that are blocking traffic from the console pod to your plug-in service.
+* Verify there are no network policies that are blocking traffic from the console pod to your plugin service.
** If necessary, adjust network policies to allow console pods in the openshift-console namespace to make requests to your service.
+* Verify the list of dynamic plugins to be loaded in the *Console* tab of your browser's developer tools.
-** Evaluate `window.SERVER_FLAGS.consolePlugins` to see the dynamic plug-in on the Console frontend.
+* Verify the list of dynamic plugins to be loaded in your browser in the *Console* tab of the developer tools browser.
+** Evaluate `window.SERVER_FLAGS.consolePlugins` to see the dynamic plugin on the Console frontend.
diff --git a/modules/update-mirror-repository-oc-mirror.adoc b/modules/update-mirror-repository-oc-mirror.adoc
index 95b8ad99b8..fdb81e293d 100644
--- a/modules/update-mirror-repository-oc-mirror.adoc
+++ b/modules/update-mirror-repository-oc-mirror.adoc
@@ -4,9 +4,9 @@
:_content-type: PROCEDURE
[id="update-mirror-repository-oc-mirror_{context}"]
-= Mirroring resources using the oc-mirror plug-in
+= Mirroring resources using the oc-mirror plugin
-Use the oc-mirror OpenShift CLI (`oc`) plug-in to mirror images onto a mirror registry. Compared to using `oc adm release mirror`, the oc-mirror plug-in has the following advantages:
+Use the oc-mirror OpenShift CLI (`oc`) plugin to mirror images onto a mirror registry. Compared to using `oc adm release mirror`, the oc-mirror plugin has the following advantages:
* It is simpler to use.
@@ -16,7 +16,7 @@ Use the oc-mirror OpenShift CLI (`oc`) plug-in to mirror images onto a mirror re
.Procedure
-. Navigate to the _Mirroring images for a disconnected installation using the oc-mirror plug-in_ page of the documentation.
+. Navigate to the _Mirroring images for a disconnected installation using the oc-mirror plugin_ page of the documentation.
. Follow the instructions on that page to mirror resources onto a mirror registry.
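As a sketch, a typical invocation mirrors the content described by an image set configuration file to the target registry; the file path and registry host name are hypothetical:

[source,terminal]
----
$ oc mirror --config=./imageset-config.yaml docker://registry.example.com:5000
----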
diff --git a/modules/virt-about-nmstate.adoc b/modules/virt-about-nmstate.adoc
index 009a52a55f..8aa00a9d21 100644
--- a/modules/virt-about-nmstate.adoc
+++ b/modules/virt-about-nmstate.adoc
@@ -29,5 +29,5 @@ Node networking is monitored and updated by the following objects:
[NOTE]
====
-If your {product-title} cluster uses OVN-Kubernetes as the network plug-in, you cannot attach a Linux bridge or bonding to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN network plugin.
+If your {product-title} cluster uses OVN-Kubernetes as the network plugin, you cannot attach a Linux bridge or bonding to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN network plugin.
====
diff --git a/modules/virt-additional-scc-for-kubevirt-controller.adoc b/modules/virt-additional-scc-for-kubevirt-controller.adoc
index f5f7c693dc..33c9138e85 100644
--- a/modules/virt-additional-scc-for-kubevirt-controller.adoc
+++ b/modules/virt-additional-scc-for-kubevirt-controller.adoc
@@ -15,7 +15,7 @@ The `kubevirt-controller` service account is granted additional SCCs and Linux c
The `kubevirt-controller` service account is granted the following SCCs:
* `scc.AllowHostDirVolumePlugin = true` +
-This allows virtual machines to use the hostpath volume plug-in.
+This allows virtual machines to use the hostpath volume plugin.
* `scc.AllowPrivilegedContainer = false` +
This ensures the virt-launcher pod is not run as a privileged container.
diff --git a/modules/virt-attaching-vm-secondary-network-cli.adoc b/modules/virt-attaching-vm-secondary-network-cli.adoc
index dd1bdcdccf..f473d8ae23 100644
--- a/modules/virt-attaching-vm-secondary-network-cli.adoc
+++ b/modules/virt-attaching-vm-secondary-network-cli.adoc
@@ -46,7 +46,7 @@ spec:
----
<1> The name of the bridge interface.
<2> The name of the network. This value must match the `name` value of the corresponding `spec.template.spec.domain.devices.interfaces` entry.
-<3> The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the `default` namespace or the same namespace where the VM is to be created. In this case, `multus` is used. Multus is a cloud network interface (CNI) plug-in that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
+<3> The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the `default` namespace or the same namespace where the VM is to be created. In this case, `multus` is used. Multus is a Container Network Interface (CNI) plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
. Apply the configuration:
+
diff --git a/modules/virt-configuring-secondary-network-vm-live-migration.adoc b/modules/virt-configuring-secondary-network-vm-live-migration.adoc
index 59e9fbbe8a..4ade414b2c 100644
--- a/modules/virt-configuring-secondary-network-vm-live-migration.adoc
+++ b/modules/virt-configuring-secondary-network-vm-live-migration.adoc
@@ -12,7 +12,7 @@ To configure a dedicated secondary network for live migration, you must first cr
* You installed the OpenShift CLI (`oc`).
* You logged in to the cluster as a user with the `cluster-admin` role.
-* The Multus Container Network Interface (CNI) plug-in is installed on the cluster.
+* The Multus Container Network Interface (CNI) plugin is installed on the cluster.
* Every node on the cluster has at least two Network Interface Cards (NICs), and the NICs to be used for live migration are connected to the same VLAN.
* The virtual machine (VM) is running with the `LiveMigrate` eviction strategy.
@@ -43,7 +43,7 @@ spec:
----
<1> The name of the `NetworkAttachmentDefinition` object.
<2> The name of the NIC to be used for live migration.
-<3> The name of the CNI plug-in that provides the network for this network attachment definition.
+<3> The name of the CNI plugin that provides the network for this network attachment definition.
<4> The IP address range for the secondary network. This range must not have any overlap with the IP addresses of the main network.
. Open the `HyperConverged` CR in your default editor by running the following command:
diff --git a/modules/virt-creating-bridge-nad-cli.adoc b/modules/virt-creating-bridge-nad-cli.adoc
index 35b4a171ee..9f5ebcc781 100644
--- a/modules/virt-creating-bridge-nad-cli.adoc
+++ b/modules/virt-creating-bridge-nad-cli.adoc
@@ -39,7 +39,7 @@ spec:
<1> The name for the `NetworkAttachmentDefinition` object.
<2> Optional: Annotation key-value pair for node selection, where `bridge-interface` must match the name of a bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the `bridge-interface` bridge connected.
<3> The name for the configuration. It is recommended to match the configuration name to the `name` value of the network attachment definition.
-<4> The actual name of the Container Network Interface (CNI) plug-in that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
+<4> The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
<5> The name of the Linux bridge configured on the node.
<6> Optional: Flag to enable MAC spoof check. When set to `true`, you cannot change the MAC address of the pod or guest interface. This attribute provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
<7> Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
diff --git a/modules/virt-exposing-pci-device-in-cluster-cli.adoc b/modules/virt-exposing-pci-device-in-cluster-cli.adoc
index 9955384f64..4dfb7c6582 100644
--- a/modules/virt-exposing-pci-device-in-cluster-cli.adoc
+++ b/modules/virt-exposing-pci-device-in-cluster-cli.adoc
@@ -43,7 +43,7 @@ spec:
<2> The list of PCI devices available on the node.
<3> The `vendor-ID` and the `device-ID` required to identify the PCI device.
<4> The name of a PCI host device.
-<5> Optional: Setting this field to `true` indicates that the resource is provided by an external device plug-in. {VirtProductName} allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plug-in.
+<5> Optional: Setting this field to `true` indicates that the resource is provided by an external device plugin. {VirtProductName} allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
+
[NOTE]
====
diff --git a/modules/virt-measuring-latency-vm-secondary-network.adoc b/modules/virt-measuring-latency-vm-secondary-network.adoc
index 7c01f5b141..33deddc54d 100644
--- a/modules/virt-measuring-latency-vm-secondary-network.adoc
+++ b/modules/virt-measuring-latency-vm-secondary-network.adoc
@@ -16,7 +16,7 @@ If you have previously run a checkup, skip to step 5 of the procedure because th
* You installed the OpenShift CLI (`oc`).
* The cluster has at least two worker nodes.
-* The Multus Container Network Interface (CNI) plug-in is installed on the cluster.
+* The Multus Container Network Interface (CNI) plugin is installed on the cluster.
* You configured a network attachment definition for a namespace.
.Procedure
diff --git a/modules/virt-networking-glossary.adoc b/modules/virt-networking-glossary.adoc
index 18ea30201d..df8ae573b4 100644
--- a/modules/virt-networking-glossary.adoc
+++ b/modules/virt-networking-glossary.adoc
@@ -6,15 +6,15 @@
[id="virt-networking-glossary_{context}"]
= {VirtProductName} networking glossary
-{VirtProductName} provides advanced networking functionality by using custom resources and plug-ins.
+{VirtProductName} provides advanced networking functionality by using custom resources and plugins.
The following terms are used throughout {VirtProductName} documentation:
Container Network Interface (CNI):: a link:https://www.cncf.io/[Cloud Native Computing Foundation]
project, focused on container network connectivity.
-{VirtProductName} uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
+{VirtProductName} uses CNI plugins to build upon the basic Kubernetes networking functionality.
-Multus:: a "meta" CNI plug-in that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
+Multus:: a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Custom resource definition (CRD):: a link:https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[Kubernetes]
API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
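To make the glossary concrete, the following hedged sketch shows how a pod might request an additional Multus-managed network through an annotation; the network and image names are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  annotations:
    # Attach the additional network defined by a NetworkAttachmentDefinition
    # named "my-network" in the same namespace.
    k8s.v1.cni.cncf.io/networks: my-network
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
----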
diff --git a/modules/virt-pxe-booting-with-mac-address.adoc b/modules/virt-pxe-booting-with-mac-address.adoc
index ed485403b6..ae95daa3cb 100644
--- a/modules/virt-pxe-booting-with-mac-address.adoc
+++ b/modules/virt-pxe-booting-with-mac-address.adoc
@@ -44,7 +44,7 @@ spec:
}'
----
<1> Optional: The VLAN tag.
-<2> The `cnv-tuning` plug-in provides support for custom MAC addresses.
+<2> The `cnv-tuning` plugin provides support for custom MAC addresses.
+
[NOTE]
====
diff --git a/modules/windows-node-services.adoc b/modules/windows-node-services.adoc
index be419a3075..8631adf91e 100644
--- a/modules/windows-node-services.adoc
+++ b/modules/windows-node-services.adoc
@@ -16,7 +16,7 @@ The following Windows-specific services are installed on each Windows node:
|kubelet
|Registers the Windows node and manages its status.
-|Container Network Interface (CNI) plug-ins
+|Container Network Interface (CNI) plugins
|Exposes link:https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#networking[networking] for Windows nodes.
|Windows Instance Config Daemon (WICD)
diff --git a/modules/ztp-using-pgt-to-update-source-crs.adoc b/modules/ztp-using-pgt-to-update-source-crs.adoc
index 6c7a4d455d..824528aad3 100644
--- a/modules/ztp-using-pgt-to-update-source-crs.adoc
+++ b/modules/ztp-using-pgt-to-update-source-crs.adoc
@@ -6,7 +6,7 @@
[id="ztp-using-pgt-to-update-source-crs_{context}"]
= Using PolicyGenTemplate CRs to override source CRs content
-`PolicyGenTemplate` custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plug-in in the `ztp-site-generate` container. You can think of `PolicyGenTemplate` CRs as a logical merge or patch to the base CR. Use `PolicyGenTemplate` CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR.
+`PolicyGenTemplate` custom resources (CRs) allow you to overlay additional configuration details on top of the base source CRs provided with the GitOps plugin in the `ztp-site-generate` container. You can think of `PolicyGenTemplate` CRs as a logical merge or patch to the base CR. Use `PolicyGenTemplate` CRs to update a single field of the base CR, or overlay the entire contents of the base CR. You can update values and insert fields that are not in the base CR.
The following example procedure describes how to update fields in the generated `PerformanceProfile` CR for the reference configuration based on the `PolicyGenTemplate` CR in the `group-du-sno-ranGen.yaml` file. Use the procedure as a basis for modifying other parts of the `PolicyGenTemplate` based on your requirements.
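As a rough illustration of the overlay pattern, not the full procedure, a `PolicyGenTemplate` entry that overrides a single field of the base `PerformanceProfile` source CR might look like the following sketch; the names, binding rules, and CPU sets are illustrative:

[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "group-du-sno"
  namespace: "ztp-group"
spec:
  bindingRules:
    group-du-sno: ""
  sourceFiles:
    - fileName: PerformanceProfile.yaml
      policyName: "config-policy"
      spec:
        cpu:
          # Overlays only this field; all other content comes from the base CR.
          isolated: "2-19,22-39"
----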
diff --git a/networking/changing-cluster-network-mtu.adoc b/networking/changing-cluster-network-mtu.adoc
index fcedc72c74..bffb983101 100644
--- a/networking/changing-cluster-network-mtu.adoc
+++ b/networking/changing-cluster-network-mtu.adoc
@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
[role="_abstract"]
-As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters using the OVN-Kubernetes or OpenShift SDN network plug-ins.
+As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters using the OVN-Kubernetes or OpenShift SDN network plugins.
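For a sense of the workflow, the migration is declared by patching the Cluster Network Operator configuration before the nodes are updated and rebooted; a hedged sketch, with illustrative MTU values, might look like:

[source,terminal]
----
$ oc patch Network.operator.openshift.io cluster --type=merge \
  --patch '{"spec": {"migration": {"mtu": {"network": {"from": 1400, "to": 9000}, "machine": {"to": 9100}}}}}'
----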
include::modules/nw-cluster-mtu-change-about.adoc[leveloffset=+1]
include::modules/nw-cluster-mtu-change.adoc[leveloffset=+1]
diff --git a/networking/hardware_networks/about-sriov.adoc b/networking/hardware_networks/about-sriov.adoc
index 9e9eb8c47f..76ad4ea64e 100644
--- a/networking/hardware_networks/about-sriov.adoc
+++ b/networking/hardware_networks/about-sriov.adoc
@@ -41,7 +41,7 @@ It performs the following functions:
- Orchestrates discovery and management of SR-IOV network devices
- Generates `NetworkAttachmentDefinition` custom resources for the SR-IOV Container Network Interface (CNI)
-- Creates and updates the configuration of the SR-IOV network device plug-in
+- Creates and updates the configuration of the SR-IOV network device plugin
- Creates node-specific `SriovNetworkNodeState` custom resources
- Updates the `spec.interfaces` field in each `SriovNetworkNodeState` custom resource
@@ -56,14 +56,14 @@ A dynamic admission controller webhook that validates the Operator custom resour
SR-IOV Network resources injector::
A dynamic admission controller webhook that provides functionality for patching Kubernetes pod specifications with requests and limits for custom network resources such as SR-IOV VFs. The SR-IOV network resources injector automatically adds the `resource` field to only the first container in a pod.
-SR-IOV network device plug-in::
-A device plug-in that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. Device plug-ins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plug-ins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources.
+SR-IOV network device plugin::
+A device plugin that discovers, advertises, and allocates SR-IOV network virtual function (VF) resources. Device plugins are used in Kubernetes to enable the use of limited resources, typically in physical devices. Device plugins give the Kubernetes scheduler awareness of resource availability, so that the scheduler can schedule pods on nodes with sufficient resources.
-SR-IOV CNI plug-in::
-A CNI plug-in that attaches VF interfaces allocated from the SR-IOV network device plug-in directly into a pod.
+SR-IOV CNI plugin::
+A CNI plugin that attaches VF interfaces allocated from the SR-IOV network device plugin directly into a pod.
-SR-IOV InfiniBand CNI plug-in::
-A CNI plug-in that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV network device plug-in directly into a pod.
+SR-IOV InfiniBand CNI plugin::
+A CNI plugin that attaches InfiniBand (IB) VF interfaces allocated from the SR-IOV network device plugin directly into a pod.
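As a hedged sketch of how these components fit together, a pod attaches to an SR-IOV network through an annotation that references the generated network attachment definition, and the device plugin accounts for the allocated VF; the names are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: sample-sriov-pod
  annotations:
    # References the NetworkAttachmentDefinition generated for an SriovNetwork.
    k8s.v1.cni.cncf.io/networks: sriov-net-a
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
----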
[NOTE]
====
diff --git a/networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc b/networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc
index f4902135fb..526898cc6c 100644
--- a/networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc
+++ b/networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-As a cluster administrator, you can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plug-in for a pod connected to a SR-IOV network device.
+As a cluster administrator, you can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plugin for a pod connected to an SR-IOV network device.
include::modules/nw-label-nodes-with-sriov.adoc[leveloffset=+1]
diff --git a/networking/multiple_networks/configuring-additional-network.adoc b/networking/multiple_networks/configuring-additional-network.adoc
index a0e9f6f637..fc4389a3c8 100644
--- a/networking/multiple_networks/configuring-additional-network.adoc
+++ b/networking/multiple_networks/configuring-additional-network.adoc
@@ -16,13 +16,13 @@ As a cluster administrator, you can configure an additional network for your clu
[id="{context}_approaches-managing-additional-network"]
== Approaches to managing an additional network
-You can manage the life cycle of an additional network by two approaches. Each approach is mutually exclusive and you can only use one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plug-in that you configure.
+You can manage the life cycle of an additional network by using one of two approaches. The approaches are mutually exclusive, and you can use only one approach to manage an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure.
-For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plug-in that you configure as part of the additional network. The IPAM plug-in supports a variety of IP address assignment approaches including DHCP and static assignment.
+For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plugin that you configure as part of the additional network. The IPAM plugin supports a variety of IP address assignment approaches including DHCP and static assignment.
* Modify the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the `NetworkAttachmentDefinition` object (see the sketch after this list). In addition to managing the object lifecycle, the CNO ensures that a DHCP server is available for an additional network that uses a DHCP-assigned IP address.
-* Applying a YAML manifest: You can manage the additional network directly by creating an `NetworkAttachmentDefinition` object. This approach allows for the chaining of CNI plug-ins.
+* Applying a YAML manifest: You can manage the additional network directly by creating a `NetworkAttachmentDefinition` object. This approach allows for the chaining of CNI plugins.
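As a hedged illustration of the first approach, an additional network defined through the CNO configuration might look roughly like the following; the network name, namespace, and CNI configuration are illustrative:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: tertiary-net
    namespace: project2
    type: Raw
    rawCNIConfig: '{
      "cniVersion": "0.3.1",
      "name": "tertiary-net",
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "dhcp" }
    }'
----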
[id="{context}_configuration-additional-network-attachment"]
== Configuration for an additional network attachment
@@ -50,7 +50,7 @@ The configuration for the API is described in the following table:
|`spec.config`
|`string`
-|The CNI plug-in configuration in JSON format.
+|The CNI plugin configuration in JSON format.
|====
@@ -87,7 +87,7 @@ creating. The name must be unique within the specified `namespace`.
<3> The namespace to create the network attachment in. If
you do not specify a value, then the `default` namespace is used.
-<4> A CNI plug-in configuration in JSON format.
+<4> A CNI plugin configuration in JSON format.
[id="{context}_configuration-additional-network-yaml"]
=== Configuration of an additional network from a YAML manifest
@@ -108,7 +108,7 @@ spec:
----
<1> The name for the additional network attachment that you are
creating.
-<2> A CNI plug-in configuration in JSON format.
+<2> A CNI plugin configuration in JSON format.
[id="{context}_configuration-additional-network-types"]
== Configurations for additional network types
diff --git a/networking/multiple_networks/understanding-multiple-networks.adoc b/networking/multiple_networks/understanding-multiple-networks.adoc
index 7321d84193..acd964166b 100644
--- a/networking/multiple_networks/understanding-multiple-networks.adoc
+++ b/networking/multiple_networks/understanding-multiple-networks.adoc
@@ -6,13 +6,13 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-In Kubernetes, container networking is delegated to networking plug-ins that
+In Kubernetes, container networking is delegated to networking plugins that
implement the Container Network Interface (CNI).
-{product-title} uses the Multus CNI plug-in to allow chaining of CNI plug-ins.
+{product-title} uses the Multus CNI plugin to allow chaining of CNI plugins.
During cluster installation, you configure your _default_ pod network. The
default network handles all ordinary network traffic for the cluster. You can
-define an _additional network_ based on the available CNI plug-ins and attach
+define an _additional network_ based on the available CNI plugins and attach
one or more of these networks to your pods. You can define more than one
additional network for your cluster, depending on your needs. This gives you
flexibility when you configure pods that deliver network functionality, such as
@@ -43,7 +43,7 @@ To attach additional network interfaces to a pod, you must create configurations
[id="additional-networks-provided"]
== Additional networks in {product-title}
-{product-title} provides the following CNI plug-ins for creating additional
+{product-title} provides the following CNI plugins for creating additional
networks in your cluster:
* *bridge*: xref:../../networking/multiple_networks/configuring-additional-network.adoc#nw-multus-bridge-object_configuring-additional-network[Configure a bridge-based additional network]
diff --git a/networking/network_policy/multitenant-network-policy.adoc b/networking/network_policy/multitenant-network-policy.adoc
index 2a4684d3cb..28735af9cb 100644
--- a/networking/network_policy/multitenant-network-policy.adoc
+++ b/networking/network_policy/multitenant-network-policy.adoc
@@ -13,7 +13,7 @@ As a cluster administrator, you can configure your network policies to provide m
[NOTE]
====
-If you are using the OpenShift SDN network plug-in, configuring network policies as described in this section provides network isolation similar to multitenant mode but with network policy mode set.
+If you are using the OpenShift SDN network plugin, configuring network policies as described in this section provides network isolation that is similar to multitenant mode, but with the network policy mode set.
====
include::modules/nw-networkpolicy-multitenant-isolation.adoc[leveloffset=+1]
diff --git a/networking/networking-operators-overview.adoc b/networking/networking-operators-overview.adoc
index c71591bbea..c021584ef4 100644
--- a/networking/networking-operators-overview.adoc
+++ b/networking/networking-operators-overview.adoc
@@ -10,7 +10,7 @@ toc::[]
[id="networking-operators-overview-cluster-network-operator"]
== Cluster Network Operator
-The Cluster Network Operator (CNO) deploys and manages the cluster network components in an {product-title} cluster. This includes deployment of the Container Network Interface (CNI) network plug-in selected for the cluster during installation. For more information, see xref:../networking/cluster-network-operator.adoc#cluster-network-operator[Cluster Network Operator in {product-title}].
+The Cluster Network Operator (CNO) deploys and manages the cluster network components in an {product-title} cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. For more information, see xref:../networking/cluster-network-operator.adoc#cluster-network-operator[Cluster Network Operator in {product-title}].
[id="networking-operators-overview-dns-operator"]
== DNS Operator
diff --git a/networking/openshift_sdn/about-openshift-sdn.adoc b/networking/openshift_sdn/about-openshift-sdn.adoc
index 3b866fc48c..44ae67e75e 100644
--- a/networking/openshift_sdn/about-openshift-sdn.adoc
+++ b/networking/openshift_sdn/about-openshift-sdn.adoc
@@ -1,12 +1,12 @@
:_content-type: ASSEMBLY
[id="about-openshift-sdn"]
-= About the OpenShift SDN network plug-in
+= About the OpenShift SDN network plugin
include::_attributes/common-attributes.adoc[]
:context: about-openshift-sdn
toc::[]
-Part of {openshift-networking}, OpenShift SDN is a network plug-in that uses a
+Part of {openshift-networking}, OpenShift SDN is a network plugin that uses a
software-defined networking (SDN) approach to provide a unified cluster network
that enables communication between pods across the {product-title} cluster. This
pod network is established and maintained by OpenShift SDN, which configures
diff --git a/networking/openshift_sdn/assigning-egress-ips.adoc b/networking/openshift_sdn/assigning-egress-ips.adoc
index c04d28db0c..550c0d0de5 100644
--- a/networking/openshift_sdn/assigning-egress-ips.adoc
+++ b/networking/openshift_sdn/assigning-egress-ips.adoc
@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
[role="_abstract"]
-As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) network plug-in to assign one or more egress IP addresses to a project.
+As a cluster administrator, you can configure the OpenShift SDN Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a project.
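As a brief sketch of the manually assigned variant of this configuration, with illustrative project, node, and address values:

[source,terminal]
----
$ oc patch netnamespace project1 --type=merge \
  -p '{"egressIPs": ["192.168.1.100"]}'

$ oc patch hostsubnet node1 --type=merge \
  -p '{"egressIPs": ["192.168.1.100"]}'
----

The first patch ties the egress IP address to the project; the second allows the named node to host it.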
include::modules/nw-egress-ips-about.adoc[leveloffset=+1]
diff --git a/networking/openshift_sdn/multitenant-isolation.adoc b/networking/openshift_sdn/multitenant-isolation.adoc
index a16b176e96..5f25fad544 100644
--- a/networking/openshift_sdn/multitenant-isolation.adoc
+++ b/networking/openshift_sdn/multitenant-isolation.adoc
@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
When your cluster is configured to use the multitenant isolation mode for the
-OpenShift SDN network plug-in, each project is isolated by default. Network traffic
+OpenShift SDN network plugin, each project is isolated by default. Network traffic
is not allowed between pods or services in different projects in multitenant
isolation mode.
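As a hedged sketch, isolation between projects in this mode is typically adjusted with `oc adm pod-network` commands; the project names are illustrative:

[source,terminal]
----
# Join two projects so that their pods and services can communicate.
$ oc adm pod-network join-projects --to=project1 project2

# Re-isolate a project from the rest of the cluster.
$ oc adm pod-network isolate-projects project1
----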
diff --git a/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc b/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc
index 08accdb15e..0ced941f42 100644
--- a/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc
+++ b/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc
@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="about-ovn-kubernetes"]
-= About the OVN-Kubernetes network plug-in
+= About the OVN-Kubernetes network plugin
include::_attributes/common-attributes.adoc[]
:context: about-ovn-kubernetes
@@ -8,9 +8,9 @@ toc::[]
The {product-title} cluster uses a virtualized network for pod and service networks.
-Part of {openshift-networking}, the OVN-Kubernetes network plug-in is the default network provider for {product-title}.
+Part of {openshift-networking}, the OVN-Kubernetes network plugin is the default network provider for {product-title}.
OVN-Kubernetes is based on Open Virtual Network (OVN) and provides an overlay-based networking implementation.
-A cluster that uses the OVN-Kubernetes plug-in also runs Open vSwitch (OVS) on each node.
+A cluster that uses the OVN-Kubernetes plugin also runs Open vSwitch (OVS) on each node.
OVN configures OVS on each node to implement the declared network configuration.
[NOTE]
diff --git a/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc b/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc
index b21742f141..7a9beaf306 100644
--- a/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc
+++ b/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plug-in to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.
+As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) network plugin to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.
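For orientation, an `EgressIP` object of the kind this assembly configures might look like the following sketch; the address and label values are illustrative:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-sample
spec:
  egressIPs:
  - 192.168.126.10
  namespaceSelector:
    matchLabels:
      env: production
----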
include::modules/nw-egress-ips-about.adoc[leveloffset=+1]
diff --git a/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc b/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc
index 4aec12ecef..da163b12a3 100644
--- a/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc
+++ b/networking/ovn_kubernetes_network_provider/configuring-hybrid-networking.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-As a cluster administrator, you can configure the {openshift-networking} OVN-Kubernetes network plug-in to allow Linux and Windows nodes to host Linux and Windows workloads, respectively.
+As a cluster administrator, you can configure the {openshift-networking} OVN-Kubernetes network plugin to allow Linux and Windows nodes to host Linux and Windows workloads, respectively.
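A hedged sketch of the post-installation patch that enables hybrid networking follows; the CIDR and host prefix for the Windows (VXLAN) overlay are illustrative:

[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"hybridOverlayConfig":{"hybridClusterNetwork":[{"cidr":"10.132.0.0/14","hostPrefix":23}]}}}}}'
----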
include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1]
diff --git a/networking/ovn_kubernetes_network_provider/deploying-egress-router-ovn-redirection.adoc b/networking/ovn_kubernetes_network_provider/deploying-egress-router-ovn-redirection.adoc
index afe6b77713..c18c3c2d97 100644
--- a/networking/ovn_kubernetes_network_provider/deploying-egress-router-ovn-redirection.adoc
+++ b/networking/ovn_kubernetes_network_provider/deploying-egress-router-ovn-redirection.adoc
@@ -8,7 +8,7 @@ toc::[]
As a cluster administrator, you can deploy an egress router pod to redirect traffic to specified destination IP addresses from a reserved source IP address.
-The egress router implementation uses the egress router Container Network Interface (CNI) plug-in.
+The egress router implementation uses the egress router Container Network Interface (CNI) plugin.
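For reference, an `EgressRouter` custom resource in redirect mode might look roughly like the following sketch; all addresses and the rule values are illustrative:

[source,yaml]
----
apiVersion: network.operator.openshift.io/v1
kind: EgressRouter
metadata:
  name: egress-router-redirect
spec:
  networkInterface:
    macvlan:
      mode: Bridge
  addresses:
  - ip: 192.168.12.99/24
    gateway: 192.168.12.1
  mode: Redirect
  redirect:
    redirectRules:
    - destinationIP: 10.0.0.99
      port: 80
      protocol: TCP
----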
// Describe the CR and provide an example.
include::modules/nw-egress-router-cr.adoc[leveloffset=+1]
diff --git a/networking/ovn_kubernetes_network_provider/logging-network-policy.adoc b/networking/ovn_kubernetes_network_provider/logging-network-policy.adoc
index fff0732a5a..5403dfbeb0 100644
--- a/networking/ovn_kubernetes_network_provider/logging-network-policy.adoc
+++ b/networking/ovn_kubernetes_network_provider/logging-network-policy.adoc
@@ -10,7 +10,7 @@ As a cluster administrator, you can configure audit logging for your cluster and
[NOTE]
====
-Audit logging is available for only the xref:../../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes network plug-in].
+Audit logging is available for only the xref:../../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes network plugin].
====
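As a hedged example, audit logging is enabled for a namespace with an annotation such as the following; the severity values are illustrative:

[source,terminal]
----
$ oc annotate namespace project1 \
  k8s.ovn.org/acl-logging='{"deny": "alert", "allow": "notice"}'
----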
include::modules/nw-networkpolicy-audit-concept.adoc[leveloffset=+1]
diff --git a/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc
index 6606f31e08..f99f9c31f2 100644
--- a/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc
+++ b/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc
@@ -1,14 +1,14 @@
:_content-type: ASSEMBLY
[id="migrate-from-openshift-sdn"]
-= Migrating from the OpenShift SDN network plug-in
+= Migrating from the OpenShift SDN network plugin
include::_attributes/common-attributes.adoc[]
:context: migrate-from-openshift-sdn
toc::[]
-As a cluster administrator, you can migrate to the OVN-Kubernetes network plug-in from the OpenShift SDN network plug-in.
+As a cluster administrator, you can migrate to the OVN-Kubernetes network plugin from the OpenShift SDN network plugin.
-To learn more about OVN-Kubernetes, read xref:../../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes#about-ovn-kubernetes[About the OVN-Kubernetes network plug-in].
+To learn more about OVN-Kubernetes, read xref:../../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes#about-ovn-kubernetes[About the OVN-Kubernetes network plugin].
include::modules/nw-ovn-kubernetes-migration-about.adoc[leveloffset=+1]
diff --git a/networking/setting-interface-level-network-sysctls.adoc b/networking/setting-interface-level-network-sysctls.adoc
index effdd91ee7..79b4572075 100644
--- a/networking/setting-interface-level-network-sysctls.adoc
+++ b/networking/setting-interface-level-network-sysctls.adoc
@@ -6,15 +6,15 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-In Linux, sysctl allows an administrator to modify kernel parameters at runtime. You can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plug-in. The tuning CNI meta plug-in operates in a chain with a main CNI plug-in as illustrated.
+In Linux, sysctl allows an administrator to modify kernel parameters at runtime. You can modify interface-level network sysctls using the tuning Container Network Interface (CNI) meta plugin. The tuning CNI meta plugin operates in a chain with a main CNI plugin as illustrated.
-image::264_OpenShift_CNI_plugin_chain_0722.png[CNI plug-in]
+image::264_OpenShift_CNI_plugin_chain_0722.png[CNI plugin]
-The main CNI plug-in assigns the interface and passes this to the tuning CNI meta plug-in at runtime. You can change some sysctls and several interface attributes (promiscuous mode, all-multicast mode, MTU, and MAC address) in the network namespace by using the tuning CNI meta plug-in. In the tuning CNI meta plug-in configuration, the interface name is represented by the `IFNAME` token, and is replaced with the actual name of the interface at runtime.
+The main CNI plugin assigns the interface and passes it to the tuning CNI meta plugin at runtime. You can change some sysctls and several interface attributes (promiscuous mode, all-multicast mode, MTU, and MAC address) in the network namespace by using the tuning CNI meta plugin. In the tuning CNI meta plugin configuration, the interface name is represented by the `IFNAME` token, and is replaced with the actual name of the interface at runtime.
[NOTE]
====
-In {product-title}, the tuning CNI meta plug-in only supports changing interface-level network sysctls.
+In {product-title}, the tuning CNI meta plugin only supports changing interface-level network sysctls.
====
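To make the chaining concrete, a network attachment definition that runs the tuning meta plugin after a main bridge plugin might look like the following sketch; the object name and the specific sysctl are illustrative:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tuning-example
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "tuning-example",
    "plugins": [
      { "type": "bridge" },
      {
        "type": "tuning",
        "sysctl": { "net.ipv4.conf.IFNAME.accept_redirects": "1" }
      }
    ]
  }'
----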
include::modules/nw-cfg-tuning-interface-cni.adoc[leveloffset=+1]
diff --git a/nodes/containers/nodes-containers-downward-api.adoc b/nodes/containers/nodes-containers-downward-api.adoc
index 8acba15cbf..1e7b9adf77 100644
--- a/nodes/containers/nodes-containers-downward-api.adoc
+++ b/nodes/containers/nodes-containers-downward-api.adoc
@@ -14,7 +14,7 @@ The _Downward API_ is a mechanism that allows containers to consume information
about API objects without coupling to {product-title}.
Such information includes the pod's name, namespace, and resource values.
Containers can consume information from the downward API using environment
-variables or a volume plug-in.
+variables or a volume plugin.
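A minimal sketch of the environment-variable form follows; the pod and image names are illustrative, and the field paths are standard Kubernetes downward API selectors:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
----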
diff --git a/nodes/index.adoc b/nodes/index.adoc
index 9b81303571..6ab7e4fc78 100644
--- a/nodes/index.adoc
+++ b/nodes/index.adoc
@@ -102,9 +102,9 @@ You can work with pods more easily and efficiently with the help of various tool
As a developer, use a vertical pod autoscaler to ensure your pods stay up during periods of high demand by scheduling pods to nodes that have enough resources for each pod.
-|Provide access to external resources using device plug-ins.
+|Provide access to external resources using device plugins.
|Administrator
-|A xref:../nodes/pods/nodes-pods-plugins.adoc#nodes-pods-device[device plug-in] is a gRPC service running on nodes (external to the kubelet), which manages specific hardware resources. You can xref:../nodes/pods/nodes-pods-plugins.adoc#methods-for-deploying-a-device-plug-in[deploy a device plug-in] to provide a consistent and portable solution to consume hardware devices across clusters.
+|A xref:../nodes/pods/nodes-pods-plugins.adoc#nodes-pods-device[device plugin] is a gRPC service running on nodes (external to the kubelet), which manages specific hardware resources. You can xref:../nodes/pods/nodes-pods-plugins.adoc#methods-for-deploying-a-device-plugin_nodes-pods-device[deploy a device plugin] to provide a consistent and portable solution to consume hardware devices across clusters.
|Provide sensitive data to pods xref:../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[using the `Secret` object].
|Administrator
diff --git a/nodes/nodes/nodes-node-tuning-operator.adoc b/nodes/nodes/nodes-node-tuning-operator.adoc
index 6b4ea5aa61..e46a20f874 100644
--- a/nodes/nodes/nodes-node-tuning-operator.adoc
+++ b/nodes/nodes/nodes-node-tuning-operator.adoc
@@ -18,9 +18,3 @@ include::modules/custom-tuning-specification.adoc[leveloffset=+1]
include::modules/cluster-node-tuning-operator-default-profiles-set.adoc[leveloffset=+1]
include::modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc[leveloffset=+1]
-
-
-
-
-
-
diff --git a/nodes/pods/nodes-pods-plugins.adoc b/nodes/pods/nodes-pods-plugins.adoc
index 46feba486c..4c248b1775 100644
--- a/nodes/pods/nodes-pods-plugins.adoc
+++ b/nodes/pods/nodes-pods-plugins.adoc
@@ -1,13 +1,13 @@
:_content-type: ASSEMBLY
:context: nodes-pods-device
[id="nodes-pods-device"]
-= Using device plug-ins to access external resources with pods
+= Using device plugins to access external resources with pods
include::_attributes/common-attributes.adoc[]
toc::[]
-Device plug-ins allow you to use a particular device type (GPU, InfiniBand,
+Device plugins allow you to use a particular device type (GPU, InfiniBand,
or other similar computing resources that require vendor-specific initialization
and setup) in your {product-title} pod without needing to write custom code.
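For instance, once a device plugin advertises an extended resource, a pod consumes it through an ordinary resource request; a hedged sketch with an illustrative resource name:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-app
    image: registry.example.com/cuda-app:latest
    resources:
      limits:
        # Extended resource advertised by a device plugin on the node.
        nvidia.com/gpu: 1
----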
diff --git a/operators/admin/olm-managing-custom-catalogs.adoc b/operators/admin/olm-managing-custom-catalogs.adoc
index 776553d216..724f2a6f9c 100644
--- a/operators/admin/olm-managing-custom-catalogs.adoc
+++ b/operators/admin/olm-managing-custom-catalogs.adoc
@@ -36,7 +36,7 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas
The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
-Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format] and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in].
+Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format] and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin].
====
include::modules/olm-creating-fb-catalog-image.adoc[leveloffset=+2]
diff --git a/operators/admin/olm-restricted-networks.adoc b/operators/admin/olm-restricted-networks.adoc
index e5e0912be5..94bb2f6f77 100644
--- a/operators/admin/olm-restricted-networks.adoc
+++ b/operators/admin/olm-restricted-networks.adoc
@@ -63,7 +63,7 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas
The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
-Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format], xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs], and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in].
+Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format], xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs], and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin].
====
include::modules/olm-creating-catalog-from-index.adoc[leveloffset=+1]
diff --git a/operators/operator_sdk/osdk-generating-csvs.adoc b/operators/operator_sdk/osdk-generating-csvs.adoc
index d33bfea7fb..9a81922072 100644
--- a/operators/operator_sdk/osdk-generating-csvs.adoc
+++ b/operators/operator_sdk/osdk-generating-csvs.adoc
@@ -65,7 +65,7 @@ include::modules/olm-defining-csv-webhooks.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
-* xref:../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plug-ins]
+* xref:../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plugins]
* Kubernetes documentation:
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook[Validating admission webhooks]
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook[Mutating admission webhooks]
diff --git a/operators/understanding/olm-packaging-format.adoc b/operators/understanding/olm-packaging-format.adoc
index ac754a7f6a..6aba783f9e 100644
--- a/operators/understanding/olm-packaging-format.adoc
+++ b/operators/understanding/olm-packaging-format.adoc
@@ -39,7 +39,7 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas
The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
-Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs] and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in].
+Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs-fb[Managing custom catalogs] and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin].
====
include::modules/olm-fb-catalogs-structure.adoc[leveloffset=+2]
diff --git a/operators/understanding/olm-rh-catalogs.adoc b/operators/understanding/olm-rh-catalogs.adoc
index 3e139de016..335856d202 100644
--- a/operators/understanding/olm-rh-catalogs.adoc
+++ b/operators/understanding/olm-rh-catalogs.adoc
@@ -15,7 +15,7 @@ As of {product-title} 4.11, the default Red Hat-provided Operator catalog releas
The `opm` subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format.
Many of the `opm` subcommands and flags for working with the SQLite database format, such as `opm index prune`, do not work with the file-based catalog format. For more information about working with file-based catalogs, see xref:../../operators/admin/olm-managing-custom-catalogs.adoc#olm-managing-custom-catalogs[Managing custom catalogs],
-xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format], and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in].
+xref:../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[Operator Framework packaging format], and xref:../../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin].
====
include::modules/olm-about-catalogs.adoc[leveloffset=+1]
diff --git a/operators/understanding/olm/olm-webhooks.adoc b/operators/understanding/olm/olm-webhooks.adoc
index 7c238485ae..4964be25af 100644
--- a/operators/understanding/olm/olm-webhooks.adoc
+++ b/operators/understanding/olm/olm-webhooks.adoc
@@ -14,7 +14,7 @@ See xref:../../../operators/operator_sdk/osdk-generating-csvs.adoc#olm-defining-
[role="_additional-resources"]
== Additional resources
-* xref:../../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plug-ins]
+* xref:../../../architecture/admission-plug-ins.adoc#admission-webhook-types_admission-plug-ins[Types of webhook admission plugins]
* Kubernetes documentation:
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook[Validating admission webhooks]
** link:https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook[Mutating admission webhooks]
diff --git a/post_installation_configuration/network-configuration.adoc b/post_installation_configuration/network-configuration.adoc
index 1ee0fc6b8f..c95a22182d 100644
--- a/post_installation_configuration/network-configuration.adoc
+++ b/post_installation_configuration/network-configuration.adoc
@@ -61,14 +61,14 @@ include::modules/nw-nodeport-service-range-edit.adoc[leveloffset=+3]
[id="post-install-configuring-ipsec-ovn"]
== Configuring IPsec encryption
-With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes network plug-in travels through an encrypted tunnel.
+With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes network plugin travels through an encrypted tunnel.
IPsec is disabled by default.
[id="post-install-configuring-ipsec-ovn-prerequisites"]
=== Prerequisites
-- Your cluster must use the OVN-Kubernetes network plug-in.
+- Your cluster must use the OVN-Kubernetes network plugin.
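As a rough sketch of the enablement step that the following module documents, IPsec is turned on by patching the cluster network configuration:

[source,terminal]
----
$ oc patch networks.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{}}}}}'
----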
include::modules/nw-ovn-ipsec-enable.adoc[leveloffset=+3]
include::modules/nw-ovn-ipsec-verification.adoc[leveloffset=+3]
diff --git a/scalability_and_performance/optimizing-networking.adoc b/scalability_and_performance/optimizing-networking.adoc
index 764693ae93..adef7af7d9 100644
--- a/scalability_and_performance/optimizing-networking.adoc
+++ b/scalability_and_performance/optimizing-networking.adoc
@@ -20,7 +20,7 @@ Cloud, VM, and bare metal CPU performance can be capable of handling much more t
If you are looking to push beyond one Gbps, you can:
-* Evaluate network plug-ins that implement different routing techniques, such as border gateway protocol (BGP).
+* Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP).
* Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure.
VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests.
diff --git a/scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc b/scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc
index 8729c1d3a3..ddeaf78515 100644
--- a/scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc
+++ b/scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-You can provision {product-title} clusters at scale with {rh-rhacm-first} using the assisted service and the GitOps plug-in policy generator with core-reduction technology enabled. The zero touch priovisioning (ZTP) pipeline performs the cluster installations. ZTP can be used in a disconnected environment.
+You can provision {product-title} clusters at scale with {rh-rhacm-first} using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The zero touch provisioning (ZTP) pipeline performs the cluster installations. ZTP can be used in a disconnected environment.
include::modules/ztp-talo-integration.adoc[leveloffset=+1]
diff --git a/security/compliance_operator/compliance-operator-release-notes.adoc b/security/compliance_operator/compliance-operator-release-notes.adoc
index c1a33a4360..b35e631f37 100644
--- a/security/compliance_operator/compliance-operator-release-notes.adoc
+++ b/security/compliance_operator/compliance-operator-release-notes.adoc
@@ -193,7 +193,7 @@ The following advisory is available for the OpenShift Compliance Operator 0.1.49
* Previously, the `ocp4-cluster-version-operator-verify-integrity` rule always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the upgrade would fail in situations where subsequent versions of {product-name} would be verified. Now, the compliance check result for `ocp4-cluster-version-operator-verify-integrity` is able to detect verified versions and is accurate with the CVO history. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2053602[*BZ#2053602*])
-* Previously, the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule did not check for a list of empty admission controller plug-ins. As a result, the rule would always fail, even if all admission plug-ins were enabled. Now, more robust checking of the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule accurately passes with all admission controller plug-ins enabled. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2058631[*BZ#2058631*])
+* Previously, the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule did not check for a list of empty admission controller plugins. As a result, the rule would always fail, even if all admission plugins were enabled. Now, more robust checking of the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule accurately passes with all admission controller plugins enabled. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2058631[*BZ#2058631*])
* Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, scans schedule appropriately based on platform type and labels, and they complete successfully. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2056911[*BZ#2056911*])
diff --git a/security/compliance_operator/oc-compliance-plug-in-using.adoc b/security/compliance_operator/oc-compliance-plug-in-using.adoc
index 67cc9e7c0e..11a16b45f7 100644
--- a/security/compliance_operator/oc-compliance-plug-in-using.adoc
+++ b/security/compliance_operator/oc-compliance-plug-in-using.adoc
@@ -1,12 +1,12 @@
:_content-type: ASSEMBLY
[id="using-oc-compliance-plug-in"]
-= Using the oc-compliance plug-in
+= Using the oc-compliance plugin
include::_attributes/common-attributes.adoc[]
:context: oc-compliance-plug-in-understanding
toc::[]
-Although the xref:../../security/compliance_operator/compliance-operator-understanding.adoc#understanding-compliance-operator[Compliance Operator] automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The `oc-compliance` plug-in makes the process easier.
+Although the xref:../../security/compliance_operator/compliance-operator-understanding.adoc#understanding-compliance-operator[Compliance Operator] automates many of the checks and remediations for the cluster, the full process of bringing a cluster into compliance often requires administrator interaction with the Compliance Operator API and other components. The `oc-compliance` plugin makes the process easier.
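For example, tasks that would otherwise require several raw API interactions become single commands; a hedged sketch, with an illustrative binding name and output path:

[source,terminal]
----
# Fetch the raw compliance results for a scan setting binding.
$ oc compliance fetch-raw scansettingbindings my-binding -o /tmp/results

# Re-run the scans associated with the binding immediately.
$ oc compliance rerun-now scansettingbindings my-binding
----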
include::modules/oc-compliance-installing.adoc[leveloffset=+1]
diff --git a/security/container_security/security-platform.adoc b/security/container_security/security-platform.adoc
index ff2eba17c2..3fac901155 100644
--- a/security/container_security/security-platform.adoc
+++ b/security/container_security/security-platform.adoc
@@ -16,7 +16,7 @@ Security-related features in {product-title} that are based on Kubernetes includ
* Multitenancy, which combines Role-Based Access Controls and network policies
to isolate containers at multiple levels.
-* Admission plug-ins, which form boundaries between an API and those
+* Admission plugins, which form boundaries between an API and those
making requests to the API.
{product-title} uses Operators to automate and simplify the management of
@@ -25,7 +25,7 @@ Kubernetes-level security features.
// Multitenancy
include::modules/security-platform-multi-tenancy.adoc[leveloffset=+1]
-// Admission plug-ins
+// Admission plugins
include::modules/security-platform-admission.adoc[leveloffset=+1]
// Authentication and authorization
@@ -40,7 +40,7 @@ include::modules/security-platform-certificates.adoc[leveloffset=+1]
* xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]
ifndef::openshift-origin[]
-* xref:../../architecture/admission-plug-ins.adoc#admission-plug-ins[About admission plug-ins]
+* xref:../../architecture/admission-plug-ins.adoc#admission-plug-ins[About admission plugins]
endif::[]
* xref:../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing security context constraints]
diff --git a/security/container_security/security-storage.adoc b/security/container_security/security-storage.adoc
index e5931d848e..f7c67dcafb 100644
--- a/security/container_security/security-storage.adoc
+++ b/security/container_security/security-storage.adoc
@@ -11,7 +11,7 @@ for on-premise and cloud providers. In particular,
{product-title} can use storage types that support the Container
Storage Interface.
-// Persistent volume plug-ins
+// Persistent volume plugins
include::modules/security-storage-persistent.adoc[leveloffset=+1]
// Shared storage
diff --git a/security/index.adoc b/security/index.adoc
index 89063fe853..a83be283cf 100644
--- a/security/index.adoc
+++ b/security/index.adoc
@@ -78,7 +78,7 @@ For many {product-title} customers, regulatory readiness, or compliance, on some
[id="compliance-checking"]
=== Compliance checking
-Administrators can use the xref:../security/compliance_operator/compliance-operator-understanding.adoc#understanding-compliance-operator[Compliance Operator] to run compliance scans and recommend remediations for any issues found. The xref:../security/compliance_operator/oc-compliance-plug-in-using.adoc#using-oc-compliance-plug-in[`oc-compliance` plug-in] is an OpenShift CLI (`oc`) plug-in that provides a set of utilities to easily interact with the Compliance Operator.
+Administrators can use the xref:../security/compliance_operator/compliance-operator-understanding.adoc#understanding-compliance-operator[Compliance Operator] to run compliance scans and recommend remediations for any issues found. The xref:../security/compliance_operator/oc-compliance-plug-in-using.adoc#using-oc-compliance-plug-in[`oc-compliance` plugin] is an OpenShift CLI (`oc`) plugin that provides a set of utilities to easily interact with the Compliance Operator.
[discrete]
[id="file-integrity-checking"]
diff --git a/serverless/cli_tools/advanced-kn-config.adoc b/serverless/cli_tools/advanced-kn-config.adoc
index 833f52670f..25ba591bd1 100644
--- a/serverless/cli_tools/advanced-kn-config.adoc
+++ b/serverless/cli_tools/advanced-kn-config.adoc
@@ -29,8 +29,8 @@ eventing:
version: v1 <6>
resource: services <7>
----
-<1> Specifies whether the Knative (`kn`) CLI should look for plug-ins in the `PATH` environment variable. This is a boolean configuration option. The default value is `false`.
-<2> Specifies the directory where the Knative (`kn`) CLI looks for plug-ins. The default path depends on the operating system, as described previously. This can be any directory that is visible to the user.
+<1> Specifies whether the Knative (`kn`) CLI should look for plugins in the `PATH` environment variable. This is a boolean configuration option. The default value is `false`.
+<2> Specifies the directory where the Knative (`kn`) CLI looks for plugins. The default path depends on the operating system, as described previously. This can be any directory that is visible to the user.
<3> The `sink-mappings` spec defines the Kubernetes addressable resource that is used when you use the `--sink` flag with a Knative (`kn`) CLI command.
<4> The prefix you want to use to describe your sink. `svc` for a service, `channel`, and `broker` are predefined prefixes for the Knative (`kn`) CLI.
<5> The API group of the Kubernetes resource.
diff --git a/serverless/cli_tools/kn-plugins.adoc b/serverless/cli_tools/kn-plugins.adoc
index f7f774a3fd..56081aa95c 100644
--- a/serverless/cli_tools/kn-plugins.adoc
+++ b/serverless/cli_tools/kn-plugins.adoc
@@ -1,16 +1,16 @@
:_content-type: ASSEMBLY
[id="kn-plugins"]
-= Knative CLI plug-ins
+= Knative CLI plugins
include::_attributes/common-attributes.adoc[]
:context: kn-plugins
toc::[]
-The Knative (`kn`) CLI supports the use of plug-ins, which enable you to extend the functionality of your `kn` installation by adding custom commands and other shared commands that are not part of the core distribution. Knative (`kn`) CLI plug-ins are used in the same way as the main `kn` functionality.
+The Knative (`kn`) CLI supports the use of plugins, which enable you to extend the functionality of your `kn` installation by adding custom commands and other shared commands that are not part of the core distribution. Knative (`kn`) CLI plugins are used in the same way as the main `kn` functionality.
-Currently, Red Hat supports the `kn-source-kafka` plug-in and the `kn-event` plug-in.
+Currently, Red Hat supports the `kn-source-kafka` plugin and the `kn-event` plugin.
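Plugins follow the common CLI-plugin convention: an executable named `kn-<plugin>` placed in the plugins directory (or, if enabled, anywhere on `PATH`) becomes a `kn` subcommand. A hedged sketch with an illustrative plugin name and a typical Linux plugins directory:

[source,terminal]
----
$ chmod +x kn-hello
$ mv kn-hello ~/.config/kn/plugins/

$ kn hello
----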
-:FeatureName: The `kn-event` plug-in
+:FeatureName: The `kn-event` plugin
include::snippets/technology-preview.adoc[leveloffset=+1]
// kn event commands
include::modules/serverless-build-events-kn.adoc[leveloffset=+1]
diff --git a/serverless/discover/serverless-functions-about.adoc b/serverless/discover/serverless-functions-about.adoc
index 94efd77bea..62997b2855 100644
--- a/serverless/discover/serverless-functions-about.adoc
+++ b/serverless/discover/serverless-functions-about.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-{FunctionsProductName} enables developers to create and deploy stateless, event-driven functions as a Knative service on {product-title}. The `kn func` CLI is provided as a plug-in for the Knative `kn` CLI. You can use the `kn func` CLI to create, build, and deploy the container image as a Knative service on the cluster.
+{FunctionsProductName} enables developers to create and deploy stateless, event-driven functions as a Knative service on {product-title}. The `kn func` CLI is provided as a plugin for the Knative `kn` CLI. You can use the `kn func` CLI to create, build, and deploy the container image as a Knative service on the cluster.
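A typical flow might look like the following hedged sketch, assuming a container registry is already configured; the runtime and function name are illustrative:

[source,terminal]
----
# Scaffold a new Node.js function project.
$ kn func create -l node hello-func

# Build the container image and deploy it as a Knative service.
$ cd hello-func
$ kn func deploy
----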
[id="serverless-functions-about-runtimes"]
== Included runtimes
diff --git a/serverless/functions/serverless-functions-getting-started.adoc b/serverless/functions/serverless-functions-getting-started.adoc
index 3f2cc6579a..ff702f2fea 100644
--- a/serverless/functions/serverless-functions-getting-started.adoc
+++ b/serverless/functions/serverless-functions-getting-started.adoc
@@ -25,8 +25,8 @@ ifdef::openshift-enterprise[]
[role="_additional-resources"]
== Additional resources
* xref:../../registry/securing-exposing-registry.adoc#securing-exposing-registry[Exposing a default registry manually]
-* link:https://plugins.jetbrains.com/plugin/16476-knative\--serverless-functions-by-red-hat[Marketplace page for the Intellij Knative plug-in]
-* link:https://marketplace.visualstudio.com/items?itemName=redhat.vscode-knative&utm_source=VSCode.pro&utm_campaign=AhmadAwais[Marketplace page for the Visual Studio Code Knative plug-in]
+* link:https://plugins.jetbrains.com/plugin/16476-knative\--serverless-functions-by-red-hat[Marketplace page for the IntelliJ Knative plugin]
+* link:https://marketplace.visualstudio.com/items?itemName=redhat.vscode-knative&utm_source=VSCode.pro&utm_campaign=AhmadAwais[Marketplace page for the Visual Studio Code Knative plugin]
// This Additional resource applies only to OCP, but not to OSD nor ROSA.
endif::[]
diff --git a/storage/container_storage_interface/persistent-storage-csi-azure.adoc b/storage/container_storage_interface/persistent-storage-csi-azure.adoc
index 16146116af..689e37a9e0 100644
--- a/storage/container_storage_interface/persistent-storage-csi-azure.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-azure.adoc
@@ -22,11 +22,11 @@ include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision Azure Disk storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use any existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in later versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in later versions of {product-title}.
====
//Machine sets that deploy machines on ultra disks using PVCs
diff --git a/storage/container_storage_interface/persistent-storage-csi-cinder.adoc b/storage/container_storage_interface/persistent-storage-csi-cinder.adoc
index 062ae600ac..e634c30d21 100644
--- a/storage/container_storage_interface/persistent-storage-csi-cinder.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-cinder.adoc
@@ -19,17 +19,17 @@ To create CSI-provisioned PVs that mount to OpenStack Cinder storage assets, {pr
* The _OpenStack Cinder CSI driver_ enables you to create and mount OpenStack Cinder PVs.
For {product-title}, automatic migration from OpenStack Cinder in-tree to the CSI driver is available as a Technology Preview (TP) feature.
-With migration enabled, volumes provisioned using the existing in-tree plug-in are automatically migrated to use the OpenStack Cinder CSI driver. For more information, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration feature].
+With migration enabled, volumes provisioned using the existing in-tree plugin are automatically migrated to use the OpenStack Cinder CSI driver. For more information, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration feature].
include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision Cinder storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision Cinder storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
include::modules/persistent-storage-csi-cinder-storage-class.adoc[leveloffset=+1]
diff --git a/storage/container_storage_interface/persistent-storage-csi-ebs.adoc b/storage/container_storage_interface/persistent-storage-csi-ebs.adoc
index a66ec00b36..e6ee17542f 100644
--- a/storage/container_storage_interface/persistent-storage-csi-ebs.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-ebs.adoc
@@ -27,11 +27,11 @@ include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision AWS EBS storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
For information about dynamically provisioning AWS EBS persistent volumes in {product-title}, see xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent storage using AWS Elastic Block Store].
diff --git a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
index 77419de9f3..910a2febf8 100644
--- a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
@@ -22,11 +22,11 @@ To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision GCP PD storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision GCP PD storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
diff --git a/storage/container_storage_interface/persistent-storage-csi-migration.adoc b/storage/container_storage_interface/persistent-storage-csi-migration.adoc
index 19674f1e2a..ceed23774a 100644
--- a/storage/container_storage_interface/persistent-storage-csi-migration.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-migration.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-In-tree storage drivers that are traditionally shipped with {product-title} are being deprecated and replaced by their equivalent Container Storage Interface (CSI) drivers. {product-title} provides automatic migration for certain supported in-tree volume plug-ins to their equivalent CSI drivers.
+In-tree storage drivers that are traditionally shipped with {product-title} are being deprecated and replaced by their equivalent Container Storage Interface (CSI) drivers. {product-title} provides automatic migration for certain supported in-tree volume plugins to their equivalent CSI drivers.
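+
+The practical difference is visible in the `provisioner` field of a storage class. The following minimal sketch assumes the AWS EBS drivers: the in-tree plugin is selected by the `kubernetes.io/aws-ebs` provisioner name, and its CSI equivalent by `ebs.csi.aws.com`. The class name and parameters are illustrative only.
+
+[source,yaml]
+----
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: csi-ebs-example # illustrative name
+provisioner: ebs.csi.aws.com # in-tree equivalent: kubernetes.io/aws-ebs
+parameters:
+  type: gp3 # EBS volume type
+volumeBindingMode: WaitForFirstConsumer
+----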
include::modules/persistent-storage-csi-migration-overview.adoc[leveloffset=+1]
diff --git a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
index 8cb9c5b9e2..d858da93a5 100644
--- a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
@@ -24,11 +24,11 @@ To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision vSphere storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision vSphere storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
[NOTE]
diff --git a/storage/persistent_storage/persistent-storage-aws.adoc b/storage/persistent_storage/persistent-storage-aws.adoc
index e9e333f4ef..6b375529d6 100644
--- a/storage/persistent_storage/persistent-storage-aws.adoc
+++ b/storage/persistent_storage/persistent-storage-aws.adoc
@@ -21,11 +21,11 @@ requested by users. You can define a KMS key to encrypt container-persistent vol
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision AWS EBS storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision AWS EBS storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
[IMPORTANT]
@@ -58,4 +58,4 @@ include::modules/storage-persistent-storage-volume-encrypt-with-kms-key.adoc[lev
[role="_additional-resources"]
== Additional resources
-* See xref:../../storage/container_storage_interface/persistent-storage-csi-ebs.adoc#persistent-storage-csi-ebs[AWS Elastic Block Store CSI Driver Operator] for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins.
+* See xref:../../storage/container_storage_interface/persistent-storage-csi-ebs.adoc#persistent-storage-csi-ebs[AWS Elastic Block Store CSI Driver Operator] for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
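+
+As a companion to the KMS encryption mentioned above, the following is a minimal sketch of a storage class that encrypts dynamically provisioned EBS volumes with a customer-managed KMS key. The class name and key ARN are placeholders; substitute your own.
+
+[source,yaml]
+----
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: encrypted-ebs # illustrative name
+provisioner: kubernetes.io/aws-ebs # the in-tree provisioner described above
+parameters:
+  type: gp2
+  encrypted: "true" # encrypt volumes at rest
+  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID # placeholder ARN
+----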
diff --git a/storage/persistent_storage/persistent-storage-azure-file.adoc b/storage/persistent_storage/persistent-storage-azure-file.adoc
index 59458caff2..40b1006ae0 100644
--- a/storage/persistent_storage/persistent-storage-azure-file.adoc
+++ b/storage/persistent_storage/persistent-storage-azure-file.adoc
@@ -30,9 +30,9 @@ Azure File volumes use Server Message Block.
[IMPORTANT]
====
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
.Additional resources
diff --git a/storage/persistent_storage/persistent-storage-azure.adoc b/storage/persistent_storage/persistent-storage-azure.adoc
index 4f61b59a48..7f33407753 100644
--- a/storage/persistent_storage/persistent-storage-azure.adoc
+++ b/storage/persistent_storage/persistent-storage-azure.adoc
@@ -20,11 +20,11 @@ requested by users.
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision Azure Disk storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision Azure Disk storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
[IMPORTANT]
diff --git a/storage/persistent_storage/persistent-storage-cinder.adoc b/storage/persistent_storage/persistent-storage-cinder.adoc
index 0598705dfd..c06a706314 100644
--- a/storage/persistent_storage/persistent-storage-cinder.adoc
+++ b/storage/persistent_storage/persistent-storage-cinder.adoc
@@ -16,11 +16,11 @@ requested by users.
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision Cinder storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision Cinder storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
[role="_additional-resources"]
diff --git a/storage/persistent_storage/persistent-storage-flexvolume.adoc b/storage/persistent_storage/persistent-storage-flexvolume.adoc
index eaeae33544..f4743f395e 100644
--- a/storage/persistent_storage/persistent-storage-flexvolume.adoc
+++ b/storage/persistent_storage/persistent-storage-flexvolume.adoc
@@ -15,9 +15,9 @@ Out-of-tree Container Storage Interface (CSI) driver is the recommended way to w
For the most recent list of major functionality that has been deprecated or removed within {product-title}, refer to the _Deprecated and removed features_ section of the {product-title} release notes.
====
-{product-title} supports FlexVolume, an out-of-tree plug-in that uses an executable model to interface with drivers.
+{product-title} supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers.
-To use storage from a back-end that does not have a built-in plug-in, you can extend {product-title} through FlexVolume drivers and provide persistent storage to applications.
+To use storage from a back-end that does not have a built-in plugin, you can extend {product-title} through FlexVolume drivers and provide persistent storage to applications.
Pods interact with FlexVolume drivers through the `flexvolume` in-tree plugin.
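+
+As an illustration of the executable driver model, the following is a minimal sketch of a persistent volume that references a hypothetical FlexVolume driver named `example.com/foo`. The driver name, option, and sizes are placeholders.
+
+[source,yaml]
+----
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-flex-example # illustrative name
+spec:
+  capacity:
+    storage: 1Gi
+  accessModes:
+  - ReadWriteOnce
+  flexVolume:
+    driver: example.com/foo # vendor/driver pair, resolved to an executable on the node
+    fsType: ext4
+    options:
+      fooServer: 192.168.0.25:7788 # driver-specific option, hypothetical
+----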
diff --git a/storage/persistent_storage/persistent-storage-gce.adoc b/storage/persistent_storage/persistent-storage-gce.adoc
index 7d6c21af23..170fc1f685 100644
--- a/storage/persistent_storage/persistent-storage-gce.adoc
+++ b/storage/persistent_storage/persistent-storage-gce.adoc
@@ -25,11 +25,11 @@ requested by users.
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision gcePD storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision gcePD storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
[IMPORTANT]
diff --git a/storage/persistent_storage/persistent-storage-vsphere.adoc b/storage/persistent_storage/persistent-storage-vsphere.adoc
index e8ca0682d8..29b0e4473f 100644
--- a/storage/persistent_storage/persistent-storage-vsphere.adoc
+++ b/storage/persistent_storage/persistent-storage-vsphere.adoc
@@ -26,11 +26,11 @@ requested by users.
[IMPORTANT]
====
-{product-title} defaults to using an in-tree (non-CSI) plug-in to provision vSphere storage.
+{product-title} defaults to using an in-tree (non-CSI) plugin to provision vSphere storage.
-In future {product-title} versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
+In future {product-title} versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see xref:../../storage/container_storage_interface/persistent-storage-csi-migration.adoc#persistent-storage-csi-migration[CSI automatic migration].
-After full migration, in-tree plug-ins will eventually be removed in future versions of {product-title}.
+After full migration, in-tree plugins will eventually be removed in future versions of {product-title}.
====
[role="_additional-resources"]
diff --git a/storage/persistent_storage/rosa-persistent-storage-aws-ebs.adoc b/storage/persistent_storage/rosa-persistent-storage-aws-ebs.adoc
index a32c874a91..3373f2c0db 100644
--- a/storage/persistent_storage/rosa-persistent-storage-aws-ebs.adoc
+++ b/storage/persistent_storage/rosa-persistent-storage-aws-ebs.adoc
@@ -26,7 +26,7 @@ The Kubernetes persistent volume framework enables administrators to provision a
[IMPORTANT]
====
-* ROSA defaults to using an in-tree, or non-Container Storage Interface (CSI), plug-in to provision AWS EBS storage. In future ROSA versions, volumes provisioned using existing in-tree plug-ins are planned for migration to their equivalent CSI driver. After full migration, the in-tree plug-ins are planned to be removed from the future versions of ROSA.
+* ROSA defaults to using an in-tree, or non-Container Storage Interface (CSI), plugin to provision AWS EBS storage. In future ROSA versions, volumes provisioned using existing in-tree plugins are planned for migration to their equivalent CSI driver. After full migration, the in-tree plugins are planned to be removed in future versions of ROSA.
* High-availability of storage in the infrastructure is left to the underlying storage provider.
====
@@ -44,6 +44,6 @@ By default, a ROSA cluster supports a maximum of 39 EBS volumes attached to one
You must use either in-tree or CSI volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, so you could have up to 39 EBS volumes of each type.
====
-For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see link:https://docs.openshift.com/container-platform/4.9/storage/container_storage_interface/persistent-storage-csi-ebs.html#persistent-storage-csi-ebs[Elastic Block Store CSI Driver Operator].
+For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins, see link:https://docs.openshift.com/container-platform/4.9/storage/container_storage_interface/persistent-storage-csi-ebs.html#persistent-storage-csi-ebs[Elastic Block Store CSI Driver Operator].
include::modules/rosa-howto-create-persistent-storage-aws-ebs.adoc[leveloffset=+1]
diff --git a/storage/understanding-persistent-storage.adoc b/storage/understanding-persistent-storage.adoc
index 17008fbe80..a10c61aea6 100644
--- a/storage/understanding-persistent-storage.adoc
+++ b/storage/understanding-persistent-storage.adoc
@@ -25,9 +25,9 @@ include::modules/storage-persistent-storage-pvc.adoc[leveloffset=+1]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
include::modules/storage-persistent-storage-block-volume.adoc[leveloffset=+1]
-// As these volumes have transitioned to being tech preview per plug-in,
+// As these volumes have transitioned to being tech preview per plugin,
// this notice has been included in the previous module.
-// :FeatureName: Support for raw block volumes in the volume plug-ins listed above
+// :FeatureName: Support for raw block volumes in the volume plugins listed above
// include::snippets/technology-preview.adoc[leveloffset=+1]
include::modules/storage-persistent-storage-block-volume-examples.adoc[leveloffset=+2]
diff --git a/updating/updating-restricted-network-cluster.adoc b/updating/updating-restricted-network-cluster.adoc
index fb110dd760..a4271fee5f 100644
--- a/updating/updating-restricted-network-cluster.adoc
+++ b/updating/updating-restricted-network-cluster.adoc
@@ -49,7 +49,7 @@ You must mirror container images onto a mirror registry before you can update a
There are two supported methods for mirroring images onto a mirror registry:
-* Using the oc-mirror OpenShift CLI (`oc`) plug-in
+* Using the oc-mirror OpenShift CLI (`oc`) plugin
* Using the oc adm release mirror command
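+
+For the oc-mirror method, mirroring is driven by an `ImageSetConfiguration` file. The following minimal sketch assumes a 4.11 release channel and a placeholder metadata registry URL; adjust both for your environment.
+
+[source,yaml]
+----
+apiVersion: mirror.openshift.io/v1alpha2
+kind: ImageSetConfiguration
+storageConfig:
+  registry:
+    imageURL: mirror.example.com/oc-mirror-metadata # placeholder
+mirror:
+  platform:
+    channels:
+    - name: stable-4.11 # example channel
+      type: ocp
+----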
@@ -60,7 +60,7 @@ include::modules/update-mirror-repository-oc-mirror.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
-* xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in]
+* xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
include::modules/update-mirror-repository.adoc[leveloffset=+3]
@@ -153,7 +153,7 @@ You must mirror container images onto a mirror registry before you can update a
There are two supported methods for mirroring images onto a mirror registry:
-* Using the oc-mirror OpenShift CLI (`oc`) plug-in
+* Using the oc-mirror OpenShift CLI (`oc`) plugin
* Using the oc adm release mirror command
@@ -169,7 +169,7 @@ include::modules/update-mirror-repository-oc-mirror.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
-* xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plug-in]
+* xref:../installing/disconnected_install/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]
:!context:
:context: updating-restricted-network-cluster
diff --git a/virt/about-virt.adoc b/virt/about-virt.adoc
index 61a2e95f5a..d50708c46b 100644
--- a/virt/about-virt.adoc
+++ b/virt/about-virt.adoc
@@ -24,7 +24,7 @@ include::modules/virt-what-you-can-do-with-virt.adoc[leveloffset=+1]
// This line is attached to the above `virt-what-you-can-do-with-virt` module.
// It is included here in the assembly because of the xref ban.
-You can use {VirtProductName} with the xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes], xref:../networking/openshift_sdn/about-openshift-sdn.adoc#about-openshift-sdn[OpenShift SDN], or one of the other certified network plug-ins listed in link:https://access.redhat.com/articles/5436171[Certified OpenShift CNI Plug-ins].
+You can use {VirtProductName} with the xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes], xref:../networking/openshift_sdn/about-openshift-sdn.adoc#about-openshift-sdn[OpenShift SDN], or one of the other certified network plugins listed in link:https://access.redhat.com/articles/5436171[Certified OpenShift CNI Plug-ins].
You can check your {VirtProductName} cluster for compliance issues by installing the xref:../security/compliance_operator/compliance-operator-understanding.adoc#understanding-compliance[Compliance Operator] and running a scan with the `ocp4-moderate` and `ocp4-moderate-node` xref:../security/compliance_operator/compliance-operator-supported-profiles.adoc#compliance-operator-supported-profiles[profiles]. The Compliance Operator uses OpenSCAP, a link:https://www.nist.gov/[NIST-certified tool], to scan and enforce security policies.
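+
+As a sketch of how such a scan can be requested, the following `ScanSettingBinding` binds the two profiles named above to the default scan setting. The binding name is illustrative.
+
+[source,yaml]
+----
+apiVersion: compliance.openshift.io/v1alpha1
+kind: ScanSettingBinding
+metadata:
+  name: moderate-compliance # illustrative name
+profiles:
+- name: ocp4-moderate
+  kind: Profile
+  apiGroup: compliance.openshift.io/v1alpha1
+- name: ocp4-moderate-node
+  kind: Profile
+  apiGroup: compliance.openshift.io/v1alpha1
+settingsRef:
+  name: default
+  kind: ScanSetting
+  apiGroup: compliance.openshift.io/v1alpha1
+----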
diff --git a/web_console/dynamic-plug-in/deploy-plug-in-cluster.adoc b/web_console/dynamic-plug-in/deploy-plug-in-cluster.adoc
index cdb43c6963..c92a5ab37d 100644
--- a/web_console/dynamic-plug-in/deploy-plug-in-cluster.adoc
+++ b/web_console/dynamic-plug-in/deploy-plug-in-cluster.adoc
@@ -1,16 +1,15 @@
:_content-type: ASSEMBLY
[id="deploy-plug-in-cluster_{context}"]
-= Deploy your plug-in on a cluster
+= Deploy your plugin on a cluster
include::_attributes/common-attributes.adoc[]
:context: deploy-plug-in-cluster
toc::[]
-You can deploy the plug-in to a {product-title} cluster.
+You can deploy the plugin to a {product-title} cluster.
include::modules/build-image-docker.adoc[leveloffset=+1]
include::modules/deployment-plug-in-cluster.adoc[leveloffset=+1]
include::modules/disabling-plug-in-browser.adoc[leveloffset=+1]
-
diff --git a/web_console/dynamic-plug-in/dynamic-plug-in-example.adoc b/web_console/dynamic-plug-in/dynamic-plug-in-example.adoc
index 72b6a056f6..8eda8c9a68 100644
--- a/web_console/dynamic-plug-in/dynamic-plug-in-example.adoc
+++ b/web_console/dynamic-plug-in/dynamic-plug-in-example.adoc
@@ -1,11 +1,11 @@
:_content-type: ASSEMBLY
[id="dynamic-plugin-example_{context}"]
-= Dynamic plug-in example
+= Dynamic plugin example
include::_attributes/common-attributes.adoc[]
:context: dynamic-plug-in-example
toc::[]
-Before working through the example, verify that the plug-in is working by following the steps in xref:../../web_console/dynamic-plug-in/dynamic-plug-ins-get-started.adoc#dynamic-plugin-development_dynamic-plug-ins-get-started[Dynamic plug-in development]
+Before working through the example, verify that the plugin is working by following the steps in xref:../../web_console/dynamic-plug-in/dynamic-plug-ins-get-started.adoc#dynamic-plugin-development_dynamic-plug-ins-get-started[Dynamic plugin development].
include::modules/adding-tab-pods-page.adoc[leveloffset=+1]
diff --git a/web_console/dynamic-plug-in/dynamic-plug-in.adoc b/web_console/dynamic-plug-in/dynamic-plug-in.adoc
index b9c303e304..2839c3394d 100644
--- a/web_console/dynamic-plug-in/dynamic-plug-in.adoc
+++ b/web_console/dynamic-plug-in/dynamic-plug-in.adoc
@@ -1,20 +1,20 @@
:_content-type: ASSEMBLY
[id="overview-of-dynamic-plug-ins_{context}"]
-= Overview of dynamic plug-ins
+= Overview of dynamic plugins
include::_attributes/common-attributes.adoc[]
:context: overview-of-dynamic-plug-ins
toc::[]
[id="dynamic-plug-in-overview"]
-== About dynamic plug-ins
+== About dynamic plugins
-A dynamic plug-in allows you to add custom pages and other extensions to your interface at runtime. The `ConsolePlugin` custom resource registers plug-ins with the console, and a cluster administrator enables plug-ins in the `console-operator` configuration.
+A dynamic plugin allows you to add custom pages and other extensions to your interface at runtime. The `ConsolePlugin` custom resource registers plugins with the console, and a cluster administrator enables plugins in the `console-operator` configuration.
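+
+The following minimal sketch shows both halves of that registration: a `ConsolePlugin` resource pointing at the service that serves the plugin assets, and the `Console` operator configuration that enables it. The names, namespace, and port are placeholders, and the `ConsolePlugin` API version can vary between console releases.
+
+[source,yaml]
+----
+apiVersion: console.openshift.io/v1alpha1 # may differ by console version
+kind: ConsolePlugin
+metadata:
+  name: my-plugin # placeholder
+spec:
+  displayName: My Plugin
+  service:
+    name: my-plugin # service that serves the plugin assets
+    namespace: my-plugin-ns # placeholder
+    port: 9443
+    basePath: /
+---
+apiVersion: operator.openshift.io/v1
+kind: Console
+metadata:
+  name: cluster
+spec:
+  plugins:
+  - my-plugin # enables the plugin in the console operator configuration
+----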
[id="dynamic-plugins-features"]
== Key features
-A dynamic plug-in allows you to make the following customizations to the {product-title} experience:
+A dynamic plugin allows you to make the following customizations to the {product-title} experience:
* Add custom pages.
* Add perspectives beyond administrator and developer.
@@ -23,12 +23,12 @@ A dynamic plug-in allows you to make the following customizations to the {produc
[id="general-plug-in-guidelines"]
== General guidelines
-When creating your plug-in, follow these general guidelines:
+When creating your plugin, follow these general guidelines:
-* link:https://nodejs.org/en/[`Node.js`] and link:https://yarnpkg.com/[`yarn`] are required to build and run your plug-in.
-* Prefix your CSS class names with your plug-in name to avoid collisions. For example, `my-plugin_\_heading` and `my-plugin_\_icon`.
+* link:https://nodejs.org/en/[`Node.js`] and link:https://yarnpkg.com/[`yarn`] are required to build and run your plugin.
+* Prefix your CSS class names with your plugin name to avoid collisions. For example, `my-plugin_\_heading` and `my-plugin_\_icon`.
* Maintain a consistent look, feel, and behavior with other console pages.
-* Follow link:https://www.i18next.com/[react-i18next] localization guidelines when creating your plug-in. You can use the `useTranslation` hook like the one in the following example:
+* Follow link:https://www.i18next.com/[react-i18next] localization guidelines when creating your plugin. You can use the `useTranslation` hook, as shown in the following example:
+
-[source,ymal]
+[source,tsx]
----
@@ -38,12 +38,12 @@ conster Header: React.FC = () => {
};
----
-* Avoid selectors that could affect markup outside of your plug-ins components, such as element selectors. These are not APIs and are subject to change. Using them might break your plug-in. Avoid selectors like element selectors that could affect markup outside of your plug-ins components.
+* Avoid selectors that could affect markup outside of your plugin's components, such as element selectors. These are not APIs and are subject to change, so using them might break your plugin.
[discrete]
== PatternFly 4 guidelines
-When creating your plug-in, follow these guidelines for using PatternFly:
+When creating your plugin, follow these guidelines for using PatternFly:
-* Use link:https://www.patternfly.org/v4/[PatternFly4] components and PatternFly CSS variables. Core PatternFly components are available through the SDK. Using PatternFly components and variables help your plug-in look consistent in future console versions.
-* Make your plug-in accessible by following link:https://www.patternfly.org/v4/accessibility/accessibility-fundamentals/[PatternFly's accessibility fundamentals].
-* Avoid using other CSS libraries such as Bootstrap or Tailwind. They can conflict with PatternFly and will not match the console look and feel.
\ No newline at end of file
+* Use link:https://www.patternfly.org/v4/[PatternFly 4] components and PatternFly CSS variables. Core PatternFly components are available through the SDK. Using PatternFly components and variables helps your plugin look consistent in future console versions.
+* Make your plugin accessible by following link:https://www.patternfly.org/v4/accessibility/accessibility-fundamentals/[PatternFly's accessibility fundamentals].
+* Avoid using other CSS libraries such as Bootstrap or Tailwind. They can conflict with PatternFly and will not match the console look and feel.
diff --git a/web_console/dynamic-plug-in/dynamic-plug-ins-get-started.adoc b/web_console/dynamic-plug-in/dynamic-plug-ins-get-started.adoc
index ced6bd8125..e717ebb895 100644
--- a/web_console/dynamic-plug-in/dynamic-plug-ins-get-started.adoc
+++ b/web_console/dynamic-plug-in/dynamic-plug-ins-get-started.adoc
@@ -1,11 +1,11 @@
:_content-type: ASSEMBLY
[id="getting-started-with-dynamic-plugins_{context}"]
-= Getting started with dynamic plug-ins
+= Getting started with dynamic plugins
include::_attributes/common-attributes.adoc[]
:context: dynamic-plug-ins-get-started
toc::[]
-To get started using the dynamic plug-in, you must set up your environment to write a new {product-title} dynamic plug-in. For an example of how to write a new plug-in, see xref:../../web_console/dynamic-plug-in/dynamic-plug-in-example.html#adding-tab-to-pods-page_dynamic-plug-in-example[Adding a tab to the pods page].
+To get started using the dynamic plugin, you must set up your environment to write a new {product-title} dynamic plugin. For an example of how to write a new plugin, see xref:../../web_console/dynamic-plug-in/dynamic-plug-in-example.adoc#adding-tab-to-pods-page_dynamic-plug-in-example[Adding a tab to the pods page].
include::modules/dynamic-plug-in-development.adoc[leveloffset=+1]
diff --git a/web_console/dynamic-plug-in/dynamic-plug-ins-reference.adoc b/web_console/dynamic-plug-in/dynamic-plug-ins-reference.adoc
index 4fba9fe124..ab5617c40b 100644
--- a/web_console/dynamic-plug-in/dynamic-plug-ins-reference.adoc
+++ b/web_console/dynamic-plug-in/dynamic-plug-ins-reference.adoc
@@ -1,12 +1,12 @@
:_content-type: ASSEMBLY
[id="dynamic-plugins-reference_{context}"]
-= Dynamic plug-in reference
+= Dynamic plugin reference
include::_attributes/common-attributes.adoc[]
:context: dynamic-plug-ins-reference
toc::[]
-You can add extensions that allow you to customize your plug-in. Those extensions are then loaded to the console at run-time.
+You can add extensions that allow you to customize your plugin. Those extensions are then loaded into the console at runtime.
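+
+Each extension is declared in the plugin's extension manifest. As a rough sketch, a declaration that adds a navigation link might look like the following. It is shown here in YAML form for readability; the manifest itself is typically JSON, and the exact extension types come from the console dynamic plugin SDK.
+
+[source,yaml]
+----
+- type: console.navigation/href # SDK extension type, assumed here
+  properties:
+    id: example-link # placeholder
+    name: Example Plugin Page
+    href: /example # route served by the plugin
+----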
include::modules/dynamic-plug-in-sdk-extensions.adoc[leveloffset=+1]
@@ -16,4 +16,4 @@ include::modules/troubleshooting-dynamic-plug-in.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
-xref:../../security/certificates/service-serving-certificate.adoc#understanding-service-serving_service-serving-certificate[Understanding service serving certificates]
\ No newline at end of file
+xref:../../security/certificates/service-serving-certificate.adoc#understanding-service-serving_service-serving-certificate[Understanding service serving certificates]
diff --git a/welcome/oke_about.adoc b/welcome/oke_about.adoc
index 9e0987ae1d..c70f16ad33 100644
--- a/welcome/oke_about.adoc
+++ b/welcome/oke_about.adoc
@@ -131,7 +131,7 @@ Prometheus and offers deep coverage and alerting for common Kubernetes issues.
[[about_oke_standard_infrastructure_services]]
=== Standard infrastructure services
-With an {oke} subscription, you receive support for all storage plug-ins that
+With an {oke} subscription, you receive support for all storage plugins that
{product-title} supports.
In terms of networking, {oke} offers full and
@@ -139,7 +139,7 @@ supported access to the Kubernetes Container Network Interface (CNI) and
therefore allows you to use any third-party SDN that supports {product-title}.
It also allows you to use the included Open vSwitch software defined network to
its fullest extent. {oke} allows you to take full advantage of the OVN
-Kubernetes overlay, Multus, and Multus plug-ins that are supported on
+Kubernetes overlay, Multus, and Multus plugins that are supported on
{product-title}. {oke} allows customers to use a Kubernetes Network Policy to
create microsegmentation between deployed application services on the cluster.
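+
+As a minimal sketch of such microsegmentation, the following NetworkPolicy admits traffic to `app=backend` pods only from `app=frontend` pods in the same namespace. All names and labels are placeholders.
+
+[source,yaml]
+----
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-frontend-to-backend # illustrative name
+spec:
+  podSelector:
+    matchLabels:
+      app: backend # pods the policy protects
+  ingress:
+  - from:
+    - podSelector:
+        matchLabels:
+          app: frontend # only these pods may connect
+----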
@@ -195,7 +195,7 @@ not supported in {oke}.
=== Advanced networking
The standard networking solutions in {product-title} are supported with an
-{oke} subscription. The {product-title} Kubernetes CNI plug-in for automation of
+{oke} subscription. The {product-title} Kubernetes CNI plugin for automation of
multi-tenant network segmentation between {product-title} projects is
entitled for use with {oke}. {oke} offers all the granular control of the
source IP addresses that are used by application services on the cluster.
diff --git a/whats_new/new-features.adoc b/whats_new/new-features.adoc
index 499ed66e5a..6574941f52 100644
--- a/whats_new/new-features.adoc
+++ b/whats_new/new-features.adoc
@@ -238,7 +238,7 @@ The default mode is now `NetworkPolicy`.
[id="ocp-multus"]
=== Multus
-Multus is a meta plug-in for Kubernetes Container Network Interface (CNI), which
+Multus is a meta plugin for Kubernetes Container Network Interface (CNI), which
enables a user to create multiple network interfaces per pod.
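+
+As a brief sketch, attaching an additional interface involves a `NetworkAttachmentDefinition` and a pod annotation that references it. The CNI configuration below, a macvlan attachment on `eth0`, is illustrative.
+
+[source,yaml]
+----
+apiVersion: k8s.cni.cncf.io/v1
+kind: NetworkAttachmentDefinition
+metadata:
+  name: macvlan-net # referenced from the pod annotation
+spec:
+  config: '{ "cniVersion": "0.3.1", "type": "macvlan", "master": "eth0", "ipam": { "type": "dhcp" } }'
+----
+
+A pod then requests the extra interface with the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net`.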
[id="ocp-sriov"]
@@ -250,9 +250,9 @@ attach SR-IOV virtual function (VF) interfaces to Pods in addition to other
network interfaces.
[id="ocp-f5"]
-=== F5 router plug-in support
+=== F5 router plugin support
-F5 router plug-in is no longer supported as part of {product-title} directly.
+The F5 router plugin is no longer supported as part of {product-title} directly.
However, F5 has developed a container connector that replaces the functionality.
It is recommended to work with F5 support to implement their solution.