diff --git a/installing/installing_openstack/installing-openstack-user-sr-iov.adoc b/installing/installing_openstack/installing-openstack-user-sr-iov.adoc index d826edf52f..e0fe1362be 100644 --- a/installing/installing_openstack/installing-openstack-user-sr-iov.adoc +++ b/installing/installing_openstack/installing-openstack-user-sr-iov.adoc @@ -45,7 +45,7 @@ include::modules/installation-user-infra-generate-k8s-manifest-ignition.adoc[lev include::modules/installation-osp-converting-ignition-resources.adoc[leveloffset=+1] include::modules/installation-osp-creating-control-plane-ignition.adoc[leveloffset=+1] include::modules/installation-osp-creating-network-resources.adoc[leveloffset=+1] -Optionally, you can use the the `inventory.yaml` file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. +Optionally, you can use the `inventory.yaml` file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. include::modules/installation-osp-deploying-bare-metal-machines.adoc[leveloffset=+2] include::modules/installation-osp-creating-bootstrap-machine.adoc[leveloffset=+1] @@ -67,8 +67,8 @@ include::modules/networking-osp-enabling-vfio-noiommu.adoc[leveloffset=+2] After you apply the machine config to the machine pool, you can xref:../../post_installation_configuration/machine-configuration-tasks.html#checking-mco-status_post-install-machine-configuration-tasks[watch the machine config pool status] to see when the machines are available. ==== -// TODO: If bullet one of Next steps is truly required for this flow, these topics (in full or in part) could be added here rather than linked to. -// This document is quite long, however, and operator installation and configuration should arguably remain in their their own assemblies. +// TODO: If bullet one of Next steps is truly required for this flow, these topics (in full or in part) could be added here rather than linked to. +// This document is quite long, however, and operator installation and configuration should arguably remain in their own assemblies. The cluster is installed and prepared for SR-IOV configuration. You must now perform the SR-IOV configuration tasks in "Next steps". diff --git a/modules/developer-cli-odo-connecting-a-java-application-to-mysql-database.adoc b/modules/developer-cli-odo-connecting-a-java-application-to-mysql-database.adoc index feceeaffbd..e5074d6c26 100644 --- a/modules/developer-cli-odo-connecting-a-java-application-to-mysql-database.adoc +++ b/modules/developer-cli-odo-connecting-a-java-application-to-mysql-database.adoc @@ -9,7 +9,7 @@ To connect your Java application to the database, use the `odo link` command. .Procedure -. Display the list of services: +. Display the list of services: + [source,terminal] ---- @@ -50,7 +50,7 @@ declare -x DATABASE_DB_PASSWORD="samplepwd" declare -x DATABASE_DB_USER="sampleuser" ---- -. Open the URL of your Java application and navigate to the `CreatePerson.xhtml` data entry page. Enter a username and age by using the the form. Click *Save*. +. Open the URL of your Java application and navigate to the `CreatePerson.xhtml` data entry page. Enter a username and age by using the form. Click *Save*. + Note that now you can see the data in the database by clicking the *View Persons Record List* link.
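For reference, the `odo link` step that injects the `DATABASE_*` environment variables shown above is a single command, followed by a push. A minimal sketch, assuming the database is exposed as a service named `mysql` (the service name is illustrative):

[source,terminal]
----
$ odo link mysql
$ odo push
----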
+ diff --git a/modules/developer-cli-odo-creating-a-java-microservice-jpa-application.adoc b/modules/developer-cli-odo-creating-a-java-microservice-jpa-application.adoc index df3ff0c10b..490e43330c 100644 --- a/modules/developer-cli-odo-creating-a-java-microservice-jpa-application.adoc +++ b/modules/developer-cli-odo-creating-a-java-microservice-jpa-application.adoc @@ -29,10 +29,10 @@ $ odo create java-openliberty java-application + [source,terminal] ---- -$ odo push +$ odo push ---- + -The application is now deployed to the cluster. +The application is now deployed to the cluster. . View the status of the cluster by streaming the OpenShift logs to the terminal: + @@ -105,6 +105,6 @@ java-application-8080 Pushed http://java-application-8080.apps-crc.testi + The application is now deployed to the cluster and you can access it by using the URL that is created. -. Use the URL to navigate to the `CreatePerson.xhtml` data entry page and enter a username and age by using the the form. Click *Save*. +. Use the URL to navigate to the `CreatePerson.xhtml` data entry page and enter a username and age by using the form. Click *Save*. + Note that you cannot see the data by clicking the *View Persons Record List* link since your application does not have a database connected yet. diff --git a/modules/gitops-registering-an-additional-oauth-client.adoc b/modules/gitops-registering-an-additional-oauth-client.adoc index 6070caa182..6853092d93 100644 --- a/modules/gitops-registering-an-additional-oauth-client.adoc +++ b/modules/gitops-registering-an-additional-oauth-client.adoc @@ -19,13 +19,13 @@ apiVersion: oauth.openshift.io/v1 metadata: name: keycloak-broker <1> secret: "..." <2> -redirectURIs: +redirectURIs: - "https://keycloak-keycloak.apps.dev-svc-4.7-020201.devcluster.openshift.com/auth/realms/myrealm/broker/openshift-v4/endpoint" <3> -grantMethod: prompt <4> +grantMethod: prompt <4> ') ---- -<1> The name of the OAuth client is used as the `client_id` parameter when making requests to `/oauth/authorize` and ``/oauth/token`. +<1> The name of the OAuth client is used as the `client_id` parameter when making requests to `/oauth/authorize` and `/oauth/token`. <2> The `secret` is used as the client_secret parameter when making requests to `/oauth/token`. -<3> The `redirect_uri` parameter specified in requests to ``/oauth/authorize` and ``/oauth/token` must be equal to or prefixed by one of the URIs listed in the `redirectURIs` parameter value. +<3> The `redirect_uri` parameter specified in requests to `/oauth/authorize` and `/oauth/token` must be equal to or prefixed by one of the URIs listed in the `redirectURIs` parameter value. <4> If the user has not granted access to this client, the `grantMethod` determines which action to take when this client requests tokens. Specify `auto` to automatically approve the grant and retry the request, or `prompt` to prompt the user to approve or deny the grant. diff --git a/modules/images-other-jenkins-env-var.adoc b/modules/images-other-jenkins-env-var.adoc index 6de05b1eb1..b7ca5d8424 100644 --- a/modules/images-other-jenkins-env-var.adoc +++ b/modules/images-other-jenkins-env-var.adoc @@ -83,7 +83,7 @@ log file to persist when a fatal error occurs. The fatal error file is saved at |Default: `false` |`NODEJS_SLAVE_IMAGE` -|Setting this value overrides the image that is used for the default Node.js agent pod configuration. A related image stream tag named `jenkins-agent-nodejs` is in in the project. 
This variable must be set before Jenkins starts the first time for it to have an effect. +|Setting this value overrides the image that is used for the default Node.js agent pod configuration. A related image stream tag named `jenkins-agent-nodejs` is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. |Default Node.js agent image in Jenkins server: `image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-nodejs:latest` |`MAVEN_SLAVE_IMAGE` diff --git a/modules/installation-infrastructure-user-infra.adoc b/modules/installation-infrastructure-user-infra.adoc index f410acfbe2..24350bfc11 100644 --- a/modules/installation-infrastructure-user-infra.adoc +++ b/modules/installation-infrastructure-user-infra.adoc @@ -31,9 +31,9 @@ endif::[] Before you install {product-title} on user-provisioned infrastructure, you must prepare the underlying infrastructure. -This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an {product-title} installation. This includes configuring IP networking and network connectivity for your cluster nodes, +This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an {product-title} installation. This includes configuring IP networking and network connectivity for your cluster nodes, ifdef::ibm-z[] -preparing a web server for the Ignition files, +preparing a web server for the Ignition files, endif::ibm-z[] enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. @@ -68,7 +68,7 @@ If you are not using a DHCP service, the cluster nodes obtain their hostname thr ==== endif::ibm-z[] ifdef::ibm-z-kvm[] -. Choose to perform either a fast track installation of {op-system-first} or a full installation of {op-system-first}. For the full installation you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not not required, however, a DHCP server is required. See sections “Fast-track installation: Creating {op-system-first} machines" and “Full installation: Creating {op-system-first} machines". +. Choose to perform either a fast-track installation of {op-system-first} or a full installation of {op-system-first}. For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast-track installation, an HTTP or HTTPS server is not required; however, a DHCP server is required. See sections “Fast-track installation: Creating {op-system-first} machines” and “Full installation: Creating {op-system-first} machines”. endif::ibm-z-kvm[] . Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the _Networking requirements for user-provisioned infrastructure_ section for details about the requirements.
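Where these steps require an HTTP server to host Ignition files, any static file server that the cluster nodes can reach works for a lab setup. A minimal sketch, assuming the Ignition files are in a local `ignition/` directory and port 8080 is open (the directory and port are illustrative):

[source,terminal]
----
$ cd ignition
$ python3 -m http.server 8080
----

For production installations, serve the files from a hardened web server such as `httpd` instead.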
diff --git a/modules/installation-initializing.adoc b/modules/installation-initializing.adoc index 65c8c2b7e8..1acb2171e7 100644 --- a/modules/installation-initializing.adoc +++ b/modules/installation-initializing.adoc @@ -416,7 +416,7 @@ ifdef::aws+restricted[] + [source,yaml] ---- -subnets: +subnets: - subnet-1 - subnet-2 - subnet-3 diff --git a/modules/ipi-install-creating-the-openshift-manifests.adoc b/modules/ipi-install-creating-the-openshift-manifests.adoc index 0eba00c9ce..412ac63fe7 100644 --- a/modules/ipi-install-creating-the-openshift-manifests.adoc +++ b/modules/ipi-install-creating-the-openshift-manifests.adoc @@ -16,7 +16,7 @@ $ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests ---- INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings -WARNING Discarding the Openshift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated +WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated ---- ifeval::[{product-version} <= 4.3] diff --git a/modules/master-node-sizing.adoc b/modules/master-node-sizing.adoc index 8a1f459360..7ac82c34a6 100644 --- a/modules/master-node-sizing.adoc +++ b/modules/master-node-sizing.adoc @@ -40,7 +40,7 @@ The control plane node resource requirements depend on the number of nodes in th |=== -On a cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails because the remaining two nodes must handle the load in order to be highly available. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the the control plane Operators update. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the master nodes to at least half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the master nodes accordingly. +On a cluster with three masters or control plane nodes, the CPU and memory usage will spike when one of the nodes is stopped, rebooted, or fails because the remaining two nodes must handle the load in order to be highly available. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the master nodes to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the master nodes accordingly. [IMPORTANT] ==== diff --git a/modules/nw-ingress-creating-a-passthrough-route.adoc b/modules/nw-ingress-creating-a-passthrough-route.adoc index 3973ab3050..f6b52b4d74 100644 --- a/modules/nw-ingress-creating-a-passthrough-route.adoc +++ b/modules/nw-ingress-creating-a-passthrough-route.adoc @@ -42,7 +42,7 @@ spec: ---- <1> The name of the object, which is limited to 63 characters. <2> The `*termination*` field is set to `passthrough`. This is the only required `tls` field. -<3> Optional `insecureEdgeTerminationPolicy`. The only valid values are are `None`, `Redirect`, or empty for disabled. +<3> Optional `insecureEdgeTerminationPolicy`.
The only valid values are `None`, `Redirect`, or empty for disabled. + The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication. diff --git a/modules/nw-pod-network-connectivity-check-object.adoc b/modules/nw-pod-network-connectivity-check-object.adoc index 924f08fe1a..429832253b 100644 --- a/modules/nw-pod-network-connectivity-check-object.adoc +++ b/modules/nw-pod-network-connectivity-check-object.adoc @@ -128,7 +128,7 @@ The following table describes the fields for objects in the `status.conditions` [discrete] == Connection log fields -The fields for a connection log entry are described in the following table. The object is used in the the following fields: +The fields for a connection log entry are described in the following table. The object is used in the following fields: * `status.failures[]` * `status.successes[]` diff --git a/modules/oc-compliance-rerunning-scans.adoc b/modules/oc-compliance-rerunning-scans.adoc index 5b0328305e..742ca9b4a8 100644 --- a/modules/oc-compliance-rerunning-scans.adoc +++ b/modules/oc-compliance-rerunning-scans.adoc @@ -17,7 +17,7 @@ $ oc compliance rerun-now <scan-object> <object-name> ---- + * `<scan-object>` can be `compliancescan`, `compliancesuite`, or `scansettingbinding`. -* ``<object-name>` is the name of the given `scan-object`. +* `<object-name>` is the name of the given `scan-object`. + For example, to re-run the scans for the `ScanSettingBinding` object named `my-binding`: + diff --git a/modules/op-interacting-with-pipelines-using-the-developer-perspective.adoc b/modules/op-interacting-with-pipelines-using-the-developer-perspective.adoc index 43f2a9f89c..d9f9497a12 100644 --- a/modules/op-interacting-with-pipelines-using-the-developer-perspective.adoc +++ b/modules/op-interacting-with-pipelines-using-the-developer-perspective.adoc @@ -36,7 +36,7 @@ You can use this information to improve the pipeline workflow and eliminate issu [NOTE] ==== The *Details* section of the *Pipeline Run Details* page displays a *Log Snippet* of the failed pipeline run. *Log Snippet* provides a general error message and a snippet of the log. A link to the *Logs* section provides quick access to the details about the failed run. -The *Log Snippet* is also displayed in the the *Details* section of the *Task Run Details* page. +The *Log Snippet* is also displayed in the *Details* section of the *Task Run Details* page. ==== You can use the Options menu {kebab} to stop a running pipeline, to rerun a pipeline using the same parameters and resources as that of the previous pipeline execution, or to delete a pipeline run. * Click the *Parameters* tab to see the parameters defined in the pipeline. You can also add or edit additional parameters, as required. diff --git a/modules/ossm-rn-new-features.adoc b/modules/ossm-rn-new-features.adoc index cc51fbcb59..237e58f099 100644 --- a/modules/ossm-rn-new-features.adoc +++ b/modules/ossm-rn-new-features.adoc @@ -56,7 +56,7 @@ There are manual steps that must be completed to address CVE-2021-29492 and CVE- [id="manual-updates-cve-2021-29492_{context}"] === Manual updates required by CVE-2021-29492 and CVE-2021-31920 -Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (``%2F` or ``%5C`) could potentially bypass an Istio authorization policy when path-based authorization rules are used.
+Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters (`%2F` or `%5C`) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path `/admin`. A request sent to the URL path `//admin` will NOT be rejected by the authorization policy. diff --git a/modules/ossm-rn-technology-preview.adoc b/modules/ossm-rn-technology-preview.adoc index d0acdebc52..0531253664 100644 --- a/modules/ossm-rn-technology-preview.adoc +++ b/modules/ossm-rn-technology-preview.adoc @@ -16,7 +16,7 @@ These features provide early access to upcoming product features, enabling custo {ProductName} 2.0.1 introduces support for the OVN-Kubernetes network type on {product-title} 4.7. -== WebAsssembly Technology Preview +== WebAssembly Technology Preview {ProductName} 2.0.0 introduces support for WebAssembly extensions to Envoy Proxy. diff --git a/modules/persistent-storage-local-create-cr-manual.adoc b/modules/persistent-storage-local-create-cr-manual.adoc index 2e12dd95ab..065a6be4be 100644 --- a/modules/persistent-storage-local-create-cr-manual.adoc +++ b/modules/persistent-storage-local-create-cr-manual.adoc @@ -10,7 +10,7 @@ Local volumes cannot be created by dynamic provisioning. Instead, persistent vol [IMPORTANT] ==== Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. -The Local Storage Operator is recommmended for automating the life cycle of devices when provisioning local PVs. +The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. ==== .Prerequisites diff --git a/modules/persistent-storage-rhv.adoc b/modules/persistent-storage-rhv.adoc index fcbfe04fd6..572aae2c4b 100644 --- a/modules/persistent-storage-rhv.adoc +++ b/modules/persistent-storage-rhv.adoc @@ -11,7 +11,7 @@ When you create a `PersistentVolumeClaim` (PVC) object, {product-title} provisio .Procedure -* If you are using the we console to to dynamically create a persistent volume on {rh-virtualization} : +* If you are using the web console to dynamically create a persistent volume on {rh-virtualization}: + . In the {product-title} console, click *Storage* -> *Persistent Volume Claims*. . In the persistent volume claims overview, click *Create Persistent Volume Claim*. diff --git a/modules/sandboxed-containers-uninstalling-kata-runtime.adoc b/modules/sandboxed-containers-uninstalling-kata-runtime.adoc index bc2720862d..8611585460 100644 --- a/modules/sandboxed-containers-uninstalling-kata-runtime.adoc +++ b/modules/sandboxed-containers-uninstalling-kata-runtime.adoc @@ -6,7 +6,7 @@ = Uninstalling the Kata runtime -This section describes how to remove and uninstall the `kata` runtime and all its related resources, such as CRI-O config and `RuntimeClass`, from from your cluster. +This section describes how to remove and uninstall the `kata` runtime and all its related resources, such as CRI-O config and `RuntimeClass`, from your cluster. .Procedure diff --git a/modules/ssh-agent-using.adoc b/modules/ssh-agent-using.adoc index 7f674b4d5b..3d7d551875 100644 --- a/modules/ssh-agent-using.adoc +++ b/modules/ssh-agent-using.adoc @@ -158,7 +158,7 @@ endif::openshift-origin[] ---- $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> <1> ---- -<1> Specify the path and file name, such as `~/.ssh/id_rsa`, of the new SSH key.
If you have an existing key pair, ensure your public key is in your `~/.ssh` directory. + [NOTE] ==== @@ -176,7 +176,7 @@ For example, run the following to view the `~/.ssh/id_rsa.pub` public key: + [source,termanal] ---- -$ cat ~/.ssh/id_rsa.pub +$ cat ~/.ssh/id_rsa.pub ---- . Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command. diff --git a/modules/update-service-configure-cvo.adoc b/modules/update-service-configure-cvo.adoc index a1b46d06b2..057392f786 100644 --- a/modules/update-service-configure-cvo.adoc +++ b/modules/update-service-configure-cvo.adoc @@ -6,7 +6,7 @@ After the OpenShift Update Service Operator has been installed and the OpenShift .Prerequisites * The OpenShift Update Service Operator has been installed. -* The Openshift Update Service graph-data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. +* The OpenShift Update Service graph-data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. * The current release and update target releases have been mirrored to a locally accessible registry. * The OpenShift Update Service application has been created. diff --git a/modules/update-service-create-service-cli.adoc b/modules/update-service-create-service-cli.adoc index 6dbc5d04b1..959997a164 100644 --- a/modules/update-service-create-service-cli.adoc +++ b/modules/update-service-create-service-cli.adoc @@ -6,7 +6,7 @@ You can use the OpenShift CLI (`oc`) to create an OpenShift Update Service appli .Prerequisites * The OpenShift Update Service Operator has been installed. -* The Openshift Update Service graph-data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. +* The OpenShift Update Service graph-data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. * The current release and update target releases have been mirrored to a locally accessible registry. .Procedure diff --git a/modules/update-service-create-service-web-console.adoc b/modules/update-service-create-service-web-console.adoc index 287d32fc25..1857ff6162 100644 --- a/modules/update-service-create-service-web-console.adoc +++ b/modules/update-service-create-service-web-console.adoc @@ -6,7 +6,7 @@ You can use the {product-title} web console to create an OpenShift Update Servic .Prerequisites * The OpenShift Update Service Operator has been installed. -* The Openshift Update Service graph-data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. +* The OpenShift Update Service graph-data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. * The current release and update target releases have been mirrored to a locally accessible registry.
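Whichever interface you use, the procedure creates an `UpdateService` custom resource. A minimal sketch of such a resource, with illustrative registry paths rather than defaults:

[source,yaml]
----
apiVersion: updateservice.operator.openshift.io/v1
kind: UpdateService
metadata:
  name: service
spec:
  replicas: 2
  releases: registry.example.com/ocp/release <1>
  graphDataImage: registry.example.com/openshift/graph-data:latest <2>
----
<1> The locally accessible registry repository that the release images were mirrored to.
<2> The graph-data container image that was built and pushed as a prerequisite.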
.Procedure diff --git a/serverless/functions/serverless-developing-go-functions.adoc b/serverless/functions/serverless-developing-go-functions.adoc index c6c594edbc..8ef49ed542 100644 --- a/serverless/functions/serverless-developing-go-functions.adoc +++ b/serverless/functions/serverless-developing-go-functions.adoc @@ -21,7 +21,7 @@ include::modules/serverless-go-template.adoc[leveloffset=+1] [id="serverless-developing-go-functions-about-invoking"] == About invoking Golang functions -Golang functions are invoked by using different methods, depending on whether they are triggered by a HTTP request or a CloudEvent. +Golang functions are invoked by using different methods, depending on whether they are triggered by an HTTP request or a CloudEvent. include::modules/serverless-invoking-go-functions-http.adoc[leveloffset=+2] include::modules/serverless-invoking-go-functions-cloudevent.adoc[leveloffset=+2] diff --git a/serverless/networking/serverless-ossm-jwt.adoc b/serverless/networking/serverless-ossm-jwt.adoc index d3fee61224..d4bcd1b668 100644 --- a/serverless/networking/serverless-ossm-jwt.adoc +++ b/serverless/networking/serverless-ossm-jwt.adoc @@ -28,7 +28,7 @@ Adding sidecar injection to pods in system namespaces such as `knative-serving` [IMPORTANT] ==== -You must set the annotation `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in your Knative service as {ServerlessProductName} versions 1.14.0 and later use a HTTP probe as the readiness probe for Knative services by default. +You must set the annotation `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in your Knative service as {ServerlessProductName} versions 1.14.0 and later use an HTTP probe as the readiness probe for Knative services by default. ==== include::modules/serverless-ossm-v2x-jwt.adoc[leveloffset=+1] diff --git a/storage/container_storage_interface/persistent-storage-csi-azure.adoc b/storage/container_storage_interface/persistent-storage-csi-azure.adoc index 44e03989d2..ae83f858e9 100644 --- a/storage/container_storage_interface/persistent-storage-csi-azure.adoc +++ b/storage/container_storage_interface/persistent-storage-csi-azure.adoc @@ -16,7 +16,7 @@ Familiarity with xref:../../storage/understanding-persistent-storage.adoc#unders To create CSI-provisioned PVs that mount to Azure Disk storage assets with this feature is enabled, {product-title} installs the Azure Disk CSI Driver Operator and the Azure Disk CSI driver by default in the `openshift-cluster-csi-drivers` namespace. -* The _Azure Disk CSI Driver Operator_ , after being enabled, provides a storage class named `managed-csi` that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. +* The _Azure Disk CSI Driver Operator_, after being enabled, provides a storage class named `managed-csi` that you can use to create persistent volume claims (PVCs). The Azure Disk CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand, eliminating the need for cluster administrators to pre-provision storage. * The _Azure Disk CSI driver_ enables you to create and mount Azure Disk PVs. 
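Because the `managed-csi` storage class supports dynamic provisioning, requesting Azure Disk storage through the CSI driver only requires a PVC that references the class. A minimal sketch (the claim name and size are illustrative):

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
----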
diff --git a/welcome/learn_more_about_openshift.adoc b/welcome/learn_more_about_openshift.adoc index 8b2017d18c..98f3e0926c 100644 --- a/welcome/learn_more_about_openshift.adoc +++ b/welcome/learn_more_about_openshift.adoc @@ -16,7 +16,7 @@ Use the following sections to find content to help you learn about and use {prod | link:https://www.openshift.com/blog/enterprise-kubernetes-with-openshift-part-one?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[Enterprise Kubernetes with OpenShift] | link:https://access.redhat.com/articles/4128421[Tested platforms] -| link:https://www.openshift.com/blog?hsLang=en-us[Openshift blog] +| link:https://www.openshift.com/blog?hsLang=en-us[OpenShift blog] | xref:../architecture/architecture.adoc#architecture[Architecture] | xref:../security/container_security/security-understanding.adoc#understanding-security[Security and compliance]