From 0c9bae67b353dd38e2484cb8f3242c693ee570ce Mon Sep 17 00:00:00 2001 From: Andrea Hoffer Date: Wed, 21 Sep 2022 16:28:05 -0400 Subject: [PATCH] Fixing typos and other misc things --- .../installing_aws/manually-creating-iam.adoc | 2 +- .../manually-creating-iam-azure.adoc | 2 +- .../manually-creating-iam-gcp.adoc | 2 +- logging/cluster-logging-external.adoc | 2 +- modules/apiserversource-kn.adoc | 2 +- modules/apiserversource-yaml.adoc | 2 +- modules/cluster-logging-feature-reference.adoc | 2 +- ...oy-red-hat-openshift-container-storage.adoc | 4 ++-- modules/disabling-plug-in-browser.adoc | 2 +- ...fy-a-disconnected-registry-config-yaml.adoc | 2 +- modules/mirror-registry-localhost-update.adoc | 18 +++++++++--------- .../mirror-registry-remote-host-update.adoc | 16 ++++++++-------- ...s-autoscaling-custom-prometheus-config.adoc | 11 +++++------ modules/nodes-pods-secrets-creating-sa.adoc | 2 +- modules/nw-egress-ips-about.adoc | 2 +- modules/nw-egress-router-about.adoc | 2 +- modules/odc-creating-apiserversource.adoc | 2 +- modules/op-release-notes-1-7.adoc | 4 ++-- modules/ossm-kiali-service-mesh.adoc | 2 +- ...ting-a-cluster-with-customizations-ocm.adoc | 4 ++-- ...serverless-create-default-channel-yaml.adoc | 2 +- modules/serverless-create-func-kn.adoc | 2 +- .../serverless-create-kafka-channel-yaml.adoc | 2 +- .../serverless-creating-broker-annotation.adoc | 2 +- .../serverless-creating-broker-labeling.adoc | 2 +- ...serverless-creating-subscriptions-yaml.adoc | 2 +- .../serverless-deleting-broker-injection.adoc | 2 +- ...serverless-functions-on-cluster-builds.adoc | 2 +- modules/serverless-jaeger-config.adoc | 2 +- modules/serverless-kafka-broker-configmap.adoc | 2 +- ...erless-kafka-broker-tls-default-config.adoc | 2 +- ...rverless-kafka-broker-with-kafka-topic.adoc | 2 +- modules/serverless-kafka-broker.adoc | 2 +- modules/serverless-kafka-sink.adoc | 2 +- modules/serverless-kafka-source-kn.adoc | 2 +- modules/serverless-kafka-tls-channels.adoc | 2 +- modules/serverless-open-telemetry.adoc | 2 +- modules/serverless-ossm-secret-filtering.adoc | 2 +- modules/serverless-ossm-setup.adoc | 2 +- modules/serverless-pingsource-kn.adoc | 2 +- modules/serverless-pingsource-yaml.adoc | 2 +- modules/serverless-sinkbinding-kn.adoc | 2 +- modules/serverless-sinkbinding-yaml.adoc | 2 +- modules/virt-customizing-storage-profile.adoc | 2 +- .../ztp-filtering-ai-crs-using-siteconfig.adoc | 2 +- nodes/pods/nodes-pods-autoscaling-custom.adoc | 16 ++++++++-------- serverless/cli_tools/installing-kn.adoc | 4 ++-- .../develop/serverless-kafka-developer.adoc | 2 +- .../service-account-auto-secret-removed.adoc | 5 ++--- 49 files changed, 80 insertions(+), 82 deletions(-) diff --git a/installing/installing_aws/manually-creating-iam.adoc b/installing/installing_aws/manually-creating-iam.adoc index df706b2ed5..ad80851861 100644 --- a/installing/installing_aws/manually-creating-iam.adoc +++ b/installing/installing_aws/manually-creating-iam.adoc @@ -26,7 +26,7 @@ include::modules/alternatives-to-storing-admin-secrets-in-kube-system.adoc[level include::modules/manually-create-identity-access-management.adoc[leveloffset=+1] [role="_additional-resources"] -.Additional references +.Additional resources * xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console] * xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI] diff 
--git a/installing/installing_azure/manually-creating-iam-azure.adoc b/installing/installing_azure/manually-creating-iam-azure.adoc index 4c7b348ef9..9d568148f8 100644 --- a/installing/installing_azure/manually-creating-iam-azure.adoc +++ b/installing/installing_azure/manually-creating-iam-azure.adoc @@ -17,7 +17,7 @@ include::modules/alternatives-to-storing-admin-secrets-in-kube-system.adoc[level include::modules/manually-create-identity-access-management.adoc[leveloffset=+1] [role="_additional-resources"] -.Additional references +.Additional resources * xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console] * xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI] diff --git a/installing/installing_gcp/manually-creating-iam-gcp.adoc b/installing/installing_gcp/manually-creating-iam-gcp.adoc index ecaabc836c..aad10c6798 100644 --- a/installing/installing_gcp/manually-creating-iam-gcp.adoc +++ b/installing/installing_gcp/manually-creating-iam-gcp.adoc @@ -21,7 +21,7 @@ For a detailed description of all available CCO credential modes and their suppo include::modules/manually-create-identity-access-management.adoc[leveloffset=+1] [role="_additional-resources"] -.Additional references +.Additional resources * xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console] * xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI] diff --git a/logging/cluster-logging-external.adoc b/logging/cluster-logging-external.adoc index 8712d5d73d..db180f65fa 100644 --- a/logging/cluster-logging-external.adoc +++ b/logging/cluster-logging-external.adoc @@ -88,7 +88,7 @@ spec: - logcollector ---- + -. Use the `ccoctl` command to to create a role for AWS using your `CredentialsRequest` CR. With the `CredentialsRequest` object, this `ccoctl` command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in ``//manifests/openshift-logging--credentials.yaml`. This secret file contains the `role_arn` key/value used during authentication with the AWS IAM identity provider. +. Use the `ccoctl` command to create a role for AWS using your `CredentialsRequest` CR. With the `CredentialsRequest` object, this `ccoctl` command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in `//manifests/openshift-logging--credentials.yaml`. This secret file contains the `role_arn` key/value used during authentication with the AWS IAM identity provider. + [source,terminal] ---- diff --git a/modules/apiserversource-kn.adoc b/modules/apiserversource-kn.adoc index c7d9655c7f..4c2a4825c7 100644 --- a/modules/apiserversource-kn.adoc +++ b/modules/apiserversource-kn.adoc @@ -13,7 +13,7 @@ You can use the `kn source apiserver create` command to create an API server sou * The {ServerlessOperatorName} and Knative Eventing are installed on the cluster. 
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). * You have installed the Knative (`kn`) CLI. .Procedure diff --git a/modules/apiserversource-yaml.adoc b/modules/apiserversource-yaml.adoc index a233a2240a..f80dda6839 100644 --- a/modules/apiserversource-yaml.adoc +++ b/modules/apiserversource-yaml.adoc @@ -13,7 +13,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena * The {ServerlessOperatorName} and Knative Eventing are installed on the cluster. * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. * You have created the `default` broker in the same namespace as the one defined in the API server source YAML file. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). .Procedure diff --git a/modules/cluster-logging-feature-reference.adoc b/modules/cluster-logging-feature-reference.adoc index a75f2db66e..64cc7ffcaf 100644 --- a/modules/cluster-logging-feature-reference.adoc +++ b/modules/cluster-logging-feature-reference.adoc @@ -75,7 +75,7 @@ $ oc -n openshift-logging edit ClusterLogging instance | Infra container logs | ✓ | ✓ | Infra journal logs | ✓ | ✓ | Kube API audit logs | ✓ | ✓ -| Openshift API audit logs | ✓ | ✓ +| OpenShift API audit logs | ✓ | ✓ | Open Virtual Network (OVN) audit logs| ✓ | ✓ |=============================================================== diff --git a/modules/deploy-red-hat-openshift-container-storage.adoc b/modules/deploy-red-hat-openshift-container-storage.adoc index 29397a78b1..3d2d547180 100644 --- a/modules/deploy-red-hat-openshift-container-storage.adoc +++ b/modules/deploy-red-hat-openshift-container-storage.adoc @@ -20,8 +20,8 @@ |Instructions on deploying {rh-storage} to local storage on bare metal infrastructure |link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure[Deploying OpenShift Data Foundation 4.9 using bare metal infrastructure] -|Instructions on deploying {rh-storage} on Red Hat {product-title} VMWare vSphere clusters -|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere[Deploying OpenShift Data Foundation 4.9 on VMWare vSphere] +|Instructions on deploying {rh-storage} on Red Hat {product-title} VMware vSphere clusters +|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere[Deploying OpenShift Data Foundation 4.9 on VMware vSphere] |Instructions on deploying {rh-storage} using Amazon Web Services for local or cloud storage |link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_amazon_web_services[Deploying OpenShift Data Foundation 4.9 using Amazon Web Services] diff --git a/modules/disabling-plug-in-browser.adoc b/modules/disabling-plug-in-browser.adoc index a08a975d3f..538352a773 100644 --- a/modules/disabling-plug-in-browser.adoc +++ b/modules/disabling-plug-in-browser.adoc @@ -12,7 +12,7 @@ Console users can use the `disable-plugins` query parameter to disable specific * To disable a specific 
plug-in, remove it from the comma-separated list of plug-in names. -* To disable all plug-ins, leave an an empty string in the `disable-plugins` query parameter. +* To disable all plug-ins, leave an empty string in the `disable-plugins` query parameter. [NOTE] ==== diff --git a/modules/ipi-modify-a-disconnected-registry-config-yaml.adoc b/modules/ipi-modify-a-disconnected-registry-config-yaml.adoc index 51ec267ed2..e0e5052ada 100644 --- a/modules/ipi-modify-a-disconnected-registry-config-yaml.adoc +++ b/modules/ipi-modify-a-disconnected-registry-config-yaml.adoc @@ -18,7 +18,7 @@ The `install-config.yaml` file must contain the disconnected registry node's cert $ pullSecret: '{"auths":{":5000": {"auth": "","email": "you@example.com"}}}'---- ---- + -For ``, specify the registry domain name that you specified in the certificate for your mirror registry, and for ```, specify the base64-encoded user name and password for your mirror registry. +For ``, specify the registry domain name that you specified in the certificate for your mirror registry, and for ``, specify the base64-encoded user name and password for your mirror registry. .. Add the `additionalTrustBundle` parameter and value: + diff --git a/modules/mirror-registry-localhost-update.adoc b/modules/mirror-registry-localhost-update.adoc index 3521e2c662..fbad5fb59b 100644 --- a/modules/mirror-registry-localhost-update.adoc +++ b/modules/mirror-registry-localhost-update.adoc @@ -1,32 +1,32 @@ -// module included in the following assembly: +// module included in the following assembly: // // * installing-mirroring-creating-registry.adoc :_content-type: PROCEDURE [id="mirror-registry-localhost-update_{context}"] -= Updating mirror registry for Red Hat OpenShift from a local host += Updating mirror registry for Red Hat OpenShift from a local host -This procedure explains how to update the _mirror registry for Red Hat OpenShift_ from a local host using the `upgrade` command. Updating to the latest version ensures bug fixes and security vulnerability fixes. +This procedure explains how to update the _mirror registry for Red Hat OpenShift_ from a local host using the `upgrade` command. Updating to the latest version ensures bug fixes and security vulnerability fixes. [IMPORTANT] ==== -When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process. +When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process. ==== .Prerequisites -* You have installed the _mirror registry for Red Hat OpenShift_ on a local host. +* You have installed the _mirror registry for Red Hat OpenShift_ on a local host. -.Procedure +.Procedure -* To upgrade the the _mirror registry for Red Hat OpenShift_ from localhost, enter the following command: +* To upgrade the _mirror registry for Red Hat OpenShift_ from localhost, enter the following command: + [source,terminal] ---- -$ sudo ./mirror-registry upgrade -v +$ sudo ./mirror-registry upgrade -v ---- + [NOTE] ==== Users who upgrade the _mirror registry for Red Hat OpenShift_ with the `./mirror-registry upgrade -v` flag must include the same credentials used when creating their mirror registry. For example, if you installed the _mirror registry for Red Hat OpenShift_ with `--quayHostname ` and `--quayRoot `, you must include that string to properly upgrade the mirror registry. 
-==== \ No newline at end of file +==== diff --git a/modules/mirror-registry-remote-host-update.adoc b/modules/mirror-registry-remote-host-update.adoc index 30b11a6952..d867dc07d8 100644 --- a/modules/mirror-registry-remote-host-update.adoc +++ b/modules/mirror-registry-remote-host-update.adoc @@ -1,25 +1,25 @@ -// module included in the following assembly: +// module included in the following assembly: // // * installing-mirroring-creating-registry.adoc :_content-type: PROCEDURE [id="mirror-registry-remote-host-update_{context}"] -= Updating mirror registry for Red Hat OpenShift from a remote host += Updating mirror registry for Red Hat OpenShift from a remote host -This procedure explains how to update the _mirror registry for Red Hat OpenShift_ from a remote host using the `upgrade` command. Updating to the latest version ensures bug fixes and security vulnerability fixes. +This procedure explains how to update the _mirror registry for Red Hat OpenShift_ from a remote host using the `upgrade` command. Updating to the latest version ensures bug fixes and security vulnerability fixes. [IMPORTANT] ==== -When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process. +When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process. ==== .Prerequisites -* You have installed the _mirror registry for Red Hat OpenShift_ on a remote host. +* You have installed the _mirror registry for Red Hat OpenShift_ on a remote host. -.Procedure +.Procedure -* To upgrade the the _mirror registry for Red Hat OpenShift_ from a remote host, enter the following command: +* To upgrade the _mirror registry for Red Hat OpenShift_ from a remote host, enter the following command: + [source,terminal] ---- @@ -29,4 +29,4 @@ $ sudo ./mirror-registry upgrade -v --targetHostname --targetU [NOTE] ==== Users who upgrade the _mirror registry for Red Hat OpenShift_ with the `./mirror-registry upgrade -v` flag must include the same credentials used when creating their mirror registry. For example, if you installed the _mirror registry for Red Hat OpenShift_ with `--quayHostname ` and `--quayRoot `, you must include that string to properly upgrade the mirror registry. -==== \ No newline at end of file +==== diff --git a/modules/nodes-pods-autoscaling-custom-prometheus-config.adoc b/modules/nodes-pods-autoscaling-custom-prometheus-config.adoc index 70933a287e..d752251fa6 100644 --- a/modules/nodes-pods-autoscaling-custom-prometheus-config.adoc +++ b/modules/nodes-pods-autoscaling-custom-prometheus-config.adoc @@ -26,11 +26,11 @@ You must perform the following tasks, as described in this section: * Monitoring of user-defined workloads must be enabled in {product-title} monitoring, as described in the *Creating a user-defined workload monitoring config map* section. -* The Custom Metrics Autoscaler Operator must be installed. +* The Custom Metrics Autoscaler Operator must be installed. .Procedure -. Change to the project with the the object you want to scale: +. Change to the project with the object you want to scale: + [source,terminal] ---- @@ -101,7 +101,7 @@ spec: <4> Specifies the key in the token to use with the specified parameter. .. Create the CR object: -+ ++ [source,terminal] ---- $ oc create -f .yaml @@ -136,7 +136,7 @@ rules: ---- .. Create the CR object: -+ ++ [source,terminal] ---- $ oc create -f .yaml @@ -167,11 +167,10 @@ subjects: <3> Specifies the name of the service account to bind to the role. 
<4> Specifies the namespace of the object you want to scale. .. Create the CR object: -+ ++ [source,terminal] ---- $ oc create -f .yaml ---- You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in the following sections. To use {product-title} monitoring as the source, in the trigger, or scaler, specify the `prometheus` type and use `\https://thanos-querier.openshift-monitoring.svc.cluster.local:9092` as the `serverAddress`. - diff --git a/modules/nodes-pods-secrets-creating-sa.adoc b/modules/nodes-pods-secrets-creating-sa.adoc index 387d988c95..a20ea49ca1 100644 --- a/modules/nodes-pods-secrets-creating-sa.adoc +++ b/modules/nodes-pods-secrets-creating-sa.adoc @@ -14,7 +14,7 @@ It is recommended to obtain bound service account tokens using the TokenRequest You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a non-expiring token in a readable API object is acceptable to you. -See the Additional references section that follows for information on creating bound service account tokens. +See the Additional resources section that follows for information on creating bound service account tokens. ==== .Procedure diff --git a/modules/nw-egress-ips-about.adoc b/modules/nw-egress-ips-about.adoc index c3ce1e96df..a3d37fe7d6 100644 --- a/modules/nw-egress-ips-about.adoc +++ b/modules/nw-egress-ips-about.adoc @@ -42,7 +42,7 @@ Support for the egress IP address functionality on various platforms is summariz | Platform | Supported | Bare metal | Yes -| VMWare vSphere | Yes +| VMware vSphere | Yes | {rh-openstack-first} | No | Amazon Web Services (AWS) | Yes | Google Cloud Platform (GCP) | Yes diff --git a/modules/nw-egress-router-about.adoc b/modules/nw-egress-router-about.adoc index 3bb3595232..91d4c3e570 100644 --- a/modules/nw-egress-router-about.adoc +++ b/modules/nw-egress-router-about.adoc @@ -105,7 +105,7 @@ If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virt VMware vSphere:: -If you are using VMware vSphere, see the link:https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-3507432E-AFEA-4B6B-B404-17A020575358.html[VMWare documentation for securing vSphere standard switches]. View and change VMWare vSphere default settings by selecting the host virtual switch from the vSphere Web Client. +If you are using VMware vSphere, see the link:https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-3507432E-AFEA-4B6B-B404-17A020575358.html[VMware documentation for securing vSphere standard switches]. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client. Specifically, ensure that the following are enabled: diff --git a/modules/odc-creating-apiserversource.adoc b/modules/odc-creating-apiserversource.adoc index 3ce2a3172c..3bcdb72015 100644 --- a/modules/odc-creating-apiserversource.adoc +++ b/modules/odc-creating-apiserversource.adoc @@ -13,7 +13,7 @@ After Knative Eventing is installed on your cluster, you can create an API serve * You have logged in to the {product-title} web console. * The {ServerlessOperatorName} and Knative Eventing are installed on the cluster. * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). 
.Procedure diff --git a/modules/op-release-notes-1-7.adoc b/modules/op-release-notes-1-7.adoc index 9c72dee86a..a5e5923fa8 100644 --- a/modules/op-release-notes-1-7.adoc +++ b/modules/op-release-notes-1-7.adoc @@ -252,7 +252,7 @@ Pipelines as Code supports the following features: == Deprecated features // Pipelines -* Breaking change: This update removes the `disable-working-directory-overwrite` and `disable-home-env-overwrite` fields from the `TektonConfig` custom resource (CR). As a result, the `TektonConfig` CR no longer automatically sets the `$HOME` environment variable and `workingDir` parameter. You can still set the `$HOME` environment variable and `workingDir` parameter by using the the `env` and `workingDir` fields in the `Task` custom resource definition (CRD). +* Breaking change: This update removes the `disable-working-directory-overwrite` and `disable-home-env-overwrite` fields from the `TektonConfig` custom resource (CR). As a result, the `TektonConfig` CR no longer automatically sets the `$HOME` environment variable and `workingDir` parameter. You can still set the `$HOME` environment variable and `workingDir` parameter by using the `env` and `workingDir` fields in the `Task` custom resource definition (CRD). // https://github.com/tektoncd/pipeline/pull/4587 @@ -427,4 +427,4 @@ With this update, {pipelines-title} General Availability (GA) 1.7.3 is available * Previously, upgrading the {pipelines-title} Operator caused the `pipeline` service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the `pipeline` service account. As a result, secrets attached to the `pipeline` service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly. // link:https://issues.redhat.com/browse/SRVKP-2256 -// Kushagra Kulshreshtha \ No newline at end of file +// Kushagra Kulshreshtha diff --git a/modules/ossm-kiali-service-mesh.adoc b/modules/ossm-kiali-service-mesh.adoc index 869fe75d7f..352322875f 100644 --- a/modules/ossm-kiali-service-mesh.adoc +++ b/modules/ossm-kiali-service-mesh.adoc @@ -13,4 +13,4 @@ Installing Kiali via the Service Mesh on {product-title} differs from community * Ingress has been enabled by default. * Updates have been made to the Kiali ConfigMap. * Updates have been made to the ClusterRole settings for Kiali. -* Do not edit the ConfigMap, because your changes might be overwritten by the {SMProductShortName} or Kiali Operators. Files that the Kiali Operator manages have a `kiali.io/`` label or annotation. Updating the Operator files should be restricted to those users with `cluster-admin` privileges. If you use {product-dedicated}, updating the Operator files should be restricted to those users with `dedicated-admin` privileges. +* Do not edit the ConfigMap, because your changes might be overwritten by the {SMProductShortName} or Kiali Operators. Files that the Kiali Operator manages have a `kiali.io/` label or annotation. Updating the Operator files should be restricted to those users with `cluster-admin` privileges. If you use {product-dedicated}, updating the Operator files should be restricted to those users with `dedicated-admin` privileges. 
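To make the deprecation note above concrete, the following is a minimal sketch of a `Task` that sets the `$HOME` environment variable and the `workingDir` parameter explicitly through the `env` and `workingDir` fields, now that the `TektonConfig` CR no longer sets them automatically. The task name, image, and paths are hypothetical placeholders, not values from this patch:

[source,yaml]
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-task # hypothetical name, for illustration only
spec:
  steps:
  - name: build
    image: registry.access.redhat.com/ubi8/ubi-minimal # placeholder image
    workingDir: /workspace/source # set explicitly; no longer applied automatically
    env:
    - name: HOME # set explicitly; no longer applied automatically
      value: /tekton/home
    script: |
      pwd
      echo "HOME is ${HOME}"
----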
diff --git a/modules/rosa-sts-creating-a-cluster-with-customizations-ocm.adoc b/modules/rosa-sts-creating-a-cluster-with-customizations-ocm.adoc index 2378b35df1..507a56f398 100644 --- a/modules/rosa-sts-creating-a-cluster-with-customizations-ocm.adoc +++ b/modules/rosa-sts-creating-a-cluster-with-customizations-ocm.adoc @@ -210,7 +210,7 @@ By enabling etcd encryption for the key values in etcd, you will incur a perform ==== Only persistent volumes (PVs) created from the default storage class are encrypted by default. -PVs created by using any other storage class are only encrypted if the the storage class is configured to be encrypted. +PVs created by using any other storage class are only encrypted if the storage class is configured to be encrypted. ==== .. Click *Next*. @@ -291,7 +291,7 @@ Alternatively, you can use *Auto* mode to automatically create the Operator role To enable *Auto* mode, the {cluster-manager} IAM role must have administrator capabilities. ==== -. Optional: Specify a *Custom operator roles prefix* for your cluster-specific Operator IAM roles. +. Optional: Specify a *Custom operator roles prefix* for your cluster-specific Operator IAM roles. + [NOTE] ==== diff --git a/modules/serverless-create-default-channel-yaml.adoc b/modules/serverless-create-default-channel-yaml.adoc index ba56ed9c9e..976bc006d7 100644 --- a/modules/serverless-create-default-channel-yaml.adoc +++ b/modules/serverless-create-default-channel-yaml.adoc @@ -11,7 +11,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena .Prerequisites * The {ServerlessOperatorName} and Knative Eventing are installed on the cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. .Procedure diff --git a/modules/serverless-create-func-kn.adoc b/modules/serverless-create-func-kn.adoc index 68e6f3e5ba..1538aa64e6 100644 --- a/modules/serverless-create-func-kn.adoc +++ b/modules/serverless-create-func-kn.adoc @@ -7,7 +7,7 @@ [id="serverless-create-func-kn_{context}"] = Creating functions -Before you can build and deploy a function, you must create it by using the the Knative (`kn`) CLI. You can specify the path, runtime, template, and image registry as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal. +Before you can build and deploy a function, you must create it by using the Knative (`kn`) CLI. You can specify the path, runtime, template, and image registry as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal. .Prerequisites diff --git a/modules/serverless-create-kafka-channel-yaml.adoc b/modules/serverless-create-kafka-channel-yaml.adoc index 392dcb5b8f..86cd7e3caa 100644 --- a/modules/serverless-create-kafka-channel-yaml.adoc +++ b/modules/serverless-create-kafka-channel-yaml.adoc @@ -12,7 +12,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena .Prerequisites * The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your {product-title} cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. 
.Procedure diff --git a/modules/serverless-creating-broker-annotation.adoc b/modules/serverless-creating-broker-annotation.adoc index cc51c150ca..8d252626e1 100644 --- a/modules/serverless-creating-broker-annotation.adoc +++ b/modules/serverless-creating-broker-annotation.adoc @@ -17,7 +17,7 @@ If you delete the broker without having a cluster administrator remove this anno .Prerequisites * The {ServerlessOperatorName} and Knative Eventing are installed on your {product-title} cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. .Procedure diff --git a/modules/serverless-creating-broker-labeling.adoc b/modules/serverless-creating-broker-labeling.adoc index de9efc4d5d..591a33420f 100644 --- a/modules/serverless-creating-broker-labeling.adoc +++ b/modules/serverless-creating-broker-labeling.adoc @@ -16,7 +16,7 @@ Brokers created using this method are not removed if you remove the label. You m .Prerequisites * The {ServerlessOperatorName} and Knative Eventing are installed on your {product-title} cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. ifdef::openshift-dedicated,openshift-rosa[] diff --git a/modules/serverless-creating-subscriptions-yaml.adoc b/modules/serverless-creating-subscriptions-yaml.adoc index 40cef67eaa..dd06dfc8fc 100644 --- a/modules/serverless-creating-subscriptions-yaml.adoc +++ b/modules/serverless-creating-subscriptions-yaml.adoc @@ -11,7 +11,7 @@ After you have created a channel and an event sink, you can create a subscriptio .Prerequisites * The {ServerlessOperatorName} and Knative Eventing are installed on the cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. .Procedure diff --git a/modules/serverless-deleting-broker-injection.adoc b/modules/serverless-deleting-broker-injection.adoc index 33f598726b..1d89ed4301 100644 --- a/modules/serverless-deleting-broker-injection.adoc +++ b/modules/serverless-deleting-broker-injection.adoc @@ -10,7 +10,7 @@ If you create a broker by injection and later want to delete it, you must delete .Prerequisites -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-functions-on-cluster-builds.adoc b/modules/serverless-functions-on-cluster-builds.adoc index 562be0c1de..aefc8c53a0 100644 --- a/modules/serverless-functions-on-cluster-builds.adoc +++ b/modules/serverless-functions-on-cluster-builds.adoc @@ -15,7 +15,7 @@ include::snippets/technology-preview.adoc[leveloffset=+1] * {pipelines-title} must be installed on your cluster. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). * You have installed the Knative (`kn`) CLI. diff --git a/modules/serverless-jaeger-config.adoc b/modules/serverless-jaeger-config.adoc index bfc26026c0..3f6004d521 100644 --- a/modules/serverless-jaeger-config.adoc +++ b/modules/serverless-jaeger-config.adoc @@ -20,7 +20,7 @@ endif::[] * You have installed the {ServerlessOperatorName} and Knative Serving. 
* You have installed the {JaegerName} Operator. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. .Procedure diff --git a/modules/serverless-kafka-broker-configmap.adoc b/modules/serverless-kafka-broker-configmap.adoc index 4c24cb2186..741a518805 100644 --- a/modules/serverless-kafka-broker-configmap.adoc +++ b/modules/serverless-kafka-broker-configmap.adoc @@ -16,7 +16,7 @@ include::snippets/technology-preview.adoc[] * You have cluster or dedicated administrator permissions on {product-title}. * The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource (CR) are installed on your {product-title} cluster. * You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in {product-title}. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-kafka-broker-tls-default-config.adoc b/modules/serverless-kafka-broker-tls-default-config.adoc index 3ea5af9547..a2191df6f5 100644 --- a/modules/serverless-kafka-broker-tls-default-config.adoc +++ b/modules/serverless-kafka-broker-tls-default-config.adoc @@ -15,7 +15,7 @@ _Transport Layer Security_ (TLS) is used by Apache Kafka clients and servers to * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. * You have a Kafka cluster CA certificate stored as a `.pem` file. * You have a Kafka cluster client certificate and a key stored as `.pem` files. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-kafka-broker-with-kafka-topic.adoc b/modules/serverless-kafka-broker-with-kafka-topic.adoc index d601d62a97..d9ed04814f 100644 --- a/modules/serverless-kafka-broker-with-kafka-topic.adoc +++ b/modules/serverless-kafka-broker-with-kafka-topic.adoc @@ -16,7 +16,7 @@ If you want to use a Kafka broker without allowing it to create its own internal * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-kafka-broker.adoc b/modules/serverless-kafka-broker.adoc index 6eb1f75f5f..af2e5ac881 100644 --- a/modules/serverless-kafka-broker.adoc +++ b/modules/serverless-kafka-broker.adoc @@ -14,7 +14,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). 
.Procedure diff --git a/modules/serverless-kafka-sink.adoc b/modules/serverless-kafka-sink.adoc index 3fab69e22a..726024f1ce 100644 --- a/modules/serverless-kafka-sink.adoc +++ b/modules/serverless-kafka-sink.adoc @@ -13,7 +13,7 @@ You can create an event sink called a Kafka sink that sends events to a Kafka to * The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource (CR) are installed on your cluster. * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. * You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-kafka-source-kn.adoc b/modules/serverless-kafka-source-kn.adoc index 30f5ba3206..6b9292e2fc 100644 --- a/modules/serverless-kafka-source-kn.adoc +++ b/modules/serverless-kafka-source-kn.adoc @@ -15,7 +15,7 @@ You can use the `kn source kafka create` command to create a Kafka source by usi * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. * You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. * You have installed the Knative (`kn`) CLI. -* Optional: You have installed the OpenShift (`oc`) CLI if you want to use the verification steps in this procedure. +* Optional: You have installed the OpenShift CLI (`oc`) if you want to use the verification steps in this procedure. .Procedure diff --git a/modules/serverless-kafka-tls-channels.adoc b/modules/serverless-kafka-tls-channels.adoc index f8a6676c24..96e361cc5d 100644 --- a/modules/serverless-kafka-tls-channels.adoc +++ b/modules/serverless-kafka-tls-channels.adoc @@ -15,7 +15,7 @@ _Transport Layer Security_ (TLS) is used by Apache Kafka clients and servers to * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. * You have a Kafka cluster CA certificate stored as a `.pem` file. * You have a Kafka cluster client certificate and a key stored as `.pem` files. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-open-telemetry.adoc b/modules/serverless-open-telemetry.adoc index 9746f2f220..650750383d 100644 --- a/modules/serverless-open-telemetry.adoc +++ b/modules/serverless-open-telemetry.adoc @@ -13,7 +13,7 @@ * You have access to an {product-title} account with cluster administrator access. * You have not yet installed the {ServerlessOperatorName} and Knative Serving. These must be installed after the {DTProductName} installation. * You have installed {DTProductName} by following the {product-title} "Installing distributed tracing" documentation. -* You have installed the OpenShift (`oc`) CLI. +* You have installed the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. 
.Procedure diff --git a/modules/serverless-ossm-secret-filtering.adoc b/modules/serverless-ossm-secret-filtering.adoc index fa04bf8482..0543559120 100644 --- a/modules/serverless-ossm-secret-filtering.adoc +++ b/modules/serverless-ossm-secret-filtering.adoc @@ -51,4 +51,4 @@ spec: sidecar.istio.io/rewriteAppHTTPProbers: "true" name: autoscaler ---- -<1> Adding this annotation injects an enviroment variable, `ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true`, to the `net-istio` controller pod. +<1> Adding this annotation injects an environment variable, `ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true`, to the `net-istio` controller pod. diff --git a/modules/serverless-ossm-setup.adoc b/modules/serverless-ossm-setup.adoc index 9fe8528749..42a1b07809 100644 --- a/modules/serverless-ossm-setup.adoc +++ b/modules/serverless-ossm-setup.adoc @@ -6,7 +6,7 @@ [id="serverless-ossm-setup_{context}"] = Integrating {SMProductShortName} with {ServerlessProductName} -You can integrate {SMProductShortName} with {ServerlessProductName} without using Kourier as the default ingress. To do this, do not install the Knative Serving component before completing the following procedure. There are additional steps required when creating the `KnativeServing` custom resource defintion (CRD) to integrate Knative Serving with {SMProductShortName}, which are not covered in the general Knative Serving installation procedure. This procedure might be useful if you want to integrate {SMProductShortName} as the default and only ingress for your {ServerlessProductName} installation. +You can integrate {SMProductShortName} with {ServerlessProductName} without using Kourier as the default ingress. To do this, do not install the Knative Serving component before completing the following procedure. There are additional steps required when creating the `KnativeServing` custom resource definition (CRD) to integrate Knative Serving with {SMProductShortName}, which are not covered in the general Knative Serving installation procedure. This procedure might be useful if you want to integrate {SMProductShortName} as the default and only ingress for your {ServerlessProductName} installation. .Prerequisites diff --git a/modules/serverless-pingsource-kn.adoc b/modules/serverless-pingsource-kn.adoc index 8e747bb634..988dc94e39 100644 --- a/modules/serverless-pingsource-kn.adoc +++ b/modules/serverless-pingsource-kn.adoc @@ -14,7 +14,7 @@ You can use the `kn source ping create` command to create a ping source by using * The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster. * You have installed the Knative (`kn`) CLI. * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. -* Optional: If you want to use the verification steps for this procedure, install the OpenShift (`oc`) CLI. +* Optional: If you want to use the verification steps for this procedure, install the OpenShift CLI (`oc`). .Procedure diff --git a/modules/serverless-pingsource-yaml.adoc b/modules/serverless-pingsource-yaml.adoc index 3e94cce599..fe866dc355 100644 --- a/modules/serverless-pingsource-yaml.adoc +++ b/modules/serverless-pingsource-yaml.adoc @@ -32,7 +32,7 @@ spec: .Prerequisites * The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). 
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. .Procedure diff --git a/modules/serverless-sinkbinding-kn.adoc b/modules/serverless-sinkbinding-kn.adoc index c044add784..9fa5176156 100644 --- a/modules/serverless-sinkbinding-kn.adoc +++ b/modules/serverless-sinkbinding-kn.adoc @@ -13,7 +13,7 @@ You can use the `kn source binding create` command to create a sink binding by u * The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster. * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. * Install the Knative (`kn`) CLI. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). [NOTE] ==== diff --git a/modules/serverless-sinkbinding-yaml.adoc b/modules/serverless-sinkbinding-yaml.adoc index cff69c7a75..2d10cd96f3 100644 --- a/modules/serverless-sinkbinding-yaml.adoc +++ b/modules/serverless-sinkbinding-yaml.adoc @@ -11,7 +11,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena .Prerequisites * The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster. -* Install the OpenShift (`oc`) CLI. +* Install the OpenShift CLI (`oc`). * You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}. .Procedure diff --git a/modules/virt-customizing-storage-profile.adoc b/modules/virt-customizing-storage-profile.adoc index 2e78560ce2..167c16a588 100644 --- a/modules/virt-customizing-storage-profile.adoc +++ b/modules/virt-customizing-storage-profile.adoc @@ -82,7 +82,7 @@ Cloning strategies can be specified by setting the `cloneStrategy` attribute in [NOTE] ==== -You can also set clone strategies usng the CLI without modifying the default `claimPropertySets` in your YAML `spec` section. +You can also set clone strategies using the CLI without modifying the default `claimPropertySets` in your YAML `spec` section. ==== .Example storage profile diff --git a/modules/ztp-filtering-ai-crs-using-siteconfig.adoc b/modules/ztp-filtering-ai-crs-using-siteconfig.adoc index 1a434e3ab2..e85026cfa2 100644 --- a/modules/ztp-filtering-ai-crs-using-siteconfig.adoc +++ b/modules/ztp-filtering-ai-crs-using-siteconfig.adoc @@ -47,7 +47,7 @@ spec: + The ZTP pipeline skips the `03-sctp-machine-config-worker.yaml` CR during installation. All other CRs in `/source-crs/extra-manifest` are applied. -. Save the `SiteConfig` CR and and push the changes to the site configuration repository. +. Save the `SiteConfig` CR and push the changes to the site configuration repository. + The ZTP pipeline monitors and adjusts what CRs it applies based on the `SiteConfig` filter instructions. diff --git a/nodes/pods/nodes-pods-autoscaling-custom.adoc b/nodes/pods/nodes-pods-autoscaling-custom.adoc index 3243d73a60..45f8936b55 100644 --- a/nodes/pods/nodes-pods-autoscaling-custom.adoc +++ b/nodes/pods/nodes-pods-autoscaling-custom.adoc @@ -6,16 +6,16 @@ include::_attributes/common-attributes.adoc[] toc::[] -As a developer, you can use the custom metrics autoscaler to specify how {product-title} should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not just based on CPU or memory. 
+As a developer, you can use the custom metrics autoscaler to specify how {product-title} should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not just based on CPU or memory. [NOTE] ==== -The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed {product-title} monitoring or an external Prometheus server as the metrics source. +The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed {product-title} monitoring or an external Prometheus server as the metrics source. ==== -// For example, you can scale a database application based on the number of tables in the database, scale another application based on the number of messages in a Kafka topic, or scale based on incoming HTTP requests collected by {product-title} monitoring. +// For example, you can scale a database application based on the number of tables in the database, scale another application based on the number of messages in a Kafka topic, or scale based on incoming HTTP requests collected by {product-title} monitoring. -:FeatureName: The custom metrics autoscaler +:FeatureName: The custom metrics autoscaler include::snippets/technology-preview.adoc[leveloffset=+0] @@ -26,7 +26,7 @@ include::snippets/technology-preview.adoc[leveloffset=+0] include::modules/nodes-pods-autoscaling-custom-about.adoc[leveloffset=+1] -// Hide this topic until the list of supported triggers/scalers is determined +// Hide this topic until the list of supported triggers/scalers is determined // include modules/nodes-pods-autoscaling-custom-metrics.adoc[leveloffset=+1] include::modules/nodes-pods-autoscaling-custom-install.adoc[leveloffset=+1] @@ -39,7 +39,7 @@ include::modules/nodes-pods-autoscaling-custom-trigger.adoc[leveloffset=+1] include::modules/nodes-pods-autoscaling-custom-trigger-auth.adoc[leveloffset=+1] -.Additional references +.Additional resources * For information on {product-title} secrets, see xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[Providing sensitive data to pods]. @@ -47,7 +47,7 @@ include::modules/nodes-pods-autoscaling-custom-creating-trigger-auth.adoc[leveloffset=+1] include::modules/nodes-pods-autoscaling-custom-prometheus-config.adoc[leveloffset=+1] -.Additional references +.Additional resources * For information on enabling monitoring of user-defined workloads, see xref:../../monitoring/configuring-the-monitoring-stack.adoc#creating-user-defined-workload-monitoring-configmap_configuring-the-monitoring-stack[Creating a user-defined workload monitoring config map]. 
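As a sketch of the trigger configuration that the modules above describe, the following hypothetical `ScaledObject` uses the `prometheus` trigger type with {product-title} monitoring as the metrics source. Only the `serverAddress` value comes from the documentation itself; the workload name, namespace, metric, query, threshold, and `TriggerAuthentication` reference are illustrative assumptions:

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: prom-scaledobject # hypothetical name
  namespace: my-namespace # hypothetical namespace
spec:
  scaleTargetRef:
    name: my-deployment # hypothetical deployment to scale
  triggers:
  - type: prometheus # the only trigger type currently supported
    metadata:
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      namespace: my-namespace # namespace in which to scope the query
      metricName: http_requests_total # hypothetical metric name
      query: sum(rate(http_requests_total{job="my-app"}[1m])) # hypothetical query
      threshold: '5'
      authModes: bearer # authenticate by using the service account token created in the previous steps
    authenticationRef:
      name: keda-trigger-auth-prometheus # hypothetical TriggerAuthentication name
----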
@@ -55,7 +55,7 @@ include::modules/nodes-pods-autoscaling-custom-adding.adoc[leveloffset=+1] include::modules/nodes-pods-autoscaling-custom-creating-workload.adoc[leveloffset=+2] -.Additional references +.Additional resources * xref:../../nodes/pods/nodes-pods-autoscaling.adoc#nodes-pods-autoscaling-policies_nodes-pods-autoscaling[Scaling policies] * xref:../../nodes/pods/nodes-pods-autoscaling-custom.adoc#nodes-pods-autoscaling-custom-trigger_nodes-pods-autoscaling-custom[Understanding the custom metrics autoscaler triggers] diff --git a/serverless/cli_tools/installing-kn.adoc b/serverless/cli_tools/installing-kn.adoc index f0c9c6e6fe..aa5acf0c9e 100644 --- a/serverless/cli_tools/installing-kn.adoc +++ b/serverless/cli_tools/installing-kn.adoc @@ -6,10 +6,10 @@ include::_attributes/common-attributes.adoc[] toc::[] -The Knative (`kn`) CLI does not have its own login mechanism. To log in to the cluster, you must install the OpenShift (`oc`) CLI and use the `oc login` command. Installation options for the CLIs may vary depending on your operating system. +The Knative (`kn`) CLI does not have its own login mechanism. To log in to the cluster, you must install the OpenShift CLI (`oc`) and use the `oc login` command. Installation options for the CLIs may vary depending on your operating system. ifdef::openshift-enterprise[] -For more information on installing the OpenShift (`oc`) CLI for your operating system and logging in with `oc`, see the xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[OpenShift CLI getting started] documentation. +For more information on installing the OpenShift CLI (`oc`) for your operating system and logging in with `oc`, see the xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[OpenShift CLI getting started] documentation. endif::[] // need to wait til CLI docs are added to OSD and ROSA for this link to work // TODO: remove this conditional once this is available diff --git a/serverless/develop/serverless-kafka-developer.adoc b/serverless/develop/serverless-kafka-developer.adoc index f1d2da6232..42e9426d53 100644 --- a/serverless/develop/serverless-kafka-developer.adoc +++ b/serverless/develop/serverless-kafka-developer.adoc @@ -32,7 +32,7 @@ See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless- [id="serverless-kafka-developer-source"] == Kafka source -You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the {product-title} web console, the Knative (`kn`) CLI, or by creating a `KafkaSource` object directly as a YAML file and using the OpenShift (`oc`) CLI to apply it. +You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the {product-title} web console, the Knative (`kn`) CLI, or by creating a `KafkaSource` object directly as a YAML file and using the OpenShift CLI (`oc`) to apply it. 
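For the YAML-and-`oc` path that the overview above mentions, a minimal `KafkaSource` sketch might look like the following; the bootstrap server, topic, consumer group, and sink names are hypothetical placeholders:

[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source # hypothetical name
spec:
  consumerGroup: knative-group # hypothetical consumer group ID
  bootstrapServers:
  - my-cluster-kafka-bootstrap.kafka:9092 # hypothetical AMQ Streams bootstrap address
  topics:
  - knative-demo-topic # hypothetical topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # hypothetical Knative service used as the sink
----

Assuming the file is saved as `kafka-source.yaml`, applying it with `oc apply -f kafka-source.yaml` creates the source.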
// dev console include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2] diff --git a/snippets/service-account-auto-secret-removed.adoc b/snippets/service-account-auto-secret-removed.adoc index 6c78d2b1b1..78db78507b 100644 --- a/snippets/service-account-auto-secret-removed.adoc +++ b/snippets/service-account-auto-secret-removed.adoc @@ -4,9 +4,9 @@ [id="auto-generated-sa-token-secrets_{context}"] == About automatically-generated service account token secrets -In {product-version}, {product-title} is adopting an link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#urgent-upgrade-notes-1[enhancement from upstream Kubernetes], which enables the `LegacyServiceAccountTokenNoAutoGeneration` feature by default. As a result, when creating new serivce accounts (SA), a service account token secret is no longer automatically generated. Previously, {product-title} automatically added a service account token to a secret for each new SA. +In {product-version}, {product-title} is adopting an link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#urgent-upgrade-notes-1[enhancement from upstream Kubernetes], which enables the `LegacyServiceAccountTokenNoAutoGeneration` feature by default. As a result, when creating new service accounts (SA), a service account token secret is no longer automatically generated. Previously, {product-title} automatically added a service account token to a secret for each new SA. -However, some features and workloads need service account token secrets to communicate with the Kubernetes API server, for example, the OpenShift Controller Manager. While this requirement will be changed in a future release, it remains in {product-title} {product-version}. As a result, if you need a service account token secret, you must manually use the TokenRequest API to request bound service account tokens or create a service account token secret. +However, some features and workloads need service account token secrets to communicate with the Kubernetes API server, for example, the OpenShift Controller Manager. While this requirement will be changed in a future release, it remains in {product-title} {product-version}. As a result, if you need a service account token secret, you must manually use the TokenRequest API to request bound service account tokens or create a service account token secret. After upgrading to {product-version}, existing service account token secrets are not deleted and continue to function as expected. @@ -14,4 +14,3 @@ After upgrading to {product-version}, existing service account token secrets are ==== In {product-version}, service account token secrets still appear to have been automatically generated. However, instead of creating two secrets per service account, {product-title} now creates one token, which does not work. In a future release, the number will be further reduced to zero. Note that `dockercfg` secrets are still generated and no secrets are deleted during upgrades. ==== -
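As a sketch of the manual alternatives described above, and assuming a hypothetical service account named `build-robot`, you can either request a bound token through the TokenRequest API with a recent `oc` client, or create a service account token secret explicitly:

[source,terminal]
----
$ oc create token build-robot
----

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-token # hypothetical secret name
  annotations:
    kubernetes.io/service-account.name: build-robot # hypothetical service account
type: kubernetes.io/service-account-token # the control plane populates the token after creation
----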