mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Fixing typos and other misc things

Andrea Hoffer
2022-09-21 16:28:05 -04:00
committed by openshift-cherrypick-robot
parent 5720e895a6
commit 0c9bae67b3
49 changed files with 80 additions and 82 deletions


@@ -26,7 +26,7 @@ include::modules/alternatives-to-storing-admin-secrets-in-kube-system.adoc[level
include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
-.Additional references
+.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]


@@ -17,7 +17,7 @@ include::modules/alternatives-to-storing-admin-secrets-in-kube-system.adoc[level
include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
-.Additional references
+.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]


@@ -21,7 +21,7 @@ For a detailed description of all available CCO credential modes and their suppo
include::modules/manually-create-identity-access-management.adoc[leveloffset=+1]
[role="_additional-resources"]
-.Additional references
+.Additional resources
* xref:../../updating/updating-cluster-within-minor.adoc#manually-maintained-credentials-upgrade_updating-cluster-within-minor[Updating a cluster using the web console]
* xref:../../updating/updating-cluster-cli.adoc#manually-maintained-credentials-upgrade_updating-cluster-cli[Updating a cluster using the CLI]


@@ -88,7 +88,7 @@ spec:
- logcollector
----
+
-. Use the `ccoctl` command to to create a role for AWS using your `CredentialsRequest` CR. With the `CredentialsRequest` object, this `ccoctl` command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in ``/<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml`. This secret file contains the `role_arn` key/value used during authentication with the AWS IAM identity provider.
+. Use the `ccoctl` command to create a role for AWS using your `CredentialsRequest` CR. With the `CredentialsRequest` object, this `ccoctl` command creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy that grants permissions to perform operations on CloudWatch resources. This command also creates a YAML configuration file in `/<path_to_ccoctl_output_dir>/manifests/openshift-logging-<your_role_name>-credentials.yaml`. This secret file contains the `role_arn` key/value used during authentication with the AWS IAM identity provider.
+
[source,terminal]
----


@@ -13,7 +13,7 @@ You can use the `kn source apiserver create` command to create an API server sou
* The {ServerlessOperatorName} and Knative Eventing are installed on the cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
* You have installed the Knative (`kn`) CLI.
.Procedure


@@ -13,7 +13,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena
* The {ServerlessOperatorName} and Knative Eventing are installed on the cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have created the `default` broker in the same namespace as the one defined in the API server source YAML file.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
.Procedure


@@ -75,7 +75,7 @@ $ oc -n openshift-logging edit ClusterLogging instance
| Infra container logs | &#10003; | &#10003;
| Infra journal logs | &#10003; | &#10003;
| Kube API audit logs | &#10003; | &#10003;
-| Openshift API audit logs | &#10003; | &#10003;
+| OpenShift API audit logs | &#10003; | &#10003;
| Open Virtual Network (OVN) audit logs| &#10003; | &#10003;
|===============================================================


@@ -20,8 +20,8 @@
|Instructions on deploying {rh-storage} to local storage on bare metal infrastructure
|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure[Deploying OpenShift Data Foundation 4.9 using bare metal infrastructure]
-|Instructions on deploying {rh-storage} on Red Hat {product-title} VMWare vSphere clusters
-|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere[Deploying OpenShift Data Foundation 4.9 on VMWare vSphere]
+|Instructions on deploying {rh-storage} on Red Hat {product-title} VMware vSphere clusters
+|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere[Deploying OpenShift Data Foundation 4.9 on VMware vSphere]
|Instructions on deploying {rh-storage} using Amazon Web Services for local or cloud storage
|link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_amazon_web_services[Deploying OpenShift Data Foundation 4.9 using Amazon Web Services]


@@ -12,7 +12,7 @@ Console users can use the `disable-plugins` query parameter to disable specific
* To disable a specific plug-in(s), remove the plug-in you want to disable from the comma-separated list of plug-in names.
-* To disable all plug-ins, leave an an empty string in the `disable-plugins` query parameter.
+* To disable all plug-ins, leave an empty string in the `disable-plugins` query parameter.
[NOTE]
====


@@ -18,7 +18,7 @@ The `install-config.yaml` file must contain the disconnected registry node's cer
$ pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
----
+
-For `<mirror_host_name>`, specify the registry domain name that you specified in the certificate for your mirror registry, and for `<credentials>``, specify the base64-encoded user name and password for your mirror registry.
+For `<mirror_host_name>`, specify the registry domain name that you specified in the certificate for your mirror registry, and for `<credentials>`, specify the base64-encoded user name and password for your mirror registry.
.. Add the `additionalTrustBundle` parameter and value:
+


@@ -1,32 +1,32 @@
// module included in the following assembly:
//
// * installing-mirroring-creating-registry.adoc
:_content-type: PROCEDURE
[id="mirror-registry-localhost-update_{context}"]
= Updating mirror registry for Red Hat OpenShift from a local host
This procedure explains how to update the _mirror registry for Red Hat OpenShift_ from a local host using the `upgrade` command. Updating to the latest version ensures bug fixes and security vulnerability fixes.
[IMPORTANT]
====
When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process.
====
.Prerequisites
* You have installed the _mirror registry for Red Hat OpenShift_ on a local host.
.Procedure
-* To upgrade the the _mirror registry for Red Hat OpenShift_ from localhost, enter the following command:
+* To upgrade the _mirror registry for Red Hat OpenShift_ from localhost, enter the following command:
+
[source,terminal]
----
$ sudo ./mirror-registry upgrade -v
----
+
[NOTE]
====
Users who upgrade the _mirror registry for Red Hat OpenShift_ with the `./mirror-registry upgrade -v` flag must include the same credentials used when creating their mirror registry. For example, if you installed the _mirror registry for Red Hat OpenShift_ with `--quayHostname <host_example_com>` and `--quayRoot <example_directory_name>`, you must include that string to properly upgrade the mirror registry.
====
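For instance, an upgrade of a registry that was installed with a custom hostname and root directory repeats those values on the command line. This is an illustrative sketch that reuses the placeholders from the note above, not a command taken from this commit:
[source,terminal]
----
$ sudo ./mirror-registry upgrade -v --quayHostname <host_example_com> --quayRoot <example_directory_name>
----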


@@ -1,25 +1,25 @@
// module included in the following assembly:
//
// * installing-mirroring-creating-registry.adoc
:_content-type: PROCEDURE
[id="mirror-registry-remote-host-update_{context}"]
= Updating mirror registry for Red Hat OpenShift from a remote host
This procedure explains how to update the _mirror registry for Red Hat OpenShift_ from a remote host using the `upgrade` command. Updating to the latest version ensures bug fixes and security vulnerability fixes.
[IMPORTANT]
====
When updating, there is intermittent downtime of your mirror registry, as it is restarted during the update process.
====
.Prerequisites
* You have installed the _mirror registry for Red Hat OpenShift_ on a remote host.
.Procedure
-* To upgrade the the _mirror registry for Red Hat OpenShift_ from a remote host, enter the following command:
+* To upgrade the _mirror registry for Red Hat OpenShift_ from a remote host, enter the following command:
+
[source,terminal]
----
@@ -29,4 +29,4 @@ $ sudo ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetU
[NOTE]
====
Users who upgrade the _mirror registry for Red Hat OpenShift_ with the `./mirror-registry upgrade -v` flag must include the same credentials used when creating their mirror registry. For example, if you installed the _mirror registry for Red Hat OpenShift_ with `--quayHostname <host_example_com>` and `--quayRoot <example_directory_name>`, you must include that string to properly upgrade the mirror registry.
====


@@ -26,11 +26,11 @@ You must perform the following tasks, as described in this section:
* Monitoring of user-defined workloads must be enabled in {product-title} monitoring, as described in the *Creating a user-defined workload monitoring config map* section.
* The Custom Metrics Autoscaler Operator must be installed.
.Procedure
-. Change to the project with the the object you want to scale:
+. Change to the project with the object you want to scale:
+
[source,terminal]
----
@@ -101,7 +101,7 @@ spec:
<4> Specifies the key in the token to use with the specified parameter.
.. Create the CR object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
@@ -136,7 +136,7 @@ rules:
----
.. Create the CR object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
@@ -167,11 +167,10 @@ subjects:
<3> Specifies the name of the service account to bind to the role.
<4> Specifies the namespace of the object you want to scale.
.. Create the CR object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in the following sections. To use {product-title} monitoring as the source, in the trigger, or scaler, specify the `prometheus` type and use `\https://thanos-querier.openshift-monitoring.svc.cluster.local:9092` as the `serverAddress`.
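As an illustration of that configuration, a `prometheus` trigger in a scaled object might look similar to the following sketch. The metric name, query, threshold, and namespace shown here are placeholders rather than values taken from this procedure:
[source,yaml]
----
triggers:
- type: prometheus
  metadata:
    serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
    namespace: <project_name>
    metricName: http_requests_total
    query: sum(rate(http_requests_total{job="<job_name>"}[1m]))
    threshold: '5'
    authModes: bearer
----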


@@ -14,7 +14,7 @@ It is recommended to obtain bound service account tokens using the TokenRequest
You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a non-expiring token in a readable API object is acceptable to you.
-See the Additional references section that follows for information on creating bound service account tokens.
+See the Additional resources section that follows for information on creating bound service account tokens.
====
.Procedure


@@ -42,7 +42,7 @@ Support for the egress IP address functionality on various platforms is summariz
| Platform | Supported
| Bare metal | Yes
-| VMWare vSphere | Yes
+| VMware vSphere | Yes
| {rh-openstack-first} | No
| Amazon Web Services (AWS) | Yes
| Google Cloud Platform (GCP) | Yes


@@ -105,7 +105,7 @@ If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virt
VMware vSphere::
-If you are using VMware vSphere, see the link:https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-3507432E-AFEA-4B6B-B404-17A020575358.html[VMWare documentation for securing vSphere standard switches]. View and change VMWare vSphere default settings by selecting the host virtual switch from the vSphere Web Client.
+If you are using VMware vSphere, see the link:https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-3507432E-AFEA-4B6B-B404-17A020575358.html[VMware documentation for securing vSphere standard switches]. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client.
Specifically, ensure that the following are enabled:


@@ -13,7 +13,7 @@ After Knative Eventing is installed on your cluster, you can create an API serve
* You have logged in to the {product-title} web console.
* The {ServerlessOperatorName} and Knative Eventing are installed on the cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
.Procedure


@@ -252,7 +252,7 @@ Pipelines as Code supports the following features:
== Deprecated features
// Pipelines
-* Breaking change: This update removes the `disable-working-directory-overwrite` and `disable-home-env-overwrite` fields from the `TektonConfig` custom resource (CR). As a result, the `TektonConfig` CR no longer automatically sets the `$HOME` environment variable and `workingDir` parameter. You can still set the `$HOME` environment variable and `workingDir` parameter by using the the `env` and `workingDir` fields in the `Task` custom resource definition (CRD).
+* Breaking change: This update removes the `disable-working-directory-overwrite` and `disable-home-env-overwrite` fields from the `TektonConfig` custom resource (CR). As a result, the `TektonConfig` CR no longer automatically sets the `$HOME` environment variable and `workingDir` parameter. You can still set the `$HOME` environment variable and `workingDir` parameter by using the `env` and `workingDir` fields in the `Task` custom resource definition (CRD).
// https://github.com/tektoncd/pipeline/pull/4587
@@ -427,4 +427,4 @@ With this update, {pipelines-title} General Availability (GA) 1.7.3 is available
* Previously, upgrading the {pipelines-title} Operator caused the `pipeline` service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the `pipeline` service account. As a result, secrets attached to the `pipeline` service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly.
// link:https://issues.redhat.com/browse/SRVKP-2256
// Kushagra Kulshreshtha


@@ -13,4 +13,4 @@ Installing Kiali via the Service Mesh on {product-title} differs from community
* Ingress has been enabled by default.
* Updates have been made to the Kiali ConfigMap.
* Updates have been made to the ClusterRole settings for Kiali.
-* Do not edit the ConfigMap, because your changes might be overwritten by the {SMProductShortName} or Kiali Operators. Files that the Kiali Operator manages have a `kiali.io/`` label or annotation. Updating the Operator files should be restricted to those users with `cluster-admin` privileges. If you use {product-dedicated}, updating the Operator files should be restricted to those users with `dedicated-admin` privileges.
+* Do not edit the ConfigMap, because your changes might be overwritten by the {SMProductShortName} or Kiali Operators. Files that the Kiali Operator manages have a `kiali.io/` label or annotation. Updating the Operator files should be restricted to those users with `cluster-admin` privileges. If you use {product-dedicated}, updating the Operator files should be restricted to those users with `dedicated-admin` privileges.


@@ -210,7 +210,7 @@ By enabling etcd encryption for the key values in etcd, you will incur a perform
====
Only persistent volumes (PVs) created from the default storage class are encrypted by default.
-PVs created by using any other storage class are only encrypted if the the storage class is configured to be encrypted.
+PVs created by using any other storage class are only encrypted if the storage class is configured to be encrypted.
====
.. Click *Next*.
@@ -291,7 +291,7 @@ Alternatively, you can use *Auto* mode to automatically create the Operator role
To enable *Auto* mode, the {cluster-manager} IAM role must have administrator capabilities.
====
. Optional: Specify a *Custom operator roles prefix* for your cluster-specific Operator IAM roles.
+
[NOTE]
====


@@ -11,7 +11,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena
.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed on the cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -7,7 +7,7 @@
[id="serverless-create-func-kn_{context}"]
= Creating functions
-Before you can build and deploy a function, you must create it by using the the Knative (`kn`) CLI. You can specify the path, runtime, template, and image registry as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal.
+Before you can build and deploy a function, you must create it by using the Knative (`kn`) CLI. You can specify the path, runtime, template, and image registry as flags on the command line, or use the `-c` flag to start the interactive experience in the terminal.
.Prerequisites


@@ -12,7 +12,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena
.Prerequisites
* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your {product-title} cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -17,7 +17,7 @@ If you delete the broker without having a cluster administrator remove this anno
.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed on your {product-title} cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -16,7 +16,7 @@ Brokers created using this method are not removed if you remove the label. You m
.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed on your {product-title} cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
ifdef::openshift-dedicated,openshift-rosa[]


@@ -11,7 +11,7 @@ After you have created a channel and an event sink, you can create a subscriptio
.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed on the cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -10,7 +10,7 @@ If you create a broker by injection and later want to delete it, you must delete
.Prerequisites
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
.Procedure


@@ -15,7 +15,7 @@ include::snippets/technology-preview.adoc[leveloffset=+1]
* {pipelines-title} must be installed on your cluster.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
* You have installed the Knative (`kn`) CLI.


@@ -20,7 +20,7 @@ endif::[]
* You have installed the {ServerlessOperatorName} and Knative Serving.
* You have installed the {JaegerName} Operator.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -16,7 +16,7 @@ include::snippets/technology-preview.adoc[]
* You have cluster or dedicated administrator permissions on {product-title}.
* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource (CR) are installed on your {product-title} cluster.
* You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in {product-title}.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
.Procedure


@@ -15,7 +15,7 @@ _Transport Layer Security_ (TLS) is used by Apache Kafka clients and servers to
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have a Kafka cluster CA certificate stored as a `.pem` file.
* You have a Kafka cluster client certificate and a key stored as `.pem` files.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
.Procedure


@@ -16,7 +16,7 @@ If you want to use a Kafka broker without allowing it to create its own internal
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
.Procedure


@@ -14,7 +14,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
.Procedure


@@ -13,7 +13,7 @@ You can create an event sink called a Kafka sink that sends events to a Kafka to
* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource (CR) are installed on your cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
.Procedure


@@ -15,7 +15,7 @@ You can use the `kn source kafka create` command to create a Kafka source by usi
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import.
* You have installed the Knative (`kn`) CLI.
-* Optional: You have installed the OpenShift (`oc`) CLI if you want to use the verification steps in this procedure.
+* Optional: You have installed the OpenShift CLI (`oc`) if you want to use the verification steps in this procedure.
.Procedure


@@ -15,7 +15,7 @@ _Transport Layer Security_ (TLS) is used by Apache Kafka clients and servers to
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have a Kafka cluster CA certificate stored as a `.pem` file.
* You have a Kafka cluster client certificate and a key stored as `.pem` files.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
.Procedure


@@ -13,7 +13,7 @@
* You have access to an {product-title} account with cluster administrator access.
* You have not yet installed the {ServerlessOperatorName} and Knative Serving. These must be installed after the {DTProductName} installation.
* You have installed {DTProductName} by following the {product-title} "Installing distributed tracing" documentation.
-* You have installed the OpenShift (`oc`) CLI.
+* You have installed the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -51,4 +51,4 @@ spec:
sidecar.istio.io/rewriteAppHTTPProbers: "true"
name: autoscaler
----
-<1> Adding this annotation injects an enviroment variable, `ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true`, to the `net-istio` controller pod.
+<1> Adding this annotation injects an environment variable, `ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true`, to the `net-istio` controller pod.


@@ -6,7 +6,7 @@
[id="serverless-ossm-setup_{context}"]
= Integrating {SMProductShortName} with {ServerlessProductName}
-You can integrate {SMProductShortName} with {ServerlessProductName} without using Kourier as the default ingress. To do this, do not install the Knative Serving component before completing the following procedure. There are additional steps required when creating the `KnativeServing` custom resource defintion (CRD) to integrate Knative Serving with {SMProductShortName}, which are not covered in the general Knative Serving installation procedure. This procedure might be useful if you want to integrate {SMProductShortName} as the default and only ingress for your {ServerlessProductName} installation.
+You can integrate {SMProductShortName} with {ServerlessProductName} without using Kourier as the default ingress. To do this, do not install the Knative Serving component before completing the following procedure. There are additional steps required when creating the `KnativeServing` custom resource definition (CRD) to integrate Knative Serving with {SMProductShortName}, which are not covered in the general Knative Serving installation procedure. This procedure might be useful if you want to integrate {SMProductShortName} as the default and only ingress for your {ServerlessProductName} installation.
.Prerequisites


@@ -14,7 +14,7 @@ You can use the `kn source ping create` command to create a ping source by using
* The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster.
* You have installed the Knative (`kn`) CLI.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
-* Optional: If you want to use the verification steps for this procedure, install the OpenShift (`oc`) CLI.
+* Optional: If you want to use the verification steps for this procedure, install the OpenShift CLI (`oc`).
.Procedure


@@ -32,7 +32,7 @@ spec:
.Prerequisites
* The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -13,7 +13,7 @@ You can use the `kn source binding create` command to create a sink binding by u
* The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* Install the Knative (`kn`) CLI.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
[NOTE]
====


@@ -11,7 +11,7 @@ Creating Knative resources by using YAML files uses a declarative API, which ena
.Prerequisites
* The {ServerlessOperatorName}, Knative Serving and Knative Eventing are installed on the cluster.
-* Install the OpenShift (`oc`) CLI.
+* Install the OpenShift CLI (`oc`).
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure


@@ -82,7 +82,7 @@ Cloning strategies can be specified by setting the `cloneStrategy` attribute in
[NOTE]
====
-You can also set clone strategies usng the CLI without modifying the default `claimPropertySets` in your YAML `spec` section.
+You can also set clone strategies using the CLI without modifying the default `claimPropertySets` in your YAML `spec` section.
====
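The note above mentions setting a clone strategy from the CLI. A minimal sketch of that approach, assuming the CDI `StorageProfile` resource name matches your storage class and that `copy` is the strategy you want, might look like this:
[source,terminal]
----
$ oc patch storageprofile <storage_class_name> --type=merge -p '{"spec": {"cloneStrategy": "copy"}}'
----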
.Example storage profile


@@ -47,7 +47,7 @@ spec:
+
The ZTP pipeline skips the `03-sctp-machine-config-worker.yaml` CR during installation. All other CRs in `/source-crs/extra-manifest` are applied.
-. Save the `SiteConfig` CR and and push the changes to the site configuration repository.
+. Save the `SiteConfig` CR and push the changes to the site configuration repository.
+
The ZTP pipeline monitors and adjusts what CRs it applies based on the `SiteConfig` filter instructions.


@@ -6,16 +6,16 @@ include::_attributes/common-attributes.adoc[]
toc::[]
As a developer, you can use the custom metrics autoscaler to specify how {product-title} should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not just based on CPU or memory.
[NOTE]
====
The custom metrics autoscaler currently supports only the Prometheus trigger, which can use the installed {product-title} monitoring or an external Prometheus server as the metrics source.
====
// For example, you can scale a database application based on the number of tables in the database, scale another application based on the number of messages in a Kafka topic, or scale based on incoming HTTP requests collected by {product-title} monitoring.
:FeatureName: The custom metrics autoscaler
include::snippets/technology-preview.adoc[leveloffset=+0]
@@ -26,7 +26,7 @@ include::snippets/technology-preview.adoc[leveloffset=+0]
include::modules/nodes-pods-autoscaling-custom-about.adoc[leveloffset=+1]
// Hide this topic until the list of supported triggers/scalers is determined
// include modules/nodes-pods-autoscaling-custom-metrics.adoc[leveloffset=+1]
include::modules/nodes-pods-autoscaling-custom-install.adoc[leveloffset=+1]
@@ -39,7 +39,7 @@ include::modules/nodes-pods-autoscaling-custom-trigger.adoc[leveloffset=+1]
include::modules/nodes-pods-autoscaling-custom-trigger-auth.adoc[leveloffset=+1]
-.Additional references
+.Additional resources
* For information on {product-title} secrets, see xref:../../nodes/pods/nodes-pods-secrets.adoc#nodes-pods-secrets[Providing sensitive data to pods].
@@ -47,7 +47,7 @@ include::modules/nodes-pods-autoscaling-custom-creating-trigger-auth.adoc[levelo
include::modules/nodes-pods-autoscaling-custom-prometheus-config.adoc[leveloffset=+1]
-.Additional references
+.Additional resources
* For information on enabling monitoring of user-defined workloads, see xref:../../monitoring/configuring-the-monitoring-stack.html#creating-user-defined-workload-monitoring-configmap_configuring-the-monitoring-stack[Creating a user-defined workload monitoring config map].
@@ -55,7 +55,7 @@ include::modules/nodes-pods-autoscaling-custom-adding.adoc[leveloffset=+1]
include::modules/nodes-pods-autoscaling-custom-creating-workload.adoc[leveloffset=+2]
-.Additional references
+.Additional resources
* xref:../../nodes/pods/nodes-pods-autoscaling.adoc#nodes-pods-autoscaling-policies_nodes-pods-autoscaling[Scaling policies]
* xref:../../nodes/pods/nodes-pods-autoscaling-custom.adoc#nodes-pods-autoscaling-custom-trigger_nodes-pods-autoscaling-custom[Understanding the custom metrics autoscaler triggers]


@@ -6,10 +6,10 @@ include::_attributes/common-attributes.adoc[]
toc::[]
-The Knative (`kn`) CLI does not have its own login mechanism. To log in to the cluster, you must install the OpenShift (`oc`) CLI and use the `oc login` command. Installation options for the CLIs may vary depending on your operating system.
+The Knative (`kn`) CLI does not have its own login mechanism. To log in to the cluster, you must install the OpenShift CLI (`oc`) and use the `oc login` command. Installation options for the CLIs may vary depending on your operating system.
ifdef::openshift-enterprise[]
-For more information on installing the OpenShift (`oc`) CLI for your operating system and logging in with `oc`, see the xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[OpenShift CLI getting started] documentation.
+For more information on installing the OpenShift CLI (`oc`) for your operating system and logging in with `oc`, see the xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#cli-getting-started[OpenShift CLI getting started] documentation.
endif::[]
// need to wait til CLI docs are added to OSD and ROSA for this link to work
// TODO: remove this conditional once this is available
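For reference, logging in with `oc` typically looks similar to the following sketch; the API server URL, user name, and password are placeholders, not values from this document:
[source,terminal]
----
$ oc login https://api.<cluster_name>.<base_domain>:6443 -u <username> -p <password>
----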


@@ -32,7 +32,7 @@ See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-
[id="serverless-kafka-developer-source"]
== Kafka source
-You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the {product-title} web console, the Knative (`kn`) CLI, or by creating a `KafkaSource` object directly as a YAML file and using the OpenShift (`oc`) CLI to apply it.
+You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the {product-title} web console, the Knative (`kn`) CLI, or by creating a `KafkaSource` object directly as a YAML file and using the OpenShift CLI (`oc`) to apply it.
// dev console
include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2]
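As an illustration of the YAML route mentioned above, a minimal `KafkaSource` object might look like the following sketch; the bootstrap server, topic, consumer group, and sink names are placeholders:
[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: <kafka_source_name>
spec:
  consumerGroup: <consumer_group>
  bootstrapServers:
    - <cluster_kafka_bootstrap>.kafka.svc:9092
  topics:
    - <topic_name>
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: <sink_service_name>
----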


@@ -4,9 +4,9 @@
[id="auto-generated-sa-token-secrets_{context}"]
== About automatically-generated service account token secrets
-In {product-version}, {product-title} is adopting an link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#urgent-upgrade-notes-1[enhancement from upstream Kubernetes], which enables the `LegacyServiceAccountTokenNoAutoGeneration` feature by default. As a result, when creating new serivce accounts (SA), a service account token secret is no longer automatically generated. Previously, {product-title} automatically added a service account token to a secret for each new SA.
+In {product-version}, {product-title} is adopting an link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#urgent-upgrade-notes-1[enhancement from upstream Kubernetes], which enables the `LegacyServiceAccountTokenNoAutoGeneration` feature by default. As a result, when creating new service accounts (SA), a service account token secret is no longer automatically generated. Previously, {product-title} automatically added a service account token to a secret for each new SA.
However, some features and workloads need service account token secrets to communicate with the Kubernetes API server, for example, the OpenShift Controller Manager. While this requirement will be changed in a future release, it remains in {product-title} {product-version}. As a result, if you need a service account token secret, you must manually use the TokenRequest API to request bound service account tokens or create a service account token secret.
After upgrading to {product-version}, existing service account token secrets are not deleted and continue to function as expected.
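A minimal sketch of such a manually created service account token secret follows; the secret and service account names are placeholders:
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
  annotations:
    kubernetes.io/service-account.name: <service_account_name>
type: kubernetes.io/service-account-token
----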
@@ -14,4 +14,3 @@ After upgrading to {product-version}, existing service account token secrets are
====
In {product-version}, service account token secrets still appear to have been automatically generated. Although, instead of creating two secrets per service account, {product-title} now creates one token, which does not work. In a future release, the number will be further reduced to zero. Note that `dockercfg` secrets are still generated and no secrets are deleted during upgrades.
====