
Add CMA docs on using a per-TriggerAuthentication CA with Prometheus/Kafka triggers

This commit is contained in:
Michael Burke
2024-08-12 15:43:09 -04:00
committed by openshift-cherrypick-robot
parent b66d3efb89
commit 050aa474d5
5 changed files with 106 additions and 33 deletions

View File

@@ -43,6 +43,7 @@ spec:
excludePersistentLag: false <10>
version: '1.0.0' <11>
partitionLimitation: '1,2,10-20,31' <12>
tls: enable <13>
----
<1> Specifies Kafka as the trigger type.
<2> Specifies the name of the Kafka topic on which Kafka is processing the offset lag.
@@ -62,4 +63,4 @@ spec:
* If `false`, the trigger includes all consumer lag in all partitions. This is the default.
<11> Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is `1.0.0`.
<12> Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions.
<13> Optional: Specifies whether to use TLS client authentication for Kafka. The default is `disable`. For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications".
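For orientation, the following is a minimal sketch of a trigger authentication that supplies TLS materials to the Kafka trigger. The secret and key names are assumptions for illustration; the `ca`, `cert`, and `key` parameters are the TLS parameters that the Kafka scaler reads from a trigger authentication.

[source,yaml]
----
kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: keda-kafka-tls-auth # hypothetical name
  namespace: my-namespace
spec:
  secretTargetRef:
  - parameter: ca # CA certificate used to verify the Kafka brokers
    name: keda-kafka-secrets # hypothetical secret that holds the TLS materials
    key: ca-cert.pem
  - parameter: cert # client certificate for TLS client authentication
    name: keda-kafka-secrets
    key: client-cert.pem
  - parameter: key # client key for TLS client authentication
    name: keda-kafka-secrets
    key: client-key.pem
----
The scaled object then points its Kafka trigger at this trigger authentication through `authenticationRef`.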

View File

@@ -6,7 +6,7 @@
[id="nodes-cma-autoscaling-custom-trigger-prom_{context}"]
= Understanding the Prometheus trigger
You can scale pods based on Prometheus metrics, which can use the installed {product-title} monitoring or an external Prometheus server as the metrics source. See "Additional resources" for information on the configurations required to use the {product-title} monitoring as a source for metrics.
You can scale pods based on Prometheus metrics, which can use the installed {product-title} monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use {product-title} monitoring" for information on the configurations required to use the {product-title} monitoring as a source for metrics.
[NOTE]
====
@@ -47,7 +47,11 @@ spec:
<9> Optional: Specifies how the trigger should proceed if the Prometheus target is lost.
* If `true`, the trigger continues to operate if the Prometheus target is lost. This is the default behavior.
* If `false`, the trigger returns an error if the Prometheus target is lost.
<10> Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you use self-signed certificates at the Prometheus endpoint.
* If `true`, the certificate check is performed.
* If `false`, the certificate check is not performed. This is the default behavior.
<10> Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint.
* If `false`, the certificate check is performed. This is the default behavior.
* If `true`, the certificate check is not performed.
+
[IMPORTANT]
====
Skipping the check is not recommended.
====
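In the trigger itself, these two behaviors correspond to the `ignoreNullValues` and `unsafeSsl` fields of the Prometheus trigger metadata. The following minimal sketch assumes the on-cluster Thanos Querier endpoint and an illustrative query, and keeps the certificate check enabled.

[source,yaml]
----
triggers:
- type: prometheus
  metadata:
    serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 # assumed endpoint
    query: sum(rate(http_requests_total{job="test-app"}[1m])) # assumed query
    threshold: '5'
    ignoreNullValues: "false" # return an error if the Prometheus target is lost
    unsafeSsl: "false" # keep the certificate check enabled (recommended)
----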

View File

@@ -15,7 +15,21 @@ Alternatively, to share credentials between objects in multiple namespaces, you
Trigger authentications and cluster trigger authentications use the same configuration. However, a cluster trigger authentication requires an additional `kind` parameter in the authentication reference of the scaled object.
.Example trigger authentication with a secret
.Example secret for Basic authentication
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: my-basic-secret
  namespace: default
data:
  username: "dXNlcm5hbWU=" <1>
  password: "cGFzc3dvcmQ="
----
<1> User name and password to supply to the trigger authentication. The values in a `data` stanza must be base-64 encoded.
.Example trigger authentication using a secret for Basic authentication
[source,yaml]
----
kind: TriggerAuthentication
@@ -25,20 +39,20 @@ metadata:
namespace: my-namespace <1>
spec:
secretTargetRef: <2>
- parameter: user-name <3>
name: my-secret <4>
key: USER_NAME <5>
- parameter: username <3>
name: my-basic-secret <4>
key: username <5>
- parameter: password
name: my-secret
key: USER_PASSWORD
name: my-basic-secret
key: password
----
<1> Specifies the namespace of the object you want to scale.
<2> Specifies that this trigger authentication uses a secret for authorization.
<2> Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
<3> Specifies the authentication parameter to supply by using the secret.
<4> Specifies the name of the secret to use.
<5> Specifies the key in the secret to use with the specified parameter.
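A scaled object consumes the trigger authentication by name through `authenticationRef` on the trigger. A minimal sketch, assuming a Deployment named `my-deployment`, a hypothetical trigger authentication name, and a trigger type that accepts the Basic credentials above:

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject # hypothetical name
  namespace: my-namespace # must match the namespace of the trigger authentication
spec:
  scaleTargetRef:
    name: my-deployment # hypothetical workload to scale
  triggers:
  - type: prometheus # illustrative trigger type
    metadata:
      authModes: basic # trigger-specific settings omitted for brevity
    authenticationRef:
      name: my-basic-triggerauthentication # hypothetical; use the name of the trigger authentication defined above
----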
.Example cluster trigger authentication with a secret
.Example cluster trigger authentication with a secret for Basic authentication
[source,yaml]
----
kind: ClusterTriggerAuthentication
@@ -47,20 +61,75 @@ metadata: <1>
name: secret-cluster-triggerauthentication
spec:
secretTargetRef: <2>
- parameter: user-name <3>
name: secret-name <4>
key: USER_NAME <5>
- parameter: user-password
name: secret-name
key: USER_PASSWORD
- parameter: username <3>
name: my-basic-secret <4>
key: username <5>
- parameter: password
name: my-basic-secret
key: password
----
<1> Note that no namespace is used with a cluster trigger authentication.
<2> Specifies that this trigger authentication uses a secret for authorization.
<2> Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
<3> Specifies the authentication parameter to supply by using the secret.
<4> Specifies the name of the secret to use.
<5> Specifies the key in the secret to use with the specified parameter.
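When the credentials come from a cluster trigger authentication, the authentication reference in the scaled object additionally sets `kind`, as noted above. A minimal sketch of only the relevant trigger stanza; the trigger type and metadata are illustrative assumptions:

[source,yaml]
----
triggers:
- type: prometheus # illustrative trigger type
  metadata:
    authModes: basic # trigger-specific settings omitted for brevity
  authenticationRef:
    name: secret-cluster-triggerauthentication # the cluster trigger authentication defined above
    kind: ClusterTriggerAuthentication # required for a cluster-scoped trigger authentication
----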
.Example trigger authentication with a token
.Example secret with certificate authority (CA) details
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... <1>
  client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... <2>
  client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t...
----
<1> Specifies the TLS CA certificate for authenticating the metrics endpoint. The value must be base-64 encoded.
<2> Specifies the TLS client certificate and key for TLS client authentication. The values must be base-64 encoded.
.Example trigger authentication using a secret for CA details
[source,yaml]
----
kind: TriggerAuthentication
apiVersion: keda.sh/v1alpha1
metadata:
  name: secret-triggerauthentication
  namespace: my-namespace <1>
spec:
  secretTargetRef: <2>
  - parameter: key <3>
    name: my-secret <4>
    key: client-key.pem <5>
  - parameter: ca <6>
    name: my-secret <7>
    key: ca-cert.pem <8>
----
<1> Specifies the namespace of the object you want to scale.
<2> Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
<3> Specifies the type of authentication to use.
<4> Specifies the name of the secret to use.
<5> Specifies the key in the secret to use with the specified parameter.
<6> Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint.
<7> Specifies the name of the secret to use.
<8> Specifies the key in the secret to use with the specified parameter.
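A trigger can then use this custom CA by selecting a TLS authentication mode and referencing the trigger authentication. The following sketch assumes an external Prometheus endpoint and query; depending on the scaler, a client certificate (`cert` parameter) might also be required in the trigger authentication.

[source,yaml]
----
triggers:
- type: prometheus
  metadata:
    serverAddress: https://prometheus.example.com:9090 # assumed off-cluster endpoint
    query: sum(rate(http_requests_total[1m])) # assumed query
    threshold: '5'
    authModes: "tls" # use the key and CA supplied by the trigger authentication
  authenticationRef:
    name: secret-triggerauthentication # the trigger authentication defined above
----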
.Example secret with a bearer token
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" <1>
----
<1> Specifies a bearer token to use with bearer authentication. The value in a `data` stanza must be base-64 encoded.
.Example trigger authentication with a bearer token
[source,yaml]
----
kind: TriggerAuthentication
@@ -71,16 +140,13 @@ metadata:
spec:
secretTargetRef: <2>
- parameter: bearerToken <3>
name: my-token-2vzfq <4>
key: token <5>
- parameter: ca
name: my-token-2vzfq
key: ca.crt
name: my-secret <4>
key: bearerToken <5>
----
<1> Specifies the namespace of the object you want to scale.
<2> Specifies that this trigger authentication uses a secret for authorization.
<3> Specifies the authentication parameter to supply by using the token.
<4> Specifies the name of the token to use.
<2> Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint.
<3> Specifies the type of authentication to use.
<4> Specifies the name of the secret to use.
<5> Specifies the key in the secret to use with the specified parameter.
.Example trigger authentication with an environment variable
@@ -98,7 +164,7 @@ spec:
containerName: my-container <5>
----
<1> Specifies the namespace of the object you want to scale.
<2> Specifies that this trigger authentication uses environment variables for authorization.
<2> Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint.
<3> Specifies the parameter to set with this variable.
<4> Specifies the name of the environment variable.
<5> Optional: Specifies a container that requires authentication. The container must be in the same resource that is referenced by `scaleTargetRef` in the scaled object.
@@ -116,7 +182,7 @@ spec:
provider: aws-eks <3>
----
<1> Specifies the namespace of the object you want to scale.
<2> Specifies that this trigger authentication uses a platform-native pod authentication method for authorization.
<2> Specifies that this trigger authentication uses a platform-native pod authentication method when connecting to the metrics endpoint.
<3> Specifies a pod identity. Supported values are `none`, `azure`, `gcp`, `aws-eks`, or `aws-kiam`. The default is `none`.
// Remove ifdef after https://github.com/openshift/openshift-docs/pull/62147 merges

View File

@@ -14,6 +14,8 @@ The custom metrics autoscaler currently supports only the Prometheus, CPU, memor
You use a `ScaledObject` or `ScaledJob` custom resource to configure triggers for specific objects, as described in the sections that follow.
You can configure a certificate authority xref:../../nodes/cma/nodes-cma-autoscaling-custom-trigger-auth.adoc#nodes-cma-autoscaling-custom-trigger-auth[to use with your scaled objects] or xref:../../nodes/cma/nodes-cma-autoscaling-custom.adoc#nodes-cma-autoscaling-custom-ca_nodes-cma-autoscaling-custom[for all scalers in the cluster].
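For orientation before the trigger-specific modules, the following is a minimal `ScaledObject` sketch; the workload name and the threshold value are assumptions.

[source,yaml]
----
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject
  namespace: my-namespace
spec:
  scaleTargetRef:
    name: my-deployment # workload whose replica count the autoscaler manages
  minReplicaCount: 1 # CPU and memory triggers cannot scale to zero
  maxReplicaCount: 10
  triggers:
  - type: cpu # one of the supported trigger types: prometheus, cpu, memory, kafka
    metricType: Utilization
    metadata:
      value: "60" # scale out when average CPU utilization exceeds 60%
----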
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other

View File

@@ -56,7 +56,7 @@ image::564_OpenShift_Custom_Metrics_Autoscaler_0224.png[Custom metrics autoscale
[id="nodes-cma-autoscaling-custom-ca_{context}"]
== Custom CA certificates for the Custom Metrics Autoscaler
By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificate to connect to on-cluster services.
By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services.
If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the `KedaController` custom resource as described in xref:../../nodes/cma/nodes-cma-autoscaling-custom-install.adoc#nodes-cma-autoscaling-custom-install[Installing the custom metrics autoscaler]. The Operator loads those certificates on start-up and registers them as trusted by the Operator.
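A minimal sketch of such a config map follows; the config map name, key, and namespace are assumptions, and the certificate content is abbreviated. You then reference the config map from the `KedaController` custom resource as described in "Installing the custom metrics autoscaler".

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: keda-custom-ca # hypothetical name, referenced from the KedaController custom resource
  namespace: openshift-keda # assumed namespace where the Custom Metrics Autoscaler Operator runs
data:
  custom-ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
----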