mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Merge pull request #102352 from openshift-cherrypick-robot/cherry-pick-102122-to-enterprise-4.21

[enterprise-4.21] OSDOCS-17072-batch5
Darragh Fitzmaurice
2025-11-12 12:38:22 +00:00
committed by GitHub
11 changed files with 153 additions and 52 deletions

View File

@@ -20,7 +20,10 @@ As a cluster administrator, you can modify an existing Ingress Controller to man
[source,terminal]
----
$ SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}")
----
+
[source,terminal]
----
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch="{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"dnsManagementPolicy\":\"Unmanaged\", \"scope\":\"${SCOPE}\"}}}}"
ingresscontroller.operator.openshift.io/default patched
----
@@ -32,4 +35,4 @@ ingresscontroller.operator.openshift.io/default patched
$ oc get ingresscontroller <name> -n openshift-ingress-operator -o=jsonpath={.spec.endpointPublishingStrategy.loadBalancer}
----
+
Inspect the output and confirm that `dnsManagementPolicy` is set to `Unmanaged`.
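+
The exact fields vary by platform, but the output should resemble the following illustrative example:
+
.Example output
[source,terminal]
----
{"dnsManagementPolicy":"Unmanaged","scope":"External"}
----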

View File

@@ -49,7 +49,7 @@ data:
----
<1> Set the value to `true` to enable logging and `false` to disable logging. The default value is `false`.
<2> Set the value to `debug`, `info`, `warn`, or `error`. If no value exists for `logLevel`, the log level defaults to `error`.
+
. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
.Verification
@@ -60,14 +60,19 @@ data:
----
$ oc -n openshift-monitoring get pods
----
+
. Run a test query using the following sample commands as a model:
+
[source,terminal]
----
$ token=`oc create token prometheus-k8s -n openshift-monitoring`
----
+
[source,terminal]
----
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
----
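+
A successful query returns a standard Prometheus API JSON payload. The following abbreviated response is illustrative; the labels and values depend on your cluster:
+
.Example output
[source,terminal]
----
{"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"cluster_version"},"value":[1700000000.000,"1"]}]}}
----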
. Run the following command to read the query log:
+
[source,terminal]
@@ -79,6 +84,6 @@ $ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query
====
Because the `thanos-querier` pods are highly available (HA) pods, you might be able to see logs in only one pod.
====
+
. After you examine the logged query information, disable query logging by changing the `enableRequestLogging` value to `false` in the config map.
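+
For example, assuming the setting is defined in the `cluster-monitoring-config` config map, which is the standard location for this option, you can edit it directly:
+
[source,terminal]
----
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
----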

View File

@@ -2,7 +2,6 @@
//
// scalability_and_performance/scaling-worker-latency-profiles.adoc
:_mod-docs-content-type: PROCEDURE
[id="nodes-cluster-worker-latency-profiles-examining_{context}"]
= Example steps for displaying resulting values of workerLatencyProfile
@@ -46,14 +45,22 @@ node-monitor-grace-period:
[source,terminal]
----
$ oc debug node/<worker-node-name>
----
+
[source,terminal]
----
$ chroot /host
----
+
[source,terminal]
----
# cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency
----
+
.Example output
[source,terminal]
----
"nodeStatusUpdateFrequency": "10s"
----
These outputs validate the set of timing variables for the Worker Latency Profile.

View File

@@ -12,7 +12,7 @@ By exposing the `/dev/fuse` device to an unprivileged pod, you grant it the capa
. Define the pod with `/dev/fuse` access:
+
.. Create a YAML file named `fuse-builder-pod.yaml` with the following content:
+
[source,yaml]
----
@@ -21,29 +21,30 @@ kind: Pod
metadata:
  name: fuse-builder-pod
  annotations:
    io.kubernetes.cri-o.Devices: "/dev/fuse"
spec:
  containers:
  - name: build-container
    image: quay.io/podman/stable
    command: ["/bin/sh", "-c"]
    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"]
    securityContext:
      runAsUser: 1000
----
+
where:
+
`io.kubernetes.cri-o.Devices`:: The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available.
`image`:: This field specifies a container image that includes `podman` (for example, `quay.io/podman/stable`).
`args`:: This command keeps the container running so you can `exec` into it.
`securityContext`:: This field runs the container as an unprivileged user (for example, `runAsUser: 1000`).
+
[NOTE]
====
Depending on your cluster's Security Context Constraints (SCCs) or other policies, you might need to further adjust the `securityContext` specification, for example, by allowing specific capabilities if `/dev/fuse` alone is not sufficient for `fuse-overlayfs` to operate.
====
+
.. Create the pod by running the following command:
+
[source,terminal]
----
@@ -71,7 +72,15 @@ You are now inside the container. Because the default working directory might no
[source,terminal]
----
$ cd /tmp
----
+
[source,terminal]
----
$ pwd
----
+
.Example output
[source,terminal]
----
/tmp
----
@@ -115,21 +124,21 @@ This should output the content of the `/app/build_info.txt` file and the copied
. Exit the pod and clean up:
+
.. After you are done, exit the shell session in the pod:
+
[source,terminal]
----
$ exit
----
+
.. Delete the pod if it's no longer needed:
+
[source,terminal]
----
$ oc delete pod fuse-builder-pod
----
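+
The command output confirms the deletion, for example:
+
.Example output
[source,terminal]
----
pod "fuse-builder-pod" deleted
----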
+
.. Remove the local YAML file:
+
[source,terminal]
----

View File

@@ -20,9 +20,25 @@ You can create Domain Name Server (DNS) records on a public or private DNS zone
[source,terminal]
----
$ CLIENT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d)
----
+
[source,terminal]
----
$ CLIENT_SECRET=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d)
----
+
[source,terminal]
----
$ RESOURCE_GROUP=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d)
----
+
[source,terminal]
----
$ SUBSCRIPTION_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d)
----
+
[source,terminal]
----
$ TENANT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)
----
@@ -64,7 +80,6 @@ $ az network dns zone list --resource-group "${RESOURCE_GROUP}"
$ az network private-dns zone list -g "${RESOURCE_GROUP}"
----
. Create a YAML file, for example, `external-dns-sample-azure.yaml`, that defines the `ExternalDNS` object:
+
.Example `external-dns-sample-azure.yaml` file
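[source,yaml]
----
# Illustrative sketch only: the API version shown and all placeholder values
# are assumptions; substitute the zone ID and names from your cluster.
apiVersion: externaldns.olm.openshift.io/v1beta1
kind: ExternalDNS
metadata:
  name: sample-azure
spec:
  zones:
  - "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Network/dnszones/<zone_name>"
  source:
    type: Service
    service:
      serviceType:
      - LoadBalancer
    hostnameAnnotation: Allow
----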

View File

@@ -46,7 +46,10 @@ Server:
[source,terminal]
----
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
----
+
[source,terminal]
----
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
----
+
@@ -74,4 +77,4 @@ $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-c
/tmp/your-cacert.txt
----
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.

View File

@@ -15,11 +15,21 @@ Before pushing signatures to an OCI registry, cluster administrators must config
+
[source,terminal]
----
$ export NAMESPACE=<namespace>
----
+
where:
+
`<namespace>`:: The namespace associated with the service account.
+
[source,terminal]
----
$ export SERVICE_ACCOUNT_NAME=<service_account>
----
+
where:
+
`<service_account>`:: The name of the service account.
. Create a Kubernetes secret.
+
@@ -41,14 +51,14 @@ $ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
----
+
If you patch the default `pipeline` service account that {pipelines-title} assigns to all task runs, the {pipelines-title} Operator will override the service account. As a best practice, you can perform the following steps:
+
.. Create a separate service account to assign to a user's task runs.
+
[source,terminal]
----
$ oc create serviceaccount <service_account_name>
----
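+
The output confirms the creation; for example, for a service account named `build-bot`:
+
.Example output
[source,terminal]
----
serviceaccount/build-bot created
----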
+
.. Associate the service account with the task runs by setting the value of the `serviceAccountName` field in the task run template.
+
[source,yaml]
@@ -58,9 +68,12 @@ kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: build-bot
  taskRef:
    name: build-push
...
----
+
where:
+
`serviceAccountName`:: Substitute `build-bot` with the name of the newly created service account.

View File

@@ -36,9 +36,9 @@ $ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
Provide a password when prompted. Cosign stores the resulting private key as part of the `signing-secrets` Kubernetes secret in the `openshift-pipelines` namespace, and writes the public key to the `cosign.pub` local file.
. Configure authentication for the image registry.
+
.. To configure the {tekton-chains} controller for pushing signatures to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.
+
.. To configure authentication for a Kaniko task that builds and pushes an image to the registry, create a Kubernetes secret from the docker `config.json` file that contains the required credentials.
+
[source,terminal]
@@ -54,33 +54,51 @@ $ oc create secret generic <docker_config_secret_name> \ <1>
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'
----
+
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'
----
+
[source,terminal]
----
$ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
----
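+
Optionally, confirm that the values took effect by inspecting the config map data; an illustrative check:
+
[source,terminal]
----
$ oc get configmap chains-config -n openshift-pipelines -o jsonpath='{.data}'
----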
. Start the Kaniko task.
+
.. Apply the Kaniko task to the cluster.
+
[source,terminal]
----
$ oc apply -f examples/kaniko/kaniko.yaml
----
+
where:
+
`<examples/kaniko/kaniko.yaml>`:: Substitute with the URI or file path to your Kaniko task.
+
.. Set the appropriate environment variables.
+
[source,terminal]
----
$ export REGISTRY=<url_of_registry>
----
+
where:
+
`<url_of_registry>`:: Substitute with the URL of the registry where you want to push the image.
+
[source,terminal]
----
$ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json>
----
+
where:
+
`<name_of_the_secret_in_docker_config_json>`:: Substitute with the name of the secret in the docker `config.json` file.
+
.. Start the Kaniko task.
+
[source,terminal]
@@ -109,14 +127,17 @@ $ oc get tr <task_run_name> \ <1>
[source,terminal]
----
$ cosign verify --key cosign.pub $REGISTRY/kaniko-chains
----
+
[source,terminal]
----
$ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
----
. Find the provenance for the image in Rekor.
+
.. Get the digest of the `$REGISTRY/kaniko-chains` image. You can search for it in the task run, or pull the image to extract the digest.
+
.. Search Rekor to find all entries that match the `sha256` digest of the image.
+
[source,terminal]
@@ -132,7 +153,7 @@ $ rekor-cli search --sha <image_digest> <1>
<3> The second matching UUID.
+
The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.
+
.. Check the attestation.
+
[source,terminal]

View File

@@ -13,8 +13,20 @@ Use the Bookinfo sample application to verify that the workload certificates are
[source,terminal]
----
$ sleep 60
----
+
[source,terminal]
----
$ oc -n bookinfo exec "$(oc -n bookinfo get pod -l app=productpage -o jsonpath={.items..metadata.name})" -c istio-proxy -- openssl s_client -showcerts -connect details:9080 > bookinfo-proxy-cert.txt
----
+
[source,terminal]
----
$ sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' bookinfo-proxy-cert.txt > certs.pem
----
+
[source,terminal]
----
$ awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem
----
+
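This command writes each certificate in the chain to its own numbered file, such as `proxy-cert-1.pem` and `proxy-cert-2.pem`, which the later comparison steps use.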
@@ -44,25 +56,27 @@ $ diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt
You should see the following result:
`Files /tmp/root-cert.crt.txt and /tmp/pod-root-cert.crt.txt are identical`
. Verify that the CA certificate is the same as the one specified by the administrator. Replace `<path>` with the path to your certificates.
+
[source,terminal]
----
$ openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt
----
+
Run the following command in the terminal window.
+
[source,terminal]
----
$ openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt
----
+
Compare the certificates by running the following command in the terminal window.
+
[source,terminal]
----
$ diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt
----
+
You should see the following result:
`Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical.`
@@ -72,5 +86,6 @@ You should see the following result:
----
$ openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) ./proxy-cert-1.pem
----
+
You should see the following result:
`./proxy-cert-1.pem: OK`

View File

@@ -38,6 +38,10 @@ $ oc get smcp -o yaml
----
$ oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml
# Edit the smcp-resource.yaml file.
----
+
[source,terminal]
----
$ oc replace -f smcp-resource.yaml
----
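+
If the replacement succeeds, the output resembles the following; the resource name depends on your deployment:
+
.Example output
[source,terminal]
----
servicemeshcontrolplane.maistra.io/<smcp_name> replaced
----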
+

View File

@@ -23,7 +23,10 @@ To use Azure File dynamic provisioning across subscriptions:
[source,terminal]
----
$ sp_id=$(oc -n openshift-cluster-csi-drivers get secret azure-file-credentials -o jsonpath='{.data.azure_client_id}' | base64 --decode)
----
+
[source,terminal]
----
$ az ad sp show --id ${sp_id} --query displayName --output tsv
----
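+
The command prints the display name of the service principal as a single value, for example:
+
.Example output
[source,terminal]
----
<service_principal_display_name>
----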
+
@@ -32,7 +35,10 @@ $ az ad sp show --id ${sp_id} --query displayName --output tsv
[source,terminal]
----
$ mi_id=$(oc -n openshift-cluster-csi-drivers get secret azure-file-credentials -o jsonpath='{.data.azure_client_id}' | base64 --decode)
----
+
[source,terminal]
----
$ az identity list --query "[?clientId=='${mi_id}'].{Name:name}" --output tsv
----
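+
The command prints the name of the managed identity that matches the client ID, for example:
+
.Example output
[source,terminal]
----
<managed_identity_name>
----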