diff --git a/modules/configuring-egress-proxy-edns-operator.adoc b/modules/configuring-egress-proxy-edns-operator.adoc
index 21b099957f..2e376d6bcf 100644
--- a/modules/configuring-egress-proxy-edns-operator.adoc
+++ b/modules/configuring-egress-proxy-edns-operator.adoc
@@ -33,15 +33,10 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j
 
 .Verification
 
-* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the `external-dns-operator` deployment by running the following command:
+* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the `external-dns-operator` deployment by running the following command. The command outputs the value of the variable, `trusted-ca`:
 +
 [source,terminal]
 ----
 $ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME
 ----
-+
-.Example output
-[source,terminal]
-----
-trusted-ca
-----
+
diff --git a/modules/k8s-nmstate-deploying-nmstate-CLI.adoc b/modules/k8s-nmstate-deploying-nmstate-CLI.adoc
index fb0e9db4cf..6eda1f9a84 100644
--- a/modules/k8s-nmstate-deploying-nmstate-CLI.adoc
+++ b/modules/k8s-nmstate-deploying-nmstate-CLI.adoc
@@ -72,13 +72,6 @@ EOF
 $ oc get clusterserviceversion -n openshift-nmstate \
 -o custom-columns=Name:.metadata.name,Phase:.status.phase
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                                            Phase
-kubernetes-nmstate-operator.{product-version}.0-202210210157   Succeeded
-----
 
 . Create an instance of the `nmstate` Operator:
 +
@@ -116,19 +109,10 @@ $ oc apply -f .yaml
 
 .Verification
 
-. Verify that all pods for the NMState Operator are in a `Running` state:
+* Verify that all pods for the NMState Operator have the `Running` status by entering the following command:
 +
 [source,terminal]
 ----
 $ oc get pod -n openshift-nmstate
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                      Ready   Status    Restarts   Age
-pod/nmstate-handler-wn55p                 1/1     Running   0          77s
-pod/nmstate-operator-f6bb869b6-v5m92      1/1     Running   0          4m51s
-...
-----
diff --git a/modules/k8s-nmstate-installing-the-kubernetes-nmstate-operator.adoc b/modules/k8s-nmstate-installing-the-kubernetes-nmstate-operator.adoc
index 4c7a23b3be..ba630d6686 100644
--- a/modules/k8s-nmstate-installing-the-kubernetes-nmstate-operator.adoc
+++ b/modules/k8s-nmstate-installing-the-kubernetes-nmstate-operator.adoc
@@ -7,7 +7,7 @@
 [id="installing-the-kubernetes-nmstate-operator-web-console_{context}"]
 = Installing the Kubernetes NMState Operator by using the web console
 
-You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
+You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.
 
 .Prerequisites
 
@@ -38,6 +38,3 @@ The name restriction is a known issue. The instance is a singleton for the entir
 
 . Accept the default settings and click *Create* to create the instance.
 
-.Summary
-
-After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
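The rewritten introduction above states that the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes. A quick way to confirm that the daemon set exists is shown below; a minimal sketch, assuming the daemon set is named `nmstate-handler`, as the pod names in the removed example output suggest:

[source,terminal]
----
$ oc get daemonset nmstate-handler -n openshift-nmstate
----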
diff --git a/modules/nw-autoscaling-ingress-controller.adoc b/modules/nw-autoscaling-ingress-controller.adoc
index c6589df84e..aa2d775f4a 100644
--- a/modules/nw-autoscaling-ingress-controller.adoc
+++ b/modules/nw-autoscaling-ingress-controller.adoc
@@ -193,18 +193,12 @@ $ oc apply -f ingress-autoscaler.yaml
 
 .Verification
 
 * Verify that the default Ingress Controller is scaled out to match the value returned by the `kube-state-metrics` query by running the following commands:
-** Use the `grep` command to search the Ingress Controller YAML file for replicas:
+** Use the `grep` command to search the Ingress Controller YAML file for the number of replicas:
 +
 [source,terminal]
 ----
 $ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:
 ----
-+
-.Example output
-[source,terminal]
-----
-  replicas: 3
-----
 ** Get the pods in the `openshift-ingress` project:
 +
diff --git a/modules/nw-aws-load-balancer-operator.adoc b/modules/nw-aws-load-balancer-operator.adoc
index f33b43e0b9..0c95f99720 100644
--- a/modules/nw-aws-load-balancer-operator.adoc
+++ b/modules/nw-aws-load-balancer-operator.adoc
@@ -15,18 +15,12 @@ The AWS Load Balancer Operator supports the Kubernetes service resource of type
 
 .Procedure
 
-. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a `Subscription` object by running the following command:
+. Deploy the AWS Load Balancer Operator on demand from OperatorHub by creating a `Subscription` object. Then check the name of the install plan, such as `install-zlfbt`, by running the following command:
 +
 [source,terminal]
 ----
 $ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}'
 ----
-+
-.Example output
-[source,terminal]
-----
-install-zlfbt
-----
 
 . Check if the status of an install plan is `Complete` by running the following command:
 +
@@ -34,12 +28,6 @@ install-zlfbt
 ----
 $ oc -n aws-load-balancer-operator get ip --template='{{.status.phase}}{{"\n"}}'
 ----
-+
-.Example output
-[source,terminal]
-----
-Complete
-----
 
 . View the status of the `aws-load-balancer-operator-controller-manager` deployment by running the following command:
 +
diff --git a/modules/nw-bpfman-operator-deploy.adoc b/modules/nw-bpfman-operator-deploy.adoc
index 06ae8655d6..564f5796e9 100644
--- a/modules/nw-bpfman-operator-deploy.adoc
+++ b/modules/nw-bpfman-operator-deploy.adoc
@@ -89,19 +89,5 @@ Replace `<xdp_pod_name>` with the name of an XDP program pod, such as `go-xdp-counte
 ----
 2024/08/13 15:20:06 15016 packets received
 2024/08/13 15:20:06 93581579 bytes received
-
-2024/08/13 15:20:09 19284 packets received
-2024/08/13 15:20:09 99638680 bytes received
-
-2024/08/13 15:20:12 23522 packets received
-2024/08/13 15:20:12 105666062 bytes received
-
-2024/08/13 15:20:15 27276 packets received
-2024/08/13 15:20:15 112028608 bytes received
-
-2024/08/13 15:20:18 29470 packets received
-2024/08/13 15:20:18 112732299 bytes received
-
-2024/08/13 15:20:21 32588 packets received
-2024/08/13 15:20:21 113813781 bytes received
+...
 ----
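Because the repeated counter lines are now truncated with `...`, it can be handier to stream the XDP counter output instead of fetching it repeatedly; a sketch, assuming the same `<xdp_pod_name>` placeholder as the step above and a hypothetical `<namespace>` for wherever the example pod runs:

[source,terminal]
----
$ oc logs -f <xdp_pod_name> -n <namespace>
----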
diff --git a/modules/nw-control-dns-records-public-hosted-zone-aws.adoc b/modules/nw-control-dns-records-public-hosted-zone-aws.adoc
index 6a6596f70c..7b4264ce50 100644
--- a/modules/nw-control-dns-records-public-hosted-zone-aws.adoc
+++ b/modules/nw-control-dns-records-public-hosted-zone-aws.adoc
@@ -10,24 +10,22 @@ You can create DNS records on a public hosted zone for AWS by using the Red Hat
 
 .Procedure
 
-. Check the user. The user must have access to the `kube-system` namespace. If you don’t have the credentials, as you can fetch the credentials from the `kube-system` namespace to use the cloud provider client:
+. Check the user profile, such as `system:admin`, by running the following command. The user must have access to the `kube-system` namespace. If you do not have the credentials, you can fetch them from the `kube-system` namespace to use the cloud provider client, as described in the next step:
 +
 [source,terminal]
 ----
 $ oc whoami
 ----
-+
-.Example output
-[source,terminal]
-----
-system:admin
-----
 
 . Fetch the values from aws-creds secret present in `kube-system` namespace.
 +
 [source,terminal]
 ----
 $ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)
 ----
@@ -45,7 +43,7 @@ openshift-console          console             console-openshift-console.apps.te
 openshift-console          downloads           downloads-openshift-console.apps.testextdnsoperator.apacshift.support          downloads   http   edge/Redirect   None
 ----
 
-. Get the list of dns zones to find the one which corresponds to the previously found route's domain:
+. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried:
 +
 [source,terminal]
 ----
diff --git a/modules/nw-control-dns-records-public-managed-zone-gcp.adoc b/modules/nw-control-dns-records-public-managed-zone-gcp.adoc
index c147282a82..9bba675826 100644
--- a/modules/nw-control-dns-records-public-managed-zone-gcp.adoc
+++ b/modules/nw-control-dns-records-public-managed-zone-gcp.adoc
@@ -57,18 +57,12 @@ openshift-console          console             console-openshift-console.apps.te
 openshift-console          downloads           downloads-openshift-console.apps.test.gcp.example.com          downloads   http   edge/Redirect   None
 ----
 
-. Get a list of managed zones by running the following command:
+. Get a list of managed zones, such as `qe-cvs4g-private-zone test.gcp.example.com`, by running the following command:
 +
 [source,terminal]
 ----
 $ gcloud dns managed-zones list | grep test.gcp.example.com
 ----
-+
-.Example output
-[source,terminal]
-----
-qe-cvs4g-private-zone test.gcp.example.com
-----
 
 . Create a YAML file, for example, `external-dns-sample-gcp.yaml`, that defines the `ExternalDNS` object:
 +
diff --git a/modules/nw-dns-cache-tuning.adoc b/modules/nw-dns-cache-tuning.adoc
index c4c3f1d680..3c7eb4243f 100644
--- a/modules/nw-dns-cache-tuning.adoc
+++ b/modules/nw-dns-cache-tuning.adoc
@@ -45,7 +45,7 @@ spec:
 +
 [source,terminal]
 ----
-oc get configmap/dns-default -n openshift-dns -o yaml
+$ oc get configmap/dns-default -n openshift-dns -o yaml
 ----
 
 . Verify that you see entries that look like the following example:
diff --git a/modules/nw-dns-forward.adoc b/modules/nw-dns-forward.adoc
index 8729a6f8c9..b61540aeb4 100644
--- a/modules/nw-dns-forward.adoc
+++ b/modules/nw-dns-forward.adoc
@@ -81,7 +81,7 @@ spec:
   clusterDomain: cluster.local
   clusterIP: x.y.z.10
   conditions:
-   ...
+...
 ----
 <1> Must comply with the `rfc6335` service name syntax.
 <2> Must conform to the definition of a subdomain in the `rfc1123` service name syntax. The cluster domain, `cluster.local`, is an invalid subdomain for the `zones` field.
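For the DNS cache-tuning and forwarding hunks above, a change can be confirmed end to end by reading the generated Corefile back out of the same config map; a sketch, assuming the `Corefile` data key in the default `dns-default` config map:

[source,terminal]
----
$ oc get configmap/dns-default -n openshift-dns -o jsonpath='{.data.Corefile}'
----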
diff --git a/modules/nw-dns-operator-managementState.adoc b/modules/nw-dns-operator-managementState.adoc
index 7f3e3892c2..576356dff5 100644
--- a/modules/nw-dns-operator-managementState.adoc
+++ b/modules/nw-dns-operator-managementState.adoc
@@ -23,19 +23,13 @@ The following are use cases for changing the DNS Operator `managementState`:
 oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}'
 ----
 
-. Review `managementState` of the DNS Operator using the `jsonpath` command-line JSON parser:
+. Review `managementState` of the DNS Operator by using the `jsonpath` command-line JSON parser:
 +
 [source,terminal]
 ----
 $ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}'
 ----
 +
-.Example output
-[source,terminal]
-----
-"Unmanaged"
-----
-
 [NOTE]
 ====
 You cannot upgrade while the `managementState` is set to `Unmanaged`.
diff --git a/modules/nw-dns-view.adoc b/modules/nw-dns-view.adoc
index 053aa61e8c..80b6506065 100644
--- a/modules/nw-dns-view.adoc
+++ b/modules/nw-dns-view.adoc
@@ -37,17 +37,9 @@ qualified pod and service domain names.
 <2> The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range.
 
-ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
-. To find the service CIDR range of your cluster, use the `oc get` command:
+. To find the service CIDR range of your cluster, such as `172.30.0.0/16`, use the `oc get` command:
 +
 [source,terminal]
 ----
 $ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}'
 ----
-+
-.Example output
-[source,terminal]
-----
-[172.30.0.0/16]
-----
-endif::[]
diff --git a/modules/nw-dpu-configuring-operator.adoc b/modules/nw-dpu-configuring-operator.adoc
index d8c788e622..1659869aa5 100644
--- a/modules/nw-dpu-configuring-operator.adoc
+++ b/modules/nw-dpu-configuring-operator.adoc
@@ -38,11 +38,9 @@ $ oc apply -f dpu-operator-host-config.yaml
 +
 [source,terminal]
 ----
-$ oc label node dpu=true
+$ oc label node <node_name> dpu=true
 ----
 +
-.Example
-[source,terminal]
-----
-$ oc label node worker-1 dpu=true
-----
\ No newline at end of file
+where:
++
+`<node_name>`:: Refers to the name of your node, such as `worker-1`.
diff --git a/modules/nw-dpu-installing-operator-cli.adoc b/modules/nw-dpu-installing-operator-cli.adoc
index 9cc9daa136..85eae63274 100644
--- a/modules/nw-dpu-installing-operator-cli.adoc
+++ b/modules/nw-dpu-installing-operator-cli.adoc
@@ -71,20 +71,13 @@ EOF
 
 .Verification
 
-. Check that the Operator is installed by entering the following command:
+. To verify that the Operator is installed, enter the following command and then check that the output shows `Succeeded` for the Operator:
 +
 [source,terminal]
 ----
 $ oc get csv -n openshift-dpu-operator \
 -o custom-columns=Name:.metadata.name,Phase:.status.phase
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                            Phase
-dpu-operator.v.{product-version}-202503130333   Succeeded
-----
 
 . Change to the `openshift-dpu-operator` project:
 +
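The verification step above prints a Name and Phase column for every CSV. When only the phase matters, for example in a script, a `jsonpath` query is a compact alternative; a sketch, assuming a single CSV in the namespace:

[source,terminal]
----
$ oc get csv -n openshift-dpu-operator -o jsonpath='{.items[0].status.phase}'
----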
diff --git a/modules/nw-dpu-operator-uninstall.adoc b/modules/nw-dpu-operator-uninstall.adoc
index c95b2ae64c..848fc1e019 100644
--- a/modules/nw-dpu-operator-uninstall.adoc
+++ b/modules/nw-dpu-operator-uninstall.adoc
@@ -38,7 +38,7 @@ $ oc delete OperatorGroup dpu-operators -n openshift-dpu-operator
 
 . Uninstall the DPU Operator as follows:
 
-.. Check the installed operators by running the following command:
+.. Check the installed Operators by running the following command:
 +
 [source,terminal]
 ----
@@ -69,16 +69,10 @@ $ oc delete namespace openshift-dpu-operator
 
 .Verification
 
-. Verify that the DPU Operator is uninstalled by running the following command:
+. Verify that the DPU Operator is uninstalled by running the following command. An example of successful command output is `No resources found in openshift-dpu-operator namespace`.
 +
 [source,terminal]
 ----
 $ oc get csv -n openshift-dpu-operator
 ----
-+
-.Example output
-+
-[source,terminal]
-----
-No resources found in openshift-dpu-operator namespace.
-----
+
diff --git a/modules/nw-external-dns-operator.adoc b/modules/nw-external-dns-operator.adoc
index bd26916888..a3eed665dd 100644
--- a/modules/nw-external-dns-operator.adoc
+++ b/modules/nw-external-dns-operator.adoc
@@ -15,18 +15,12 @@ The External DNS Operator implements the External DNS API from the `olm.openshif
 
 You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a `Subscription` object.
 
-. Check the name of an install plan by running the following command:
+. Check the name of an install plan, such as `install-zcvlr`, by running the following command:
 +
 [source,terminal]
 ----
 $ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'
 ----
-+
-.Example output
-[source,terminal]
-----
-install-zcvlr
-----
 
 . Check if the status of an install plan is `Complete` by running the following command:
 +
@@ -34,12 +28,6 @@ install-zcvlr
 ----
 $ oc -n external-dns-operator get ip -o yaml | yq '.status.phase'
 ----
-+
-.Example output
-[source,terminal]
-----
-Complete
-----
 
 . View the status of the `external-dns-operator` deployment by running the following command:
 +
diff --git a/modules/nw-ingress-configuring-application-domain.adoc b/modules/nw-ingress-configuring-application-domain.adoc
index 61adbf5438..39284ff323 100644
--- a/modules/nw-ingress-configuring-application-domain.adoc
+++ b/modules/nw-ingress-configuring-application-domain.adoc
@@ -60,19 +60,13 @@ spec:
 Wait for the `openshift-apiserver` finish rolling updates before exposing the route.
 ====
 +
-.. Expose the route:
+.. Expose the route by entering the following command. The command outputs `route.route.openshift.io/hello-openshift exposed` to indicate that the route is exposed.
 +
 [source,terminal]
 ----
 $ oc expose service hello-openshift
 ----
 +
-.Example output
-[source,terminal]
-----
-route.route.openshift.io/hello-openshift exposed
-----
-+
 .. Get a list of routes by running the following command:
 +
 [source,terminal]
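After the `oc expose` step above, the host name that was assigned to the route can be read back directly instead of scanning the full route list; a sketch, reusing the `hello-openshift` route name from the example:

[source,terminal]
----
$ oc get route hello-openshift -o jsonpath='{.spec.host}'
----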
diff --git a/modules/nw-ingress-setting-a-custom-default-certificate.adoc b/modules/nw-ingress-setting-a-custom-default-certificate.adoc
index 960976ced9..55992a64a3 100644
--- a/modules/nw-ingress-setting-a-custom-default-certificate.adoc
+++ b/modules/nw-ingress-setting-a-custom-default-certificate.adoc
@@ -22,19 +22,12 @@ certificate authority that you configured in a custom PKI.
 ** The certificate uses the `subjectAltName` extension to specify a wildcard domain, such as `*.apps.ocp4.example.com`.
 
-* You must have an `IngressController` CR. You may use the default one:
+* You must have an `IngressController` CR. The `default` `IngressController` CR is sufficient. You can check that you have an `IngressController` CR by running the following command:
 +
 [source,terminal]
 ----
 $ oc --namespace openshift-ingress-operator get ingresscontrollers
 ----
-+
-.Example output
-[source,terminal]
-----
-NAME      AGE
-default   10m
-----
 
 [NOTE]
 ====
diff --git a/modules/nw-installing-external-dns-operator-cli.adoc b/modules/nw-installing-external-dns-operator-cli.adoc
index 295c270c05..9ec86a2830 100644
--- a/modules/nw-installing-external-dns-operator-cli.adoc
+++ b/modules/nw-installing-external-dns-operator-cli.adoc
@@ -130,25 +130,9 @@ external-dns-operator-5584585fd7-5lwqm     2/2     Running   0     11m
 $ oc -n external-dns-operator get subscription
 ----
-+
-.Example output
-[source,terminal]
-----
-NAME                    PACKAGE                 SOURCE             CHANNEL
-external-dns-operator   external-dns-operator   redhat-operators   stable-v1
-----
-
 . Check the `external-dns-operator` version by running the following command:
 +
 [source,terminal]
 ----
 $ oc -n external-dns-operator get csv
 ----
-
-+
-.Example output
-[source,terminal]
-----
-NAME                             DISPLAY                VERSION   REPLACES   PHASE
-external-dns-operator.v<1.y.z>   ExternalDNS Operator   <1.y.z>              Succeeded
-----
diff --git a/modules/nw-metalLB-basic-upgrade-operator.adoc b/modules/nw-metalLB-basic-upgrade-operator.adoc
index be621abbe9..e527f3e71f 100644
--- a/modules/nw-metalLB-basic-upgrade-operator.adoc
+++ b/modules/nw-metalLB-basic-upgrade-operator.adoc
@@ -48,13 +48,6 @@ spec:
 ----
 $ oc -n metallb-system get csv
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-NAME                                    DISPLAY            VERSION               REPLACES   PHASE
-metallb-operator.v{product-version}.0   MetalLB Operator   {product-version}.0              Succeeded
-----
 
 . Check the install plan that exists in the namespace by entering the following command.
 +
@@ -87,16 +80,9 @@ After you edit the install plan, the upgrade operation starts. If you enter the
 
 .Verification
 
-. Verify the upgrade was successful by entering the following command:
+* To verify that the Operator is upgraded, enter the following command and then check that the output shows `Succeeded` for the Operator:
 +
 [source,terminal]
 ----
 $ oc -n metallb-system get csv
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-NAME                                 DISPLAY            VERSION                            REPLACE                                              PHASE
-metallb-operator.v.0-202503102139   MetalLB Operator   {product-version}.0-202503102139   metallb-operator.v{product-version}.0-202502261233   Succeeded
-----
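The upgrade module above says that the upgrade starts after you edit the install plan. For a subscription with manual approval, that edit is typically a patch that sets `spec.approved` to `true`; a sketch, not taken from the module, with `<install_plan_name>` as a placeholder:

[source,terminal]
----
$ oc -n metallb-system patch installplan <install_plan_name> --type merge --patch '{"spec":{"approved":true}}'
----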
diff --git a/modules/nw-metallb-installing-operator-cli.adoc b/modules/nw-metallb-installing-operator-cli.adoc
index c4e5e1a4dc..69eebc42a9 100644
--- a/modules/nw-metallb-installing-operator-cli.adoc
+++ b/modules/nw-metallb-installing-operator-cli.adoc
@@ -112,17 +112,10 @@ install-wzg94   metallb-operator.{product-version}.0-nnnnnnnnnnnn   Automatic
 Installation of the Operator might take a few seconds.
 ====
 
-. To verify that the Operator is installed, enter the following command:
+. To verify that the Operator is installed, enter the following command and then check that the output shows `Succeeded` for the Operator:
 +
 [source,terminal]
 ----
 $ oc get clusterserviceversion -n metallb-system \
 -o custom-columns=Name:.metadata.name,Phase:.status.phase
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                                Phase
-metallb-operator.{product-version}.0-nnnnnnnnnnnn   Succeeded
-----
\ No newline at end of file
diff --git a/modules/nw-scaling-ingress-controller.adoc b/modules/nw-scaling-ingress-controller.adoc
index 87f539683c..d3127cb6b2 100644
--- a/modules/nw-scaling-ingress-controller.adoc
+++ b/modules/nw-scaling-ingress-controller.adoc
@@ -23,42 +23,21 @@ Scaling is not an immediate action, as it takes time to create the desired numbe
 ----
 $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
 ----
-+
-.Example output
-[source,terminal]
-----
-2
-----
 
-. Scale the default `IngressController` to the desired number of replicas using
-the `oc patch` command. The following example scales the default `IngressController`
-to 3 replicas:
+. Scale the default `IngressController` to the desired number of replicas by using the `oc patch` command. The following example scales the default `IngressController` to 3 replicas.
 +
 [source,terminal]
 ----
 $ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
 ----
-+
-.Example output
-[source,terminal]
-----
-ingresscontroller.operator.openshift.io/default patched
-----
 
-. Verify that the default `IngressController` scaled to the number of replicas
-that you specified:
+. Verify that the default `IngressController` scaled to the number of replicas that you specified:
 +
 [source,terminal]
 ----
 $ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
 ----
 +
-.Example output
-[source,terminal]
-----
-3
-----
-+
 [TIP]
 ====
 You can alternatively apply the following YAML to scale an Ingress Controller to three replicas:
diff --git a/modules/nw-sriov-installing-operator.adoc b/modules/nw-sriov-installing-operator.adoc
index bcf09e9511..6aa6e07757 100644
--- a/modules/nw-sriov-installing-operator.adoc
+++ b/modules/nw-sriov-installing-operator.adoc
@@ -89,20 +89,13 @@ EOF
 
 .Verification
 
-* Check that the Operator is installed by entering the following command:
+* To verify that the Operator is installed, enter the following command and then check that the output shows `Succeeded` for the Operator:
 +
 [source,terminal]
 ----
 $ oc get csv -n openshift-sriov-network-operator \
 -o custom-columns=Name:.metadata.name,Phase:.status.phase
 ----
-+
-.Example output
-[source,terminal,subs="attributes+"]
-----
-Name                                                       Phase
-sriov-network-operator.{product-version}.0-202406131906   Succeeded
-----
 
 [id="install-operator-web-console_{context}"]
 == Web console: Installing the SR-IOV Network Operator
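As a complement to the SR-IOV CLI verification hunk above, checking that the Operator pods are running gives a second signal; a sketch, assuming the default `openshift-sriov-network-operator` namespace used throughout the module:

[source,terminal]
----
$ oc get pods -n openshift-sriov-network-operator
----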