mirror of https://github.com/openshift/openshift-docs.git

OSDOCS-15470: Removed low-level example output blocks from Network Operator docs

This commit is contained in:
dfitzmau
2025-07-22 16:15:32 +01:00
parent ef57ff4c42
commit 4f1681f511
23 changed files with 34 additions and 217 deletions

View File

@@ -33,15 +33,10 @@ $ oc -n external-dns-operator patch subscription external-dns-operator --type='j
.Verification
* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the `external-dns-operator` deployment by running the following command:
* After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the `external-dns-operator` deployment by running the following command, which outputs `trusted-ca`:
+
[source,terminal]
----
$ oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME
----
+
.Example output
[source,terminal]
----
trusted-ca
----

View File

@@ -72,13 +72,6 @@ EOF
$ oc get clusterserviceversion -n openshift-nmstate \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
----
+
.Example output
[source,terminal,subs="attributes+"]
----
Name Phase
kubernetes-nmstate-operator.{product-version}.0-202210210157 Succeeded
----
. Create an instance of the `nmstate` Operator:
+
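A minimal sketch of such an instance, assuming the singleton `NMState` custom resource named `nmstate`:
+
[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NMState
metadata:
  name: nmstate
----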
@@ -116,19 +109,10 @@ $ oc apply -f <filename>.yaml
.Verification
. Verify that all pods for the NMState Operator are in a `Running` state:
* Verify that all pods for the NMState Operator have the `Running` status by entering the following command:
+
[source,terminal]
----
$ oc get pod -n openshift-nmstate
----
+
.Example output
[source,terminal,subs="attributes+"]
----
Name Ready Status Restarts Age
pod/nmstate-handler-wn55p 1/1 Running 0 77s
pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s
...
----

View File

@@ -7,7 +7,7 @@
[id="installing-the-kubernetes-nmstate-operator-web-console_{context}"]
= Installing the Kubernetes NMState Operator by using the web console
You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
You can install the Kubernetes NMState Operator by using the web console. After you install the Kubernetes NMState Operator, the Operator deploys the NMState State Controller as a daemon set across all of the cluster nodes.
.Prerequisites
@@ -38,6 +38,3 @@ The name restriction is a known issue. The instance is a singleton for the entir
. Accept the default settings and click *Create* to create the instance.
.Summary
After you install the Kubernetes NMState Operator, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes.
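As a rough check of that daemon set, assuming it runs in the `openshift-nmstate` namespace as the pod names elsewhere in these docs suggest, you can list the daemon sets:
[source,terminal]
----
$ oc get daemonset -n openshift-nmstate
----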

View File

@@ -193,18 +193,12 @@ $ oc apply -f ingress-autoscaler.yaml
.Verification
* Verify that the default Ingress Controller is scaled out to match the value returned by the `kube-state-metrics` query by running the following commands:
** Use the `grep` command to search the Ingress Controller YAML file for replicas:
** Use the `grep` command to search the Ingress Controller YAML file for the number of replicas:
+
[source,terminal]
----
$ oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:
----
+
.Example output
[source,terminal]
----
replicas: 3
----
** Get the pods in the `openshift-ingress` project:
+
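A sketch of that step, assuming the standard `oc get pods` form:
+
[source,terminal]
----
$ oc get pods -n openshift-ingress
----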

View File

@@ -15,18 +15,12 @@ The AWS Load Balancer Operator supports the Kubernetes service resource of type
.Procedure
. You can deploy the AWS Load Balancer Operator on demand from OperatorHub, by creating a `Subscription` object by running the following command:
. To deploy the AWS Load Balancer Operator on demand from OperatorHub, create a `Subscription` object by running the following command:
+
[source,terminal]
----
$ oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{"\n"}}'
----
+
.Example output
[source,terminal]
----
install-zlfbt
----
. Check if the status of an install plan is `Complete` by running the following command:
+
@@ -34,12 +28,6 @@ install-zlfbt
----
$ oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{"\n"}}'
----
+
.Example output
[source,terminal]
----
Complete
----
. View the status of the `aws-load-balancer-operator-controller-manager` deployment by running the following command:
+
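A hedged sketch of the status check, assuming a plain `oc get deployment` call against the controller manager:
+
[source,terminal]
----
$ oc -n aws-load-balancer-operator get deployment aws-load-balancer-operator-controller-manager
----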

View File

@@ -89,19 +89,5 @@ Replace `<pod_name>` with the name of an XDP program pod, such as `go-xdp-counte
----
2024/08/13 15:20:06 15016 packets received
2024/08/13 15:20:06 93581579 bytes received
2024/08/13 15:20:09 19284 packets received
2024/08/13 15:20:09 99638680 bytes received
2024/08/13 15:20:12 23522 packets received
2024/08/13 15:20:12 105666062 bytes received
2024/08/13 15:20:15 27276 packets received
2024/08/13 15:20:15 112028608 bytes received
2024/08/13 15:20:18 29470 packets received
2024/08/13 15:20:18 112732299 bytes received
2024/08/13 15:20:21 32588 packets received
2024/08/13 15:20:21 113813781 bytes received
...
----
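The counters above are typically read from the pod logs; a sketch, assuming a plain `oc logs` call with both placeholders replaced by your own values:
+
[source,terminal]
----
$ oc logs <pod_name> -n <namespace>
----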

View File

@@ -10,24 +10,22 @@ You can create DNS records on a public hosted zone for AWS by using the Red Hat
.Procedure
. Check the user. The user must have access to the `kube-system` namespace. If you dont have the credentials, as you can fetch the credentials from the `kube-system` namespace to use the cloud provider client:
. Check the user profile, such as `system:admin`, by running the following command. The user profile must have access to the `kube-system` namespace. If you do not have the credentials, you can fetch the credentials from the `kube-system` namespace to use the cloud provider client by running the following command:
+
[source,terminal]
----
$ oc whoami
----
+
.Example output
[source,terminal]
----
system:admin
----
. Fetch the values from the `aws-creds` secret present in the `kube-system` namespace.
+
[source,terminal]
----
$ export AWS_ACCESS_KEY_ID=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d)
----
+
[source,terminal]
----
$ export AWS_SECRET_ACCESS_KEY=$(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)
----
@@ -45,7 +43,7 @@ openshift-console console console-openshift-console.apps.te
openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None
----
. Get the list of dns zones to find the one which corresponds to the previously found route's domain:
. Get the list of DNS zones and find the DNS zone that corresponds to the domain of the route that you previously queried:
+
[source,terminal]
----

View File

@@ -57,18 +57,12 @@ openshift-console console console-openshift-console.apps.te
openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None
----
. Get a list of managed zones by running the following command:
. Get a list of managed zones, such as `qe-cvs4g-private-zone test.gcp.example.com`, by running the following command:
+
[source,terminal]
----
$ gcloud dns managed-zones list | grep test.gcp.example.com
----
+
.Example output
[source,terminal]
----
qe-cvs4g-private-zone test.gcp.example.com
----
. Create a YAML file, for example, `external-dns-sample-gcp.yaml`, that defines the `ExternalDNS` object:
+
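A rough sketch of such an object, assuming the `externaldns.olm.openshift.io/v1beta1` API, a `GCP` provider, and the `qe-cvs4g-private-zone` zone found in the previous step; the field layout is illustrative, not the exact example from the source:
+
[source,yaml]
----
apiVersion: externaldns.olm.openshift.io/v1beta1
kind: ExternalDNS
metadata:
  name: sample-gcp
spec:
  provider:
    type: GCP
  source:
    type: OpenShiftRoute
  zones:
  - qe-cvs4g-private-zone
----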

View File

@@ -45,7 +45,7 @@ spec:
+
[source,terminal]
----
oc get configmap/dns-default -n openshift-dns -o yaml
$ oc get configmap/dns-default -n openshift-dns -o yaml
----
. Verify that you see entries that look like the following example:

View File

@@ -81,7 +81,7 @@ spec:
clusterDomain: cluster.local
clusterIP: x.y.z.10
conditions:
...
...
----
<1> Must comply with the `rfc6335` service name syntax.
<2> Must conform to the definition of a subdomain in the `rfc1123` service name syntax. The cluster domain, `cluster.local`, is an invalid subdomain for the `zones` field.

View File

@@ -23,19 +23,13 @@ The following are use cases for changing the DNS Operator `managementState`:
$ oc patch dns.operator.openshift.io default --type merge --patch '{"spec":{"managementState":"Unmanaged"}}'
----
. Review `managementState` of the DNS Operator using the `jsonpath` command-line JSON parser:
. Review `managementState` of the DNS Operator by using the `jsonpath` command-line JSON parser:
+
[source,terminal]
----
$ oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}'
----
+
.Example output
[source,terminal]
----
"Unmanaged"
----
[NOTE]
====
You cannot upgrade while the `managementState` is set to `Unmanaged`.

View File

@@ -37,17 +37,9 @@ qualified pod and service domain names.
<2> The Cluster IP is the address pods query for name resolution. The IP is
defined as the 10th address in the service CIDR range.
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
. To find the service CIDR range of your cluster, use the `oc get` command:
. To find the service CIDR range of your cluster, such as `172.30.0.0/16`, use the `oc get` command:
+
[source,terminal]
----
$ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}'
----
+
.Example output
[source,terminal]
----
[172.30.0.0/16]
----
endif::[]

View File

@@ -38,11 +38,9 @@ $ oc apply -f dpu-operator-host-config.yaml
+
[source,terminal]
----
$ oc label node <node-name> dpu=true
$ oc label node <node_name> dpu=true
----
+
.Example
[source,terminal]
----
$ oc label node worker-1 dpu=true
----
where:
+
`node_name`:: Refers to the name of your node, such as `worker-1`.

View File

@@ -71,20 +71,13 @@ EOF
.Verification
. Check that the Operator is installed by entering the following command:
. To verify that the Operator is installed, enter the following command and then check that the output shows `Succeeded` for the Operator:
+
[source,terminal]
----
$ oc get csv -n openshift-dpu-operator \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
----
+
.Example output
[source,terminal,subs="attributes+"]
----
Name Phase
dpu-operator.v.{product-version}-202503130333 Succeeded
----
. Change to the `openshift-dpu-operator` project:
+
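A sketch of the project switch, assuming the standard `oc project` command:
+
[source,terminal]
----
$ oc project openshift-dpu-operator
----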

View File

@@ -38,7 +38,7 @@ $ oc delete OperatorGroup dpu-operators -n openshift-dpu-operator
. Uninstall the DPU Operator as follows:
.. Check the installed operators by running the following command:
.. Check the installed Operators by running the following command:
+
[source,terminal]
----
@@ -69,16 +69,10 @@ $ oc delete namespace openshift-dpu-operator
.Verification
. Verify that the DPU Operator is uninstalled by running the following command:
. Verify that the DPU Operator is uninstalled by running the following command. An example of successful command output is `No resources found in openshift-dpu-operator namespace`.
+
[source,terminal]
----
$ oc get csv -n openshift-dpu-operator
----
+
.Example output
+
[source,terminal]
----
No resources found in openshift-dpu-operator namespace.
----

View File

@@ -15,18 +15,12 @@ The External DNS Operator implements the External DNS API from the `olm.openshif
You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a `Subscription` object.
. Check the name of an install plan by running the following command:
. Check the name of an install plan, such as `install-zcvlr`, by running the following command:
+
[source,terminal]
----
$ oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'
----
+
.Example output
[source,terminal]
----
install-zcvlr
----
. Check if the status of an install plan is `Complete` by running the following command:
+
@@ -34,12 +28,6 @@ install-zcvlr
----
$ oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'
----
+
.Example output
[source,terminal]
----
Complete
----
. View the status of the `external-dns-operator` deployment by running the following command:
+
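A hedged sketch of the status check, assuming the deployment keeps the `external-dns-operator` name used elsewhere in these docs:
+
[source,terminal]
----
$ oc get -n external-dns-operator deployment/external-dns-operator
----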

View File

@@ -60,19 +60,13 @@ spec:
Wait for the `openshift-apiserver` to finish rolling updates before exposing the route.
====
+
.. Expose the route:
.. Expose the route by entering the following command. The command outputs `route.route.openshift.io/hello-openshift exposed` to confirm that the route is exposed.
+
[source,terminal]
----
$ oc expose service hello-openshift
----
+
.Example output
[source,terminal]
----
route.route.openshift.io/hello-openshift exposed
----
+
.. Get a list of routes by running the following command:
+
[source,terminal]

View File

@@ -22,19 +22,12 @@ certificate authority that you configured in a custom PKI.
** The certificate uses the `subjectAltName` extension to specify a wildcard domain, such as `*.apps.ocp4.example.com`.
* You must have an `IngressController` CR. You may use the default one:
* You must have an `IngressController` CR; the `default` `IngressController` CR is sufficient. You can run the following command to check that you have an `IngressController` CR:
+
[source,terminal]
----
$ oc --namespace openshift-ingress-operator get ingresscontrollers
----
+
.Example output
[source,terminal]
----
NAME AGE
default 10m
----
[NOTE]
====

View File

@@ -130,25 +130,9 @@ external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m
$ oc -n external-dns-operator get subscription
----
+
.Example output
[source,terminal]
----
NAME PACKAGE SOURCE CHANNEL
external-dns-operator external-dns-operator redhat-operators stable-v1
----
. Check the `external-dns-operator` version by running the following command:
+
[source,terminal]
----
$ oc -n external-dns-operator get csv
----
+
.Example output
[source,terminal]
----
NAME DISPLAY VERSION REPLACES PHASE
external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded
----

View File

@@ -48,13 +48,6 @@ spec:
----
$ oc -n metallb-system get csv
----
+
.Example output
[source,terminal,subs="attributes+"]
----
NAME DISPLAY VERSION REPLACES PHASE
metallb-operator.v{product-version}.0 MetalLB Operator {product-version}.0 Succeeded
----
. Check the install plan that exists in the namespace by entering the following command.
+
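A sketch of that check, assuming the standard `InstallPlan` resource query:
+
[source,terminal]
----
$ oc get installplan -n metallb-system
----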
@@ -87,16 +80,9 @@ After you edit the install plan, the upgrade operation starts. If you enter the
.Verification
. Verify the upgrade was successful by entering the following command:
* To verify that the Operator is upgraded, enter the following command and then check that the output shows `Succeeded` for the Operator:
+
[source,terminal]
----
$ oc -n metallb-system get csv
----
+
.Example output
[source,terminal,subs="attributes+"]
----
NAME DISPLAY VERSION REPLACE PHASE
metallb-operator.v<latest>.0-202503102139 MetalLB Operator {product-version}.0-202503102139 metallb-operator.v{product-version}.0-202502261233 Succeeded
----

View File

@@ -112,17 +112,10 @@ install-wzg94 metallb-operator.{product-version}.0-nnnnnnnnnnnn Automatic
Installation of the Operator might take a few seconds.
====
. To verify that the Operator is installed, enter the following command:
. To verify that the Operator is installed, enter the following command and then check that the output shows `Succeeded` for the Operator:
+
[source,terminal]
----
$ oc get clusterserviceversion -n metallb-system \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
----
+
.Example output
[source,terminal,subs="attributes+"]
----
Name Phase
metallb-operator.{product-version}.0-nnnnnnnnnnnn Succeeded
----

View File

@@ -23,42 +23,21 @@ Scaling is not an immediate action, as it takes time to create the desired numbe
----
$ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
----
+
.Example output
[source,terminal]
----
2
----
. Scale the default `IngressController` to the desired number of replicas using
the `oc patch` command. The following example scales the default `IngressController`
to 3 replicas:
. Scale the default `IngressController` to the desired number of replicas by using the `oc patch` command. The following example scales the default `IngressController` to 3 replicas.
+
[source,terminal]
----
$ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
----
+
.Example output
[source,terminal]
----
ingresscontroller.operator.openshift.io/default patched
----
. Verify that the default `IngressController` scaled to the number of replicas
that you specified:
. Verify that the default `IngressController` scaled to the number of replicas that you specified:
+
[source,terminal]
----
$ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
----
+
.Example output
[source,terminal]
----
3
----
+
[TIP]
====
You can alternatively apply the following YAML to scale an Ingress Controller to three replicas:
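A minimal sketch of that YAML, assuming the default `IngressController` in the `openshift-ingress-operator` namespace:
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3
----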

View File

@@ -89,20 +89,13 @@ EOF
.Verification
* Check that the Operator is installed by entering the following command:
* To verify that the Operator is installed, enter the following command and then check that the output shows `Succeeded` for the Operator:
+
[source,terminal]
----
$ oc get csv -n openshift-sriov-network-operator \
-o custom-columns=Name:.metadata.name,Phase:.status.phase
----
+
.Example output
[source,terminal,subs="attributes+"]
----
Name Phase
sriov-network-operator.{product-version}.0-202406131906 Succeeded
----
[id="install-operator-web-console_{context}"]
== Web console: Installing the SR-IOV Network Operator