diff --git a/_unused_topics/microshift-man-config-ovs-bridge.adoc b/_unused_topics/microshift-man-config-ovs-bridge.adoc index bf6226f997..5cd9ef8453 100644 --- a/_unused_topics/microshift-man-config-ovs-bridge.adoc +++ b/_unused_topics/microshift-man-config-ovs-bridge.adoc @@ -7,31 +7,31 @@ //* Initiate OVS: //+ -//[source, terminal] +//[source,terminal] //---- //$ sudo systemctl enable openvswitch --now //---- //* Add the network bridge: //+ -//[source, terminal] +//[source,terminal] //---- //$ sudo ovs-vsctl add-br br-ex //---- //* Add the interface to the network bridge: //+ -//[source, terminal] +//[source,terminal] //---- //$ sudo ovs-vsctl add-port br-ex //---- //The `` is the network interface name where the node IP address is assigned. //* Get the bridge up and running: //+ -//[source, terminal] +//[source,terminal] //---- //$ sudo ip link set br-ex up //---- //* After `br-ex up` is running, assign the node IP address to `br-ex` bridge: -//[source, terminal] +//[source,terminal] //---- //$ sudo ... //---- diff --git a/_unused_topics/microshift-nodeport-unreachable-workaround.adoc b/_unused_topics/microshift-nodeport-unreachable-workaround.adoc index 4bef2a62fc..163eb10897 100644 --- a/_unused_topics/microshift-nodeport-unreachable-workaround.adoc +++ b/_unused_topics/microshift-nodeport-unreachable-workaround.adoc @@ -21,21 +21,21 @@ Run the commands listed in each step that follows to restore the `NodePort` serv . Find the name of the ovn-master pod that you want to restart by running the following command: + -[source, terminal] +[source,terminal] ---- $ pod=$(oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master | awk -F " " '{print $1}') ---- . Force a restart of the of the ovnkube-master pod by running the following command: + -[source, terminal] +[source,terminal] ---- $ oc -n openshift-ovn-kubernetes delete pod $pod ---- . Optional: To confirm that the ovnkube-master pod restarted, run the following command: + -[source, terminal] +[source,terminal] ---- $ oc get pods -n openshift-ovn-kubernetes ---- diff --git a/microshift_troubleshooting/microshift-troubleshoot-backup-restore.adoc b/microshift_troubleshooting/microshift-troubleshoot-backup-restore.adoc index c6e33541d0..eb7b0f9430 100644 --- a/microshift_troubleshooting/microshift-troubleshoot-backup-restore.adoc +++ b/microshift_troubleshooting/microshift-troubleshoot-backup-restore.adoc @@ -24,7 +24,7 @@ Data backups are automatic on `rpm-ostree` systems. If you are not using an `rpm * Logs print to the console during manual backups. * Logs are automatically generated for `rpm-ostree` system automated backups as part of the {product-title} journal logs. 
You can check the logs by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo journalctl -u microshift ---- diff --git a/modules/architecture-rhcos-updating-bootloader.adoc b/modules/architecture-rhcos-updating-bootloader.adoc index 694af67ecf..3aead5e861 100644 --- a/modules/architecture-rhcos-updating-bootloader.adoc +++ b/modules/architecture-rhcos-updating-bootloader.adoc @@ -39,7 +39,7 @@ Component EFI ifndef::openshift-origin[] + .Example output for `aarch64` -[source, terminal] +[source,terminal] ---- Component EFI Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64 diff --git a/modules/cleaning-crio-storage.adoc b/modules/cleaning-crio-storage.adoc index ce9827b9ad..ef90e22787 100644 --- a/modules/cleaning-crio-storage.adoc +++ b/modules/cleaning-crio-storage.adoc @@ -6,14 +6,14 @@ You can manually clear the CRI-O ephemeral storage if you experience the following issues: * A node cannot run on any pods and this error appears: -[source, terminal] +[source,terminal] + ---- Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory ---- + * You cannot create a new container on a working node and the “can’t stat lower layer” error appears: -[source, terminal] +[source,terminal] + ---- can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks. @@ -35,14 +35,14 @@ Follow this process to completely wipe the CRI-O storage and resolve the errors. .Procedure . Use `cordon` on the node. This is to avoid any workload getting scheduled if the node gets into the `Ready` status. You will know that scheduling is disabled when `SchedulingDisabled` is in your Status section: -[source, terminal] +[source,terminal] + ---- $ oc adm cordon ---- + . Drain the node as the cluster-admin user: -[source, terminal] +[source,terminal] + ---- $ oc adm drain --ignore-daemonsets --delete-emptydir-data @@ -54,7 +54,7 @@ The `terminationGracePeriodSeconds` attribute of a pod or pod template controls ==== . When the node returns, connect back to the node via SSH or Console. Then connect to the root user: -[source, terminal] +[source,terminal] + ---- $ ssh core@node1.example.com @@ -62,7 +62,7 @@ $ sudo -i ---- + . Manually stop the kubelet: -[source, terminal] +[source,terminal] + ---- # systemctl stop kubelet @@ -71,35 +71,35 @@ $ sudo -i . Stop the containers and pods: .. Use the following command to stop the pods that are not in the `HostNetwork`. They must be removed first because their removal relies on the networking plugin pods, which are in the `HostNetwork`. -[source, terminal] +[source,terminal] + ---- .. for pod in $(crictl pods -q); do if [[ "$(crictl inspectp $pod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f $pod; fi; done ---- .. Stop all other pods: -[source, terminal] +[source,terminal] + ---- # crictl rmp -fa ---- + . Manually stop the crio services: -[source, terminal] +[source,terminal] + ---- # systemctl stop crio ---- + . After you run those commands, you can completely wipe the ephemeral storage: -[source, terminal] +[source,terminal] + ---- # crio wipe -f ---- + . Start the crio and kubelet service: -[source, terminal] +[source,terminal] + ---- # systemctl start crio @@ -107,14 +107,14 @@ $ sudo -i ---- + . 
You will know if the clean up worked if the crio and kubelet services are started, and the node is in the `Ready` status: -[source, terminal] +[source,terminal] + ---- $ oc get nodes ---- + .Example output -[source, terminal] +[source,terminal] + ---- NAME STATUS ROLES AGE VERSION @@ -122,14 +122,14 @@ ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v ---- + . Mark the node schedulable. You will know that the scheduling is enabled when `SchedulingDisabled` is no longer in status: -[source, terminal] +[source,terminal] + ---- $ oc adm uncordon ---- + .Example output -[source, terminal] +[source,terminal] + ---- NAME STATUS ROLES AGE VERSION diff --git a/modules/cnf-logging-associated-with-adjusting-nic-queues.adoc b/modules/cnf-logging-associated-with-adjusting-nic-queues.adoc index 898d72bd4f..a28052fb7c 100644 --- a/modules/cnf-logging-associated-with-adjusting-nic-queues.adoc +++ b/modules/cnf-logging-associated-with-adjusting-nic-queues.adoc @@ -9,13 +9,13 @@ Log messages detailing the assigned devices are recorded in the respective Tuned * An `INFO` message is recorded detailing the successfully assigned devices: + -[source, terminal] +[source,terminal] ---- INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3 ---- * A `WARNING` message is recorded if none of the devices can be assigned: + -[source, terminal] +[source,terminal] ---- WARNING tuned.plugins.base: instance net_test: no matching devices available ---- diff --git a/modules/cnf-performing-end-to-end-tests-running-cyclictest.adoc b/modules/cnf-performing-end-to-end-tests-running-cyclictest.adoc index 608533e137..8634b30019 100644 --- a/modules/cnf-performing-end-to-end-tests-running-cyclictest.adoc +++ b/modules/cnf-performing-end-to-end-tests-running-cyclictest.adoc @@ -78,7 +78,7 @@ FAIL The same output can indicate different results for different workloads. For example, spikes up to 18μs are acceptable for 4G DU workloads, but not for 5G DU workloads. .Example of good results -[source, terminal] +[source,terminal] ---- running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram @@ -111,7 +111,7 @@ More histogram entries ... ---- .Example of bad results -[source, terminal] +[source,terminal] ---- running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m # Histogram diff --git a/modules/cnf-performing-end-to-end-tests-running-hwlatdetect.adoc b/modules/cnf-performing-end-to-end-tests-running-hwlatdetect.adoc index 110a0d1f28..2b5e65a2f1 100644 --- a/modules/cnf-performing-end-to-end-tests-running-hwlatdetect.adoc +++ b/modules/cnf-performing-end-to-end-tests-running-hwlatdetect.adoc @@ -122,7 +122,7 @@ You can capture the following types of results: * The combined set of the rough tests with the best results and configuration settings. .Example of good results -[source, terminal] +[source,terminal] ---- hwlatdetect: test duration 3600 seconds detector: tracer @@ -142,7 +142,7 @@ Samples recorded: 0 The `hwlatdetect` tool only provides output if the sample exceeds the specified threshold. 
.Example of bad results -[source, terminal] +[source,terminal] ---- hwlatdetect: test duration 3600 seconds detector: tracer diff --git a/modules/configuring-default-seccomp-profile.adoc b/modules/configuring-default-seccomp-profile.adoc index 17b3a5adfa..373376b5f4 100644 --- a/modules/configuring-default-seccomp-profile.adoc +++ b/modules/configuring-default-seccomp-profile.adoc @@ -15,21 +15,21 @@ .. Verify what pods are running in the namespace: + -[source, terminal] +[source,terminal] ---- $ oc get pods -n ---- + For example, to verify what pods are running in the `workshop` namespace run the following: + -[source, terminal] +[source,terminal] ---- $ oc get pods -n workshop ---- + .Example output + -[source, terminal] +[source,terminal] ---- NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s @@ -38,14 +38,14 @@ parksmap-1-deploy 0/1 Completed 0 2m22s + .. Inspect the pods: + -[source, terminal] +[source,terminal] ---- $ oc get pod parksmap-1-4xkwf -n workshop -o yaml ---- + .Example output + -[source, terminal] +[source,terminal] ---- apiVersion: v1 kind: Pod @@ -97,13 +97,13 @@ Conversely with a workload that requires `privilegeEscalation: true` this worklo [id="newly_installed_{context}"] == Newly installed cluster -For newly installed {product-title} 4.11 or later clusters, the `restricted-v2` replaces the `restricted` SCC as an SCC that is available to be used by any authenticated user. A workload with `privilegeEscalation: true`, is not admitted into the cluster since `restricted-v2` is the only SCC available for authenticated users by default. +For newly installed {product-title} 4.11 or later clusters, the `restricted-v2` replaces the `restricted` SCC as an SCC that is available to be used by any authenticated user. A workload with `privilegeEscalation: true`, is not admitted into the cluster since `restricted-v2` is the only SCC available for authenticated users by default. The feature `privilegeEscalation` is allowed by `restricted` but not by `restricted-v2`. More features are denied by `restricted-v2` than were allowed by `restricted` SCC. A workload with `privilegeEscalation: true` may be admitted into a newly installed {product-title} 4.11 or later cluster. To give access to the `restricted` SCC to the ServiceAccount running the workload (or any other SCC that can admit this workload) using a RoleBinding run the following command: -[source, terminal] +[source,terminal] ---- $ oc -n adm policy add-scc-to-user -z ---- diff --git a/modules/configuring-haproxy-interval.adoc b/modules/configuring-haproxy-interval.adoc index f2cc805105..6cf4ce07a1 100644 --- a/modules/configuring-haproxy-interval.adoc +++ b/modules/configuring-haproxy-interval.adoc @@ -21,7 +21,7 @@ Setting a large value for the minimum HAProxy reload interval can cause latency * Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command: + -[source, terminal] +[source,terminal] ---- $ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}' ---- diff --git a/modules/core-user-password.adoc b/modules/core-user-password.adoc index b96cff38cb..af06e3c935 100644 --- a/modules/core-user-password.adoc +++ b/modules/core-user-password.adoc @@ -6,7 +6,7 @@ [id="core-user-password_{context}"] = Changing the core user password for node access -By default, {op-system-first} creates a user named `core` on the nodes in your cluster. 
You can use the `core` user to access the node through a cloud provider serial console or a bare metal baseboard controller manager (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the `oc debug node` command. However, by default, there is no password for this user, so you cannot log in without creating one. +By default, {op-system-first} creates a user named `core` on the nodes in your cluster. You can use the `core` user to access the node through a cloud provider serial console or a bare metal baseboard controller manager (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the `oc debug node` command. However, by default, there is no password for this user, so you cannot log in without creating one. You can create a password for the `core` user by using a machine config. The Machine Config Operator (MCO) assigns the password and injects the password into the `/etc/shadow` file, allowing you to log in with the `core` user. The MCO does not examine the password hash. As such, the MCO cannot report if there is a problem with the password. @@ -17,7 +17,7 @@ You can create a password for the `core` user by using a machine config. The Mac * If you have a machine config that includes an `/etc/shadow` file or a systemd unit that sets a password, it takes precedence over the password hash. ==== -You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account. +You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account. .Prerequisites @@ -49,7 +49,7 @@ spec: . Create the machine config by running the following command: + -[source,yaml] +[source,terminal] ---- $ oc create -f .yaml ---- diff --git a/modules/create-a-containerruntimeconfig-crd.adoc b/modules/create-a-containerruntimeconfig-crd.adoc index 508e16593d..dc1d089efa 100644 --- a/modules/create-a-containerruntimeconfig-crd.adoc +++ b/modules/create-a-containerruntimeconfig-crd.adoc @@ -47,7 +47,7 @@ $ oc get ctrcfg ---- .Example output -[source, terminal] +[source,terminal] ---- NAME AGE ctr-pid 24m @@ -62,7 +62,7 @@ $ oc get mc | grep container ---- .Example output -[source, terminal] +[source,terminal] ---- ... 01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m @@ -181,12 +181,12 @@ worker rendered-worker-169 False True False 3 1 .. Open an `oc debug` session to a node in the machine config pool and run `chroot /host`. 
+ -[source, terminal] +[source,terminal] ---- $ oc debug node/ ---- + -[source, terminal] +[source,terminal] ---- sh-4.4# chroot /host ---- diff --git a/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc b/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc index 5206bed8c7..012720cd4e 100644 --- a/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc +++ b/modules/create-a-kubeletconfig-crd-to-edit-kubelet-parameters.adoc @@ -35,7 +35,7 @@ If you have a machine config with a `kubelet-9` suffix, and you create another ` $ oc get kubeletconfig ---- -[source, terminal] +[source,terminal] ---- NAME AGE set-max-pods 15m @@ -47,7 +47,7 @@ set-max-pods 15m $ oc get mc | grep kubelet ---- -[source, terminal] +[source,terminal] ---- ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m @@ -194,7 +194,7 @@ $ oc get kubeletconfig ---- + .Example output -[source, terminal] +[source,terminal] ---- NAME AGE set-max-pods 15m diff --git a/modules/etcd-tuning-parameters.adoc b/modules/etcd-tuning-parameters.adoc index dec7efbdbb..8b6cda7105 100644 --- a/modules/etcd-tuning-parameters.adoc +++ b/modules/etcd-tuning-parameters.adoc @@ -24,7 +24,7 @@ To change the hardware speed tolerance for etcd, complete the following steps. . Check to see what the current value is by entering the following command: + -[source, terminal] +[source,terminal] ---- $ oc describe etcd/cluster | grep "Control Plane Hardware Speed" ---- @@ -74,7 +74,7 @@ The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value . Verify that the value was changed by entering the following command: + -[source, terminal] +[source,terminal] ---- $ oc describe etcd/cluster | grep "Control Plane Hardware Speed" ---- diff --git a/modules/gitops-monitoring-argo-cd-health-using-prometheus-metrics.adoc b/modules/gitops-monitoring-argo-cd-health-using-prometheus-metrics.adoc index 90cd36cafc..80bd63fde4 100644 --- a/modules/gitops-monitoring-argo-cd-health-using-prometheus-metrics.adoc +++ b/modules/gitops-monitoring-argo-cd-health-using-prometheus-metrics.adoc @@ -15,7 +15,7 @@ You can monitor the health status of an Argo CD application by running Prometheu . To check the health status of your Argo CD application, enter the Prometheus Query Language (PromQL) query similar to the following example in the *Expression* field: + .Example -[source, terminal] +[source,terminal] ---- sum(argocd_app_info{dest_namespace=~"",health_status!=""}) by (health_status) <1> ---- diff --git a/modules/hosted-control-planes-troubleshooting.adoc b/modules/hosted-control-planes-troubleshooting.adoc index 2c0be7dc0b..94c69ea4e1 100644 --- a/modules/hosted-control-planes-troubleshooting.adoc +++ b/modules/hosted-control-planes-troubleshooting.adoc @@ -54,7 +54,7 @@ $ CLUSTERNS="clusters" $ mkdir clusterDump-${CLUSTERNS}-${CLUSTERNAME} ---- + -[source, terminal] +[source,terminal] ---- $ hypershift dump cluster \ --name ${CLUSTERNAME} \ @@ -71,7 +71,7 @@ $ hypershift dump cluster \ 2023-06-06T12:18:21+02:00 INFO Successfully archived dump {"duration": "1.519376292s"} ---- -* To configure the command-line interface so that it impersonates all of the queries against the management cluster by using a username or service account, enter the `hypershift dump cluster` command with the `--as` flag. 
+* To configure the command-line interface so that it impersonates all of the queries against the management cluster by using a username or service account, enter the `hypershift dump cluster` command with the `--as` flag. + The service account must have enough permissions to query all of the objects from the namespaces, so the `cluster-admin` role is recommended to make sure you have enough permissions. The service account must be located in or have permissions to query the namespace of the `HostedControlPlane` resource. + diff --git a/modules/images-cluster-sample-imagestream-import.adoc b/modules/images-cluster-sample-imagestream-import.adoc index 14277214d9..ec55f1cbd8 100644 --- a/modules/images-cluster-sample-imagestream-import.adoc +++ b/modules/images-cluster-sample-imagestream-import.adoc @@ -18,20 +18,20 @@ oc get imagestreams -nopenshift . Fetch the tags for every imagestream in the `openshift` namespace by running the following command: + -[source, terminal] +[source,terminal] ---- $ oc get is -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift ---- + For example: + -[source, terminal] +[source,terminal] ---- $ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift ---- + .Example output -[source, terminal] +[source,terminal] ---- 1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12 diff --git a/modules/images-configuration-registry-mirror.adoc b/modules/images-configuration-registry-mirror.adoc index 9f5d767a6e..54a42564cf 100644 --- a/modules/images-configuration-registry-mirror.adoc +++ b/modules/images-configuration-registry-mirror.adoc @@ -61,13 +61,13 @@ The following procedure creates a post-installation mirror configuration, where * Ensure that there are no `ImageContentSourcePolicy` objects on your cluster. For example, you can use the following command: + -[source, terminal] +[source,terminal] ---- $ oc get ImageContentSourcePolicy ---- + .Example output -[source, terminal] +[source,terminal] ---- No resources found ---- diff --git a/modules/ingress-liveness-readiness-startup-probes.adoc b/modules/ingress-liveness-readiness-startup-probes.adoc index 1459013e1e..6d7a7565c2 100644 --- a/modules/ingress-liveness-readiness-startup-probes.adoc +++ b/modules/ingress-liveness-readiness-startup-probes.adoc @@ -34,13 +34,13 @@ The timeout configuration option is an advanced tuning technique that can be use The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes: -[source, terminal] +[source,terminal] ---- $ oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}' ---- .Verification -[source, terminal] +[source,terminal] ---- $ oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 diff --git a/modules/installation-nutanix-download-rhcos.adoc b/modules/installation-nutanix-download-rhcos.adoc index 8e8088fa2a..3f0de08620 100644 --- a/modules/installation-nutanix-download-rhcos.adoc +++ b/modules/installation-nutanix-download-rhcos.adoc @@ -23,7 +23,7 @@ $ ./openshift-install coreos print-stream-json . 
Use the output of the command to find the location of the Nutanix image, and click the link to download it. + .Example output -[source, terminal] +[source,terminal] ---- "nutanix": { "release": "411.86.202210041459-0", diff --git a/modules/installation-uninstall-clouds.adoc b/modules/installation-uninstall-clouds.adoc index bc829e992c..1ae86f64c5 100644 --- a/modules/installation-uninstall-clouds.adoc +++ b/modules/installation-uninstall-clouds.adoc @@ -67,7 +67,7 @@ In which case, the PVCs are not removed when uninstalling the cluster, which mig .. Log in to the IBM Cloud using the CLI. .. To list the PVCs, run the following command: + -[source, terminal] +[source,terminal] ---- $ ibmcloud is volumes --resource-group-name ---- @@ -76,7 +76,7 @@ For more information about listing volumes, see the link:https://cloud.ibm.com/d .. To delete the PVCs, run the following command: + -[source, terminal] +[source,terminal] ---- $ ibmcloud is volume-delete --force ---- diff --git a/modules/ipi-install-bmc-addressing.adoc b/modules/ipi-install-bmc-addressing.adoc index f025e2f7b0..a7644d7052 100644 --- a/modules/ipi-install-bmc-addressing.adoc +++ b/modules/ipi-install-bmc-addressing.adoc @@ -77,25 +77,25 @@ You need to ensure that your BMC supports all of the redfish APIs before install List of redfish APIs:: * Power on + -[source, terminal] +[source,terminal] ---- curl -u $USER:$PASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"Action": "Reset", "ResetType": "On"}' https://$SERVER/redfish/v1/Systems/$SystemID/Actions/ComputerSystem.Reset ---- * Power off + -[source, terminal] +[source,terminal] ---- curl -u $USER:$PASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"Action": "Reset", "ResetType": "ForceOff"}' https://$SERVER/redfish/v1/Systems/$SystemID/Actions/ComputerSystem.Reset ---- -* Temporary boot using `pxe` +* Temporary boot using `pxe` + -[source, terminal] +[source,terminal] ---- curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} ---- * Set BIOS boot mode using `Legacy` or `UEFI` + -[source, terminal] +[source,terminal] ---- curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} ---- @@ -103,13 +103,13 @@ curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Serve List of redfish-virtualmedia APIs:: * Set temporary boot device using `cd` or `dvd` + -[source, terminal] +[source,terminal] ---- curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' ---- * Mount virtual media + -[source, terminal] +[source,terminal] ---- curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://$Server/redfish/v1/Managers/$ManagerID/VirtualMedia/$VmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' ---- diff --git a/modules/microshift-backing-up-manually.adoc b/modules/microshift-backing-up-manually.adoc index 0b6596f513..f5705675fe 100644 --- a/modules/microshift-backing-up-manually.adoc +++ b/modules/microshift-backing-up-manually.adoc @@ -20,13 +20,13 @@ On `rpm-ostree` systems, {product-title} creates an automatic backup on every st 
.Procedure . Manually create a backup by using the default name and parent directory, `/var/lib/microshift-backups/`, by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo microshift backup ---- .Example output + -[source, terminal] +[source,terminal] ---- ??? I0829 07:32:12.313961 6586 run_check.go:28] "Service state" service="microshift.service" state="inactive" ??? I0829 07:32:12.318803 6586 run_check.go:28] "Service state" service="microshift-etcd.scope" state="inactive" @@ -40,21 +40,21 @@ $ sudo microshift backup . Optional: Manually create a backup with a specific name in the default directory by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo microshift backup --name ---- . Optional: Manually create a backup in a specific parent directory by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo microshift backup --storage /var/lib/ ---- . Optional: Manually create a backup in a specific parent directory with a custom name by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo microshift backup --storage /var/lib// --name ---- diff --git a/modules/microshift-config-etcd.adoc b/modules/microshift-config-etcd.adoc index ee11287004..b8c0a3dde2 100644 --- a/modules/microshift-config-etcd.adoc +++ b/modules/microshift-config-etcd.adoc @@ -27,14 +27,14 @@ The minimum permissible value for `memoryLimitMB` on {product-title} is 128 MB. . After modifying the `memoryLimitMB` value in `/etc/microshift/config.yaml`, restart {product-title} by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo systemctl restart microshift ---- . Verify the new `memoryLimitMB` value is in use by running the following command: + -[source, terminal] +[source,terminal] ---- $ systemctl show --property=MemoryHigh microshift-etcd.scope ---- diff --git a/modules/microshift-firewall-allow-traffic.adoc b/modules/microshift-firewall-allow-traffic.adoc index 79d1723fb7..ca7d0e4ad6 100644 --- a/modules/microshift-firewall-allow-traffic.adoc +++ b/modules/microshift-firewall-allow-traffic.adoc @@ -28,7 +28,7 @@ $ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=/dev/null [Journal] diff --git a/modules/microshift-greenboot-testing-workload-script.adoc b/modules/microshift-greenboot-testing-workload-script.adoc index 96a3926e21..7377ec941c 100644 --- a/modules/microshift-greenboot-testing-workload-script.adoc +++ b/modules/microshift-greenboot-testing-workload-script.adoc @@ -17,14 +17,14 @@ . To test that greenboot is running a health check script file, reboot the host by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo reboot ---- . 
Examine the output of greenboot health checks by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo journalctl -o cat -u greenboot-healthcheck.service ---- @@ -36,7 +36,7 @@ $ sudo journalctl -o cat -u greenboot-healthcheck.service + .Example output -[source, terminal] +[source,terminal] ---- GRUB boot variables: boot_success=0 diff --git a/modules/microshift-greenboot-workloads-validation.adoc b/modules/microshift-greenboot-workloads-validation.adoc index becaf8b7da..f98bbf964f 100644 --- a/modules/microshift-greenboot-workloads-validation.adoc +++ b/modules/microshift-greenboot-workloads-validation.adoc @@ -12,13 +12,13 @@ After a successful start, greenboot sets the variable `boot_success=` to `1` in * To access the overall status of system health checks, run the following command: + -[source, terminal] +[source,terminal] ---- $ sudo grub2-editenv - list | grep ^boot_success ---- .Example output for a successful system start -[source, terminal] +[source,terminal] ---- boot_success=1 ---- \ No newline at end of file diff --git a/modules/microshift-ki-cni-iptables-deleted.adoc b/modules/microshift-ki-cni-iptables-deleted.adoc index ebb47c1a9e..637c855eba 100644 --- a/modules/microshift-ki-cni-iptables-deleted.adoc +++ b/modules/microshift-ki-cni-iptables-deleted.adoc @@ -25,14 +25,14 @@ Run the commands listed in each step that follows to restore the iptable rules. . Find the name of the ovnkube-master pod that you want to restart by running the following command: + -[source, terminal] +[source,terminal] ---- $ pod=$(oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master | awk -F " " '{print $1}') ---- . Delete the ovnkube-master pod: + -[source, terminal] +[source,terminal] ---- $ oc -n openshift-ovn-kubernetes delete pod $pod ---- @@ -41,7 +41,7 @@ This command causes the daemon set pod to be automatically restarted, causing a . Confirm that the iptables have reconciled by running the following command: + -[source, terminal] +[source,terminal] ---- $ sudo iptables-save | grep NODEPORT :OVN-KUBE-NODEPORT - [0:0] @@ -53,7 +53,7 @@ $ sudo iptables-save | grep NODEPORT . You can also confirm that a new ovnkube-master pod has been started by running the following command: + -[source, terminal] +[source,terminal] ---- $ oc get pods -n openshift-ovn-kubernetes ---- diff --git a/modules/microshift-lvmd-yaml-creating.adoc b/modules/microshift-lvmd-yaml-creating.adoc index 6a75da0259..f7c045c399 100644 --- a/modules/microshift-lvmd-yaml-creating.adoc +++ b/modules/microshift-lvmd-yaml-creating.adoc @@ -12,7 +12,7 @@ When {product-title} runs, it uses LVMS configuration from `/etc/microshift/lvmd * To create the `lvmd.yaml` configuration file, run the following command: + -[source, terminal] +[source,terminal] ---- $ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml ---- diff --git a/modules/microshift-nodeport-unreachable-workaround.adoc b/modules/microshift-nodeport-unreachable-workaround.adoc index 4bef2a62fc..163eb10897 100644 --- a/modules/microshift-nodeport-unreachable-workaround.adoc +++ b/modules/microshift-nodeport-unreachable-workaround.adoc @@ -21,21 +21,21 @@ Run the commands listed in each step that follows to restore the `NodePort` serv . Find the name of the ovn-master pod that you want to restart by running the following command: + -[source, terminal] +[source,terminal] ---- $ pod=$(oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master | awk -F " " '{print $1}') ---- . 
Force a restart of the of the ovnkube-master pod by running the following command: + -[source, terminal] +[source,terminal] ---- $ oc -n openshift-ovn-kubernetes delete pod $pod ---- . Optional: To confirm that the ovnkube-master pod restarted, run the following command: + -[source, terminal] +[source,terminal] ---- $ oc get pods -n openshift-ovn-kubernetes ---- diff --git a/modules/microshift-oc-apis-errors.adoc b/modules/microshift-oc-apis-errors.adoc index 63b4b9abfc..60caf81f9f 100644 --- a/modules/microshift-oc-apis-errors.adoc +++ b/modules/microshift-oc-apis-errors.adoc @@ -12,21 +12,21 @@ Not all OpenShift CLI (oc) tool commands are relevant for {product-title} deploy For example, when the following `new-project` command is run: -[source, terminal] +[source,terminal] ---- $ oc new-project test ---- The following error message can be generated: -[source, terminal] +[source,terminal] ---- Error from server (NotFound): the server could not find the requested resource (get projectrequests.project.openshift.io) ---- And when the `get projects` command is run, another error can be generated as follows: -[source, terminal] +[source,terminal] ---- $ oc get projects error: the server doesn't have a resource type "projects" diff --git a/modules/microshift-troubleshooting-nodeport.adoc b/modules/microshift-troubleshooting-nodeport.adoc index 9c7b955363..be727d1422 100644 --- a/modules/microshift-troubleshooting-nodeport.adoc +++ b/modules/microshift-troubleshooting-nodeport.adoc @@ -12,13 +12,13 @@ OVN-Kubernetes sets up an iptable chain in the network address translation (NAT) . View the iptable rules for the NodePort service by running the following command: + -[source, terminal] +[source,terminal] ---- $ iptables-save | grep NODEPORT ---- + .Example output -[source, terminal] +[source,terminal] ---- -A OUTPUT -j OVN-KUBE-NODEPORT -A OVN-KUBE-NODEPORT -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 30326 -j DNAT --to-destination 10.43.95.170:80 @@ -27,13 +27,13 @@ OVN-Kubernetes configures the `OVN-KUBE-NODEPORT` iptable chain in the NAT table . Route the packet through the network with routing rules by running the following command: + -[source, terminal] +[source,terminal] ---- $ ip route ---- + .Example output -[source, terminal] +[source,terminal] ---- 10.43.0.0/16 via 192.168.122.1 dev br-ex mtu 1400 ---- diff --git a/modules/move-etcd-different-disk.adoc b/modules/move-etcd-different-disk.adoc index affb0b3f58..c43fe6cb11 100644 --- a/modules/move-etcd-different-disk.adoc +++ b/modules/move-etcd-different-disk.adoc @@ -134,7 +134,7 @@ $ oc login -u ${ADMIN} -p ${ADMINPASSWORD} ${API} [... output omitted ...] ---- + -[source, terminal] +[source,terminal] ---- $ oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created ---- diff --git a/modules/multi-architecture-creating-arm64-bootimage.adoc b/modules/multi-architecture-creating-arm64-bootimage.adoc index 369af56226..8bb69274dc 100644 --- a/modules/multi-architecture-creating-arm64-bootimage.adoc +++ b/modules/multi-architecture-creating-arm64-bootimage.adoc @@ -6,29 +6,29 @@ [id="multi-architecture-creating-arm64-bootimage_{context}"] = Creating an ARM64 boot image using the Azure image gallery - -The following procedure describes how to manually generate an ARM64 boot image. - + +The following procedure describes how to manually generate an ARM64 boot image. + .Prerequisites * You installed the Azure CLI (`az`). 
-* You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. +* You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. .Procedure -. Log in to your Azure account: +. Log in to your Azure account: + [source,terminal] ---- $ az login ---- -. Create a storage account and upload the `arm64` virtual hard disk (VHD) to your storage account. The {product-title} installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group: +. Create a storage account and upload the `arm64` virtual hard disk (VHD) to your storage account. The {product-title} installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group: + [source,terminal] ---- $ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS <1> ---- + -<1> The `westus` object is an example region. +<1> The `westus` object is an example region. + . Create a storage container using the storage account you generated: + @@ -50,7 +50,7 @@ $ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/c ---- $ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd ---- -. Generate a shared access signature (SAS) token. Use this token to upload the {op-system} VHD to your storage container with the following commands: +. Generate a shared access signature (SAS) token. Use this token to upload the {op-system} VHD to your storage container with the following commands: + [source,terminal] ---- @@ -63,7 +63,7 @@ $ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${S ---- . Copy the {op-system} VHD into the storage container: + -[source, terminal] +[source,terminal] ---- $ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \ --source-uri "${RHCOS_VHD_ORIGIN_URL}" \ @@ -92,21 +92,21 @@ $ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STO } ---- + -<1> If the status parameter displays the `success` object, the copying process is complete. - +<1> If the status parameter displays the `success` object, the copying process is complete. + . Create an image gallery using the following command: + [source,terminal] ---- $ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} ---- -Use the image gallery to create an image definition. In the following example command, `rhcos-arm64` is the name of the image definition. +Use the image gallery to create an image definition. In the following example command, `rhcos-arm64` is the name of the image definition. + [source,terminal] ---- $ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2 ---- -. To get the URL of the VHD and set it to `RHCOS_VHD_URL` as the file name, run the following command: +. 
To get the URL of the VHD and set it to `RHCOS_VHD_URL` as the file name, run the following command: + [source,terminal] ---- @@ -118,7 +118,7 @@ $ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ---- $ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL} ---- -. Your `arm64` boot image is now generated. You can access the ID of your image with the following command: +. Your `arm64` boot image is now generated. You can access the ID of your image with the following command: + [source,terminal] ---- diff --git a/modules/multi-architecture-modify-machine-set.adoc b/modules/multi-architecture-modify-machine-set.adoc index 0557e97eb1..2ba6a1b88b 100644 --- a/modules/multi-architecture-modify-machine-set.adoc +++ b/modules/multi-architecture-modify-machine-set.adoc @@ -5,13 +5,13 @@ :_content-type: PROCEDURE [id="multi-architecture-modify-machine-set_{context}"] -= Adding a multi-architecture compute machine set to your cluster += Adding a multi-architecture compute machine set to your cluster -To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure". +To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure". -.Prerequisites +.Prerequisites -* You installed the OpenShift CLI (`oc`). +* You installed the OpenShift CLI (`oc`). .Procedure * Create a compute machine set and modify the `resourceID` and `vmSize` parameters with the following command. This compute machine set will control the `arm64` worker nodes in your cluster: @@ -20,7 +20,7 @@ To add ARM64 compute nodes to your cluster, you must create an Azure compute mac ---- $ oc create -f arm64-machine-set-0.yaml ---- -.Sample YAML compute machine set with `arm64` boot image +.Sample YAML compute machine set with `arm64` boot image + [source,yaml] ---- @@ -81,12 +81,12 @@ spec: vmSize: Standard_D4ps_v5 <2> vnet: -vnet zone: "" ----- +---- <1> Set the `resourceID` parameter to the `arm64` boot image. <2> Set the `vmSize` parameter to the instance type used in your installation. Some example instance types are `Standard_D4ps_v5` or `D8ps`. .Verification -. Verify that the new ARM64 machines are running by entering the following command: +. Verify that the new ARM64 machines are running by entering the following command: + [source,terminal] ---- @@ -101,7 +101,7 @@ NAME DESIRED CURRENT READY AVA ---- . You can check that the nodes are ready and scheduable with the following command: + -[source, terminal] +[source,terminal] ---- -$ oc get nodes +$ oc get nodes ---- \ No newline at end of file diff --git a/modules/network-observability-multitenancy.adoc b/modules/network-observability-multitenancy.adoc index b5ca0e1445..9bbb4c8bd7 100644 --- a/modules/network-observability-multitenancy.adoc +++ b/modules/network-observability-multitenancy.adoc @@ -8,15 +8,15 @@ Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki. Access is enabled for project admins. 
Project admins who have limited access to some namespaces can access flows for only those namespaces. .Prerequisite -* You have installed link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7] +* You have installed link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7] * The `FlowCollector` `spec.loki.authToken` configuration must be set to `FORWARD`. * You must be logged in as a project administrator .Procedure -. Authorize reading permission to `user1` by running the following command: +. Authorize reading permission to `user1` by running the following command: + -[source, terminal] +[source,terminal] ---- $ oc adm policy add-cluster-role-to-user netobserv-reader user1 ---- diff --git a/modules/node-tuning-hosted-cluster.adoc b/modules/node-tuning-hosted-cluster.adoc index d6e22bd5e8..e13068f74c 100644 --- a/modules/node-tuning-hosted-cluster.adoc +++ b/modules/node-tuning-hosted-cluster.adoc @@ -52,8 +52,8 @@ If you do not add any labels to an entry in the `spec.recommend` section of the . Create the `ConfigMap` object in the management cluster: + -[source, terminal] ----- +[source,terminal] +---- $ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml ---- @@ -100,7 +100,7 @@ default 7m36s rendered 7m36s tuned-1 65s ---- - + . List the `Profile` objects in the hosted cluster: + [source,terminal] diff --git a/modules/nodes-containers-remote-commands-protocol.adoc b/modules/nodes-containers-remote-commands-protocol.adoc index 8767ff06e1..cc253b6579 100644 --- a/modules/nodes-containers-remote-commands-protocol.adoc +++ b/modules/nodes-containers-remote-commands-protocol.adoc @@ -8,7 +8,7 @@ Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server: -[source, terminal] +[source,terminal] ---- /proxy/nodes//exec///?command= ---- @@ -23,7 +23,7 @@ In the above URL: For example: -[source, terminal] +[source,terminal] ---- /proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date ---- diff --git a/modules/nw-controlling-dns-pod-placement.adoc b/modules/nw-controlling-dns-pod-placement.adoc index 1f55458275..dd4f733a19 100644 --- a/modules/nw-controlling-dns-pod-placement.adoc +++ b/modules/nw-controlling-dns-pod-placement.adoc @@ -22,7 +22,7 @@ As a cluster administrator, you can use a custom node selector to configure the . Modify the DNS Operator object named `default`: + -[source, terminal] +[source,terminal] ---- $ oc edit dns.operator/default ---- diff --git a/modules/nw-egress-router-about.adoc b/modules/nw-egress-router-about.adoc index 64d9807ccc..6ed93f1c9a 100644 --- a/modules/nw-egress-router-about.adoc +++ b/modules/nw-egress-router-about.adoc @@ -39,7 +39,7 @@ The egress router image is not compatible with Amazon AWS, Azure Cloud, or any o In _redirect mode_, an egress router pod configures `iptables` rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. 
For example: -[source, terminal] +[source,terminal] ---- $ curl ---- diff --git a/modules/nw-egress-router-redirect-mode.adoc b/modules/nw-egress-router-redirect-mode.adoc index 242aa0f6f8..9cfed010e9 100644 --- a/modules/nw-egress-router-redirect-mode.adoc +++ b/modules/nw-egress-router-redirect-mode.adoc @@ -8,7 +8,7 @@ In _redirect mode_, an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. For example: -[source, terminal] +[source,terminal] ---- $ curl ---- diff --git a/modules/nw-osp-configuring-external-load-balancer.adoc b/modules/nw-osp-configuring-external-load-balancer.adoc index 0beb7e7dee..054b6559a1 100644 --- a/modules/nw-osp-configuring-external-load-balancer.adoc +++ b/modules/nw-osp-configuring-external-load-balancer.adoc @@ -133,7 +133,7 @@ If the configuration is correct, you receive a JSON object in response: You can also verify application accessibility by opening the {product-title} console in a web browser. ==== + -[source, terminal] +[source,terminal] ---- $ curl http://console-openshift-console.apps.. -I -L --insecure ---- diff --git a/modules/oadp-ceph-cephfs-back-up.adoc b/modules/oadp-ceph-cephfs-back-up.adoc index 5cb6eac727..7a8e51db93 100644 --- a/modules/oadp-ceph-cephfs-back-up.adoc +++ b/modules/oadp-ceph-cephfs-back-up.adoc @@ -44,7 +44,7 @@ spec: . Monitor the progress of the `VolumeSnapshotBackup` CRs by completing the following steps: .. To check the progress of all the `VolumeSnapshotBackup` CRs, run the following command: + -[source, terminal] +[source,terminal] ---- $ oc get vsb -n ---- diff --git a/modules/oadp-ceph-cephfs-restore.adoc b/modules/oadp-ceph-cephfs-restore.adoc index f87a3e8755..81343ec2ae 100644 --- a/modules/oadp-ceph-cephfs-restore.adoc +++ b/modules/oadp-ceph-cephfs-restore.adoc @@ -56,7 +56,7 @@ spec: . Monitor the progress of the `VolumeSnapshotRestore` CRs by doing the following: .. To check the progress of all the `VolumeSnapshotRestore` CRs, run the following command: + -[source, terminal] +[source,terminal] ---- $ oc get vsr -n ---- diff --git a/modules/op-running-pipeline-and-task-run-pods-with-privileged-security-context.adoc b/modules/op-running-pipeline-and-task-run-pods-with-privileged-security-context.adoc index 8b4562a67d..db6a443921 100644 --- a/modules/op-running-pipeline-and-task-run-pods-with-privileged-security-context.adoc +++ b/modules/op-running-pipeline-and-task-run-pods-with-privileged-security-context.adoc @@ -11,7 +11,7 @@ To run a pod (resulting from pipeline run or task run) with the `privileged` sec * Configure the associated user account or service account to have an explicit SCC. You can perform the configuration using any of the following methods: ** Run the following command: + -[source, terminal] +[source,terminal] ---- $ oc adm policy add-scc-to-user -z ---- diff --git a/modules/ossm-cert-manager-installation.adoc b/modules/ossm-cert-manager-installation.adoc index 931c351104..fab5b90b47 100644 --- a/modules/ossm-cert-manager-installation.adoc +++ b/modules/ossm-cert-manager-installation.adoc @@ -12,12 +12,12 @@ You can install the `cert-manager` tool to manage the lifecycle of TLS certifica . 
Create the root cluster issuer: + -[source, terminal] +[source,terminal] ---- $ oc apply -f cluster-issuer.yaml ---- + -[source, terminal] +[source,terminal] ---- $ oc apply -n istio-system -f istio-ca.yaml ---- @@ -98,7 +98,7 @@ The namespace of the `selfsigned-root-issuer` issuer and `root-ca` certificate i . Install `istio-csr`: + -[source, terminal] +[source,terminal] ---- $ helm install istio-csr jetstack/cert-manager-istio-csr \ -n istio-system \ @@ -142,7 +142,7 @@ app: . Deploy SMCP: + -[source, terminal] +[source,terminal] ---- $ oc apply -f mesh.yaml -n istio-system ---- @@ -199,24 +199,24 @@ Use the sample `httpbin` service and `sleep` app to check mTLS traffic from ingr . Deploy the HTTP and `sleep` apps: + -[source, terminal] +[source,terminal] ---- $ oc new-project ---- + -[source, terminal] +[source,terminal] ---- $ oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml ---- + -[source, terminal] +[source,terminal] ---- $ oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml ---- . Verify that `sleep` can access the `httpbin` service: + -[source, terminal] +[source,terminal] ---- $ oc exec "$(oc get pod -l app=sleep -n \ -o jsonpath={.items..metadata.name})" -c sleep -n -- \ @@ -225,28 +225,28 @@ $ oc exec "$(oc get pod -l app=sleep -n \ ---- + .Example output: -[source, terminal] +[source,terminal] ---- 200 ---- . Check mTLS traffic from the ingress gateway to the `httpbin` service: + -[source, terminal] +[source,terminal] ---- $ oc apply -n -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml ---- . Get the `istio-ingressgateway` route: + -[source, terminal] +[source,terminal] ---- INGRESS_HOST=$(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}') ---- . Verify mTLS traffic from the ingress gateway to the `httpbin` service: + -[source, terminal] +[source,terminal] ---- $ curl -s -I http://$INGRESS_HOST/headers -o /dev/null -w "%{http_code}" -s ---- diff --git a/modules/persistent-storage-csi-azure-disk-sc-zrs.adoc b/modules/persistent-storage-csi-azure-disk-sc-zrs.adoc index e59f5df32e..b03ba08f5c 100644 --- a/modules/persistent-storage-csi-azure-disk-sc-zrs.adoc +++ b/modules/persistent-storage-csi-azure-disk-sc-zrs.adoc @@ -17,7 +17,7 @@ Both ZRS and PremiumV2_LRS have some region limitations. For information about t .Prerequisites -* Access to an {product-title} cluster with administrator rights +* Access to an {product-title} cluster with administrator rights .Procedure @@ -25,7 +25,7 @@ Use the following steps to create a storage class with a storage account type. . Create a storage class designating the storage account type using a YAML file similar to the following: + -[source, terminal] +[source,terminal] -- $ oc create -f - << EOF apiVersion: storage.k8s.io/v1 diff --git a/modules/persistent-storage-csi-efs-sts.adoc b/modules/persistent-storage-csi-efs-sts.adoc index 62b4133527..6d34dd58a3 100644 --- a/modules/persistent-storage-csi-efs-sts.adoc +++ b/modules/persistent-storage-csi-efs-sts.adoc @@ -8,7 +8,7 @@ This procedure explains how to configure the AWS EFS CSI Driver Operator with {product-title} on AWS Security Token Service (STS). -Perform this procedure before you have installed the AWS EFS CSI Operator, but not yet installed the AWS EFS CSI driver as part of the _Installing the AWS EFS CSI Driver Operator_ procedure. 
+Perform this procedure before you have installed the AWS EFS CSI Operator, but not yet installed the AWS EFS CSI driver as part of the _Installing the AWS EFS CSI Driver Operator_ procedure. [IMPORTANT] ==== @@ -56,7 +56,7 @@ spec: . Run the `ccoctl` tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system (`/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml`). + -[source, terminal] +[source,terminal] ---- $ ccoctl aws create-iam-roles --name= --region= --credentials-requests-dir=/credrequests --identity-provider-arn=arn:aws:iam:::oidc-provider/-oidc.s3..amazonaws.com ---- @@ -71,14 +71,14 @@ $ ccoctl aws create-iam-roles --name= --region= --credentials- + .Example + -[source, terminal] +[source,terminal] ---- $ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com ---- + .Example output + -[source, terminal] +[source,terminal] ---- 2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml @@ -87,21 +87,21 @@ $ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credr . Create the AWS EFS cloud credentials and secret: + -[source, terminal] +[source,terminal] ---- $ oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml ---- + .Example + -[source, terminal] +[source,terminal] ---- $ oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml ---- + .Example output + -[source, terminal] +[source,terminal] ---- secret/aws-efs-cloud-credentials created ---- diff --git a/modules/persistent-storage-csi-efs-troubleshooting.adoc b/modules/persistent-storage-csi-efs-troubleshooting.adoc index f4aabef4c8..81ee9943a3 100644 --- a/modules/persistent-storage-csi-efs-troubleshooting.adoc +++ b/modules/persistent-storage-csi-efs-troubleshooting.adoc @@ -13,7 +13,7 @@ The following information provides guidance on how to troubleshoot issues with A * To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command: + -[source, terminal] +[source,terminal] ---- $ oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 @@ -24,14 +24,14 @@ $ oc adm must-gather * To show AWS EFS Operator errors, view the `ClusterCSIDriver` status: + -[source, terminal] +[source,terminal] ---- $ oc get clustercsidriver efs.csi.aws.com -o yaml ---- * If a volume cannot be mounted to a pod (as shown in the output of the following command): + -[source, terminal] +[source,terminal] ---- $ oc describe pod ... 
diff --git a/modules/persistent-storage-csi-sc-managing-cli.adoc b/modules/persistent-storage-csi-sc-managing-cli.adoc index fd8bbf4541..2c82a69b27 100644 --- a/modules/persistent-storage-csi-sc-managing-cli.adoc +++ b/modules/persistent-storage-csi-sc-managing-cli.adoc @@ -14,10 +14,10 @@ To manage the storage class using the CLI, run the following command: -[source, terminal] +[source,terminal] ---- oc patch clustercsidriver $DRIVERNAME --type=merge -p "{\"spec\":{\"storageClassState\":\"${STATE}\"}}" <1> ---- -<1> Where `${STATE}` is "Removed" or "Managed" or "Unmanaged". +<1> Where `${STATE}` is "Removed" or "Managed" or "Unmanaged". + Where `$DRIVERNAME` is the provisioner name. You can find the provisioner name by running the command `oc get sc`. diff --git a/modules/persistent-storage-csi-tp-enable.adoc b/modules/persistent-storage-csi-tp-enable.adoc index f019eefd99..4cb31b7844 100644 --- a/modules/persistent-storage-csi-tp-enable.adoc +++ b/modules/persistent-storage-csi-tp-enable.adoc @@ -38,7 +38,7 @@ After turning Technology Preview features on by using feature gates, they cannot $ oc get co storage ---- + -[source, terminal] +[source,terminal] ---- NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE storage 4.10.0-0.nightly-2021-11-15-034648 True False False 4m36s @@ -56,7 +56,7 @@ $ oc get pod -n openshift-cluster-csi-drivers ---- + ifdef::vsphere[] -[source, terminal] +[source,terminal] ---- NAME READY STATUS RESTARTS AGE vmware-vsphere-csi-driver-controller-5646dbbf54-cnsx7 9/9 Running 0 4h29m @@ -69,7 +69,7 @@ vmware-vsphere-csi-driver-operator-7c7fc474c-p544t 1/1 Running 0 ---- endif::vsphere[] ifdef::azure[] -[source, terminal] +[source,terminal] ---- NAME READY STATUS RESTARTS AGE azure-disk-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m @@ -83,7 +83,7 @@ azure-disk-csi-driver-operator-7d966fc6c5-x74x5 1/1 Running 0 ---- endif::azure[] ifdef::azure_file[] -[source, terminal] +[source,terminal] ---- NAME READY STATUS RESTARTS AGE azure-file-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m diff --git a/modules/persistent-storage-csi-vol-detach-non-graceful-shutdown-procedure.adoc b/modules/persistent-storage-csi-vol-detach-non-graceful-shutdown-procedure.adoc index d88eed861c..39782b29d3 100644 --- a/modules/persistent-storage-csi-vol-detach-non-graceful-shutdown-procedure.adoc +++ b/modules/persistent-storage-csi-vol-detach-non-graceful-shutdown-procedure.adoc @@ -19,7 +19,7 @@ To allow volumes to detach automatically from a node after a non-graceful node s . Ensure that the node is shutdown by running the following command and checking the status: + -[source, terminal] +[source,terminal] ---- oc get node <1> ---- @@ -32,7 +32,7 @@ If the node is not completely shut down, do not proceed with tainting the node. + . Taint the corresponding node object by running the following command: + -[source, terminal] +[source,terminal] ---- oc adm taint node node.kubernetes.io/out-of-service=nodeshutdown:NoExecute <1> ---- diff --git a/modules/persistent-storage-csi-vsphere-top-aware-infra-top.adoc b/modules/persistent-storage-csi-vsphere-top-aware-infra-top.adoc index fce61513be..8151265534 100644 --- a/modules/persistent-storage-csi-vsphere-top-aware-infra-top.adoc +++ b/modules/persistent-storage-csi-vsphere-top-aware-infra-top.adoc @@ -23,14 +23,14 @@ For more information about vSphere categories and tags, see the VMware vSphere d * Specify the `openshift-zone` and `openshift-region` categories that you created earlier. * Set `driverType` to `vSphere`. 
+ -[source, terminal] +[source,terminal] ---- ~ $ oc edit clustercsidriver csi.vsphere.vmware.com -o yaml ---- + .Example output + -[source, terminal] +[source,terminal] ---- apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver @@ -42,7 +42,7 @@ spec: observedConfig: null operatorLogLevel: Normal unsupportedConfigOverrides: null - driverConfig: + driverConfig: driverType: vSphere <1> vSphere: topologyCategories: <2> @@ -54,14 +54,14 @@ spec: . Verify that `CSINode` object has topology keys by running the following commands: + -[source, terminal] +[source,terminal] ---- ~ $ oc get csinode ---- + .Example output + -[source, terminal] +[source,terminal] ---- NAME DRIVERS AGE co8-4s88d-infra-2m5vd 1 27m @@ -73,14 +73,14 @@ co8-4s88d-worker-mbb46 1 47m co8-4s88d-worker-zlk7d 1 47m ---- + -[source, terminal] +[source,terminal] ---- ~ $ oc get csinode co8-4s88d-worker-j2hmg -o yaml ---- + .Example output + -[source, terminal] +[source,terminal] ---- ... spec: @@ -102,9 +102,9 @@ spec: . Create a tag to assign to datastores across failure domains: + -When an {product-title} spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful. +When an {product-title} spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful. + -.. In vCenter, create a category for tagging the datastores. For example, `openshift-zonal-datastore-cat`. You can use any other category name, provided the category uniquely is used for tagging datastores participating in {product-title} cluster. Also, ensure that `StoragePod`, `Datastore`, and `Folder` are selected as Associable Entities for the created category. +.. In vCenter, create a category for tagging the datastores. For example, `openshift-zonal-datastore-cat`. You can use any other category name, provided the category uniquely is used for tagging datastores participating in {product-title} cluster. Also, ensure that `StoragePod`, `Datastore`, and `Folder` are selected as Associable Entities for the created category. .. In vCenter, create a tag that uses the previously created category. This example uses the tag name `openshift-zonal-datastore`. .. Assign the previously created tag (in this example `openshift-zonal-datastore`) to each datastore in a failure domain that would be considered for dynamic provisioning. + @@ -119,14 +119,14 @@ You can use any names you like for categories and tags. The names used in this e .. Click *CREATE*. .. Type a name for the storage policy. .. For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the `openshift-zonal-datastore` tag). -+ ++ The datastores are listed in the storage compatibility table. . Create a new storage class that uses the new zoned storage policy: .. Click *Storage* > *StorageClasses*. .. On the *StorageClasses* page, click *Create StorageClass*. .. Type a name for the new storage class in *Name*. -.. Under *Provisioner*, select *csi.vsphere.vmware.com*. +.. Under *Provisioner*, select *csi.vsphere.vmware.com*. .. Under *Additional parameters*, for the StoragePolicyName parameter, set *Value* to the name of the new zoned storage policy that you created earlier. .. Click *Create*. 
+ diff --git a/modules/persistent-storage-csi-vsphere-top-aware-results.adoc b/modules/persistent-storage-csi-vsphere-top-aware-results.adoc index 1e56d38db9..b97cec2e94 100644 --- a/modules/persistent-storage-csi-vsphere-top-aware-results.adoc +++ b/modules/persistent-storage-csi-vsphere-top-aware-results.adoc @@ -9,14 +9,14 @@ Creating persistent volume claims (PVCs) and PVs from the topology aware storage class are truly zonal, and should use the datastore in their respective zone depending on how pods are scheduled: -[source, terminal] +[source,terminal] ---- ~ $ oc get pv -o yaml ---- .Example output -[source, terminal] +[source,terminal] ---- ... nodeAffinity: @@ -24,7 +24,7 @@ nodeAffinity: nodeSelectorTerms: - matchExpressions: - key: topology.csi.vmware.com/openshift-zone <1> - operator: In + operator: In values: - -key: topology.csi.vmware.com/openshift-region <1> diff --git a/modules/restarting-installation.adoc b/modules/restarting-installation.adoc index fae729304b..7283e7f82c 100644 --- a/modules/restarting-installation.adoc +++ b/modules/restarting-installation.adoc @@ -13,7 +13,7 @@ For a user-provisioned infrastructure (UPI) installation, you must manually dest . Destroy the cluster and remove all the resources associated with the cluster, including the hidden installer state files in the installation directory: + -[source, terminal] +[source,terminal] ---- $ ./openshift-install destroy cluster --dir <1> ---- @@ -22,7 +22,7 @@ definition files that the installation program creates. . Before reinstalling the cluster, delete the installation directory: + -[source, terminal] +[source,terminal] ---- $ rm -rf ---- diff --git a/modules/rosa-accessing-your-cluster.adoc b/modules/rosa-accessing-your-cluster.adoc index d7b0f1dbd6..c9ce30dd25 100644 --- a/modules/rosa-accessing-your-cluster.adoc +++ b/modules/rosa-accessing-your-cluster.adoc @@ -22,7 +22,7 @@ To access your cluster using an IDP account: . Add an IDP. .. The following command creates an IDP backed by GitHub. After running the command, follow the interactive prompts from the output to access your link:https://github.com/settings/developers[GitHub developer settings] and configure a new OAuth application. + -[source, terminal] +[source,terminal] ---- $ rosa create idp --cluster= --interactive ---- diff --git a/modules/rosa-aws-privatelink-create-cluster.adoc b/modules/rosa-aws-privatelink-create-cluster.adoc index 1316788999..bddc7b47a0 100644 --- a/modules/rosa-aws-privatelink-create-cluster.adoc +++ b/modules/rosa-aws-privatelink-create-cluster.adoc @@ -45,7 +45,7 @@ $ rosa create cluster --private-link --multi-az --cluster-name= [- . Enter the following command to check the status of your cluster. During cluster creation, the `State` field from the output will transition from `pending` to `installing`, and finally to `ready`. + -[source, terminal] +[source,terminal] ---- $ rosa describe cluster --cluster= ---- @@ -57,7 +57,7 @@ If installation fails or the `State` field does not change to `ready` after 40 m . Enter the following command to follow the OpenShift installer logs to track the progress of your cluster: + -[source, terminal] +[source,terminal] ---- $ rosa logs install --cluster= --watch ---- diff --git a/modules/rosa-creating-cluster.adoc b/modules/rosa-creating-cluster.adoc index 44a99b320b..dc58574dbb 100644 --- a/modules/rosa-creating-cluster.adoc +++ b/modules/rosa-creating-cluster.adoc @@ -30,7 +30,7 @@ Multiple availability zones (AZ) are recommended for production workloads. 
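A convenient way to watch the `State` field described above, rather than rerunning the command by hand, is to poll it. This sketch assumes an illustrative cluster name of `my-rosa-cluster`:

[source,terminal]
----
# Poll the cluster state every 60 seconds until it reports "ready"
$ watch -n 60 "rosa describe cluster --cluster=my-rosa-cluster | grep -i '^State:'"
----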
The d + * To create your cluster with the default cluster settings: + -[source, terminal] +[source,terminal] ---- $ rosa create cluster --cluster-name= ---- @@ -46,7 +46,7 @@ I: To determine when your cluster is Ready, run `rosa describe cluster rh-rosa-t ---- * To create a cluster using interactive prompts: + -[source, terminal] +[source,terminal] ---- $ rosa create cluster --interactive ---- @@ -58,7 +58,7 @@ $ rosa create cluster --interactive . Enter the following command to check the status of your cluster. During cluster creation, the `State` field from the output will transition from `pending` to `installing`, and finally to `ready`. + -[source, terminal] +[source,terminal] ---- $ rosa describe cluster --cluster= ---- @@ -91,7 +91,7 @@ If installation fails or the `State` field does not change to `ready` after 40 m . Track the progress of the cluster creation by watching the OpenShift installer logs: + -[source, terminal] +[source,terminal] ---- $ rosa logs install --cluster= --watch ---- diff --git a/modules/rosa-deleting-cluster.adoc b/modules/rosa-deleting-cluster.adoc index 95608a11e0..6ceef7607a 100644 --- a/modules/rosa-deleting-cluster.adoc +++ b/modules/rosa-deleting-cluster.adoc @@ -95,7 +95,7 @@ ifndef::sts[] . Enter the following command to delete a cluster and watch the logs, replacing `` with the name or ID of your cluster: endif::sts[] + -[source, terminal] +[source,terminal] ---- $ rosa delete cluster --cluster= --watch ---- @@ -110,7 +110,7 @@ endif::sts[] ifndef::sts[] . To clean up your CloudFormation stack, enter the following command: + -[source, terminal] +[source,terminal] ---- $ rosa init --delete ---- diff --git a/modules/rosa-enable-private-cluster-existing.adoc b/modules/rosa-enable-private-cluster-existing.adoc index fbfcc0aeef..e6dbfb7cb5 100644 --- a/modules/rosa-enable-private-cluster-existing.adoc +++ b/modules/rosa-enable-private-cluster-existing.adoc @@ -22,7 +22,7 @@ AWS VPC Peering, VPN, DirectConnect, or link:https://docs.aws.amazon.com/whitepa Enter the following command to enable the `--private` option on an existing cluster. -[source, terminal] +[source,terminal] ---- $ rosa edit cluster --cluster= --private ---- diff --git a/modules/rosa-enable-private-cluster-new.adoc b/modules/rosa-enable-private-cluster-new.adoc index b7704c5e8e..d18446fe25 100644 --- a/modules/rosa-enable-private-cluster-new.adoc +++ b/modules/rosa-enable-private-cluster-new.adoc @@ -22,7 +22,7 @@ AWS VPC Peering, VPN, DirectConnect, or link:https://docs.aws.amazon.com/whitepa Enter the following command to create a new private cluster. -[source, terminal] +[source,terminal] ---- $ rosa create cluster --cluster-name= --private ---- diff --git a/modules/rosa-getting-started-deleting-a-cluster.adoc b/modules/rosa-getting-started-deleting-a-cluster.adoc index c0fb3677d0..8aee7e60b1 100644 --- a/modules/rosa-getting-started-deleting-a-cluster.adoc +++ b/modules/rosa-getting-started-deleting-a-cluster.adoc @@ -33,7 +33,7 @@ endif::[] . Delete a cluster and watch the logs, replacing `` with the name or ID of your cluster: + -[source, terminal] +[source,terminal] ---- $ rosa delete cluster --cluster= --watch ---- diff --git a/modules/rosa-nodes-machine-pools-local-zones.adoc b/modules/rosa-nodes-machine-pools-local-zones.adoc index 5bae62394e..59a8974fac 100644 --- a/modules/rosa-nodes-machine-pools-local-zones.adoc +++ b/modules/rosa-nodes-machine-pools-local-zones.adoc @@ -46,7 +46,7 @@ $ rosa create machinepool -c -i + . 
Add the subnet and instance type for the machine pool in the ROSA CLI. After several minutes, the cluster will provision the nodes. + -[source, terminal] +[source,terminal] ---- I: Enabling interactive mode <1> ? Machine pool name: xx-lz-xx <2> diff --git a/modules/rosa-quickstart-instructions.adoc b/modules/rosa-quickstart-instructions.adoc index 250d1dc7f7..c2905c7373 100644 --- a/modules/rosa-quickstart-instructions.adoc +++ b/modules/rosa-quickstart-instructions.adoc @@ -8,7 +8,7 @@ If you have already created your first cluster and users, this list can serve as a command quick reference list when creating additional clusters and users. -[source, terminal] +[source,terminal] ---- ## Configures your AWS account and ensures everything is setup correctly $ rosa init diff --git a/modules/running-compliance-scans.adoc b/modules/running-compliance-scans.adoc index e5d56b236b..059bc729a5 100644 --- a/modules/running-compliance-scans.adoc +++ b/modules/running-compliance-scans.adoc @@ -194,7 +194,7 @@ settingsRef: . Create the `ScanSettingBinding` object by running: + -[source, terminal] +[source,terminal] ---- $ oc create -f .yaml -n openshift-compliance ---- diff --git a/modules/samples-operator-crd.adoc b/modules/samples-operator-crd.adoc index ea523aff94..7e4eb29727 100644 --- a/modules/samples-operator-crd.adoc +++ b/modules/samples-operator-crd.adoc @@ -17,7 +17,7 @@ You can configure the Cluster Samples Operator by editing the file with the prov * Access the Cluster Samples Operator configuration: + -[source, terminal] +[source,terminal] ---- $ oc edit configs.samples.operator.openshift.io/cluster -o yaml ---- diff --git a/modules/serverless-kafka-source-kn.adoc b/modules/serverless-kafka-source-kn.adoc index 6386f58f3b..fb4b59d110 100644 --- a/modules/serverless-kafka-source-kn.adoc +++ b/modules/serverless-kafka-source-kn.adoc @@ -21,7 +21,7 @@ You can use the `kn source kafka create` command to create a Kafka source by usi . To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs: + -[source, terminal] +[source,terminal] ---- $ kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display @@ -46,13 +46,13 @@ The `--servers`, `--topics`, and `--consumergroup` options specify the connectio . Optional: View details about the `KafkaSource` CR you created: + -[source, terminal] +[source,terminal] ---- $ kn source kafka describe ---- + .Example output -[source, terminal] +[source,terminal] ---- Name: example-kafka-source Namespace: kafka diff --git a/modules/serverless-kafka-source-yaml.adoc b/modules/serverless-kafka-source-yaml.adoc index 34b4703f52..593e79d48f 100644 --- a/modules/serverless-kafka-source-yaml.adoc +++ b/modules/serverless-kafka-source-yaml.adoc @@ -74,13 +74,13 @@ $ oc apply -f * Verify that the Kafka event source was created by entering the following command: + -[source, terminal] +[source,terminal] ---- $ oc get pods ---- + .Example output -[source, terminal] +[source,terminal] ---- NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 
1/1 Running 0 13m diff --git a/modules/serverless-services-network-policies-enabling-comms.adoc b/modules/serverless-services-network-policies-enabling-comms.adoc index 455db7f969..768afa8f14 100644 --- a/modules/serverless-services-network-policies-enabling-comms.adoc +++ b/modules/serverless-services-network-policies-enabling-comms.adoc @@ -26,28 +26,28 @@ If you do not want to allow access to your Knative application from all namespac .. Label the `knative-serving` namespace: + -[source, terminal] +[source,terminal] ---- $ oc label namespace knative-serving knative.openshift.io/system-namespace=true ---- .. Label the `knative-serving-ingress` namespace: + -[source, terminal] +[source,terminal] ---- $ oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true ---- .. Label the `knative-eventing` namespace: + -[source, terminal] +[source,terminal] ---- $ oc label namespace knative-eventing knative.openshift.io/system-namespace=true ---- .. Label the `knative-kafka` namespace: + -[source, terminal] +[source,terminal] ---- $ oc label namespace knative-kafka knative.openshift.io/system-namespace=true ---- diff --git a/modules/setting-up-cpu-manager.adoc b/modules/setting-up-cpu-manager.adoc index fa2b884d11..8ca686bab4 100644 --- a/modules/setting-up-cpu-manager.adoc +++ b/modules/setting-up-cpu-manager.adoc @@ -214,7 +214,7 @@ tasks 32706 ---- + .Example output -[source, terminal] +[source,terminal] ---- ... Capacity: @@ -245,7 +245,7 @@ Allocated resources: + This VM has two CPU cores. The `system-reserved` setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the `Node Allocatable` amount. You can see that `Allocatable CPU` is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: + -[source, terminal] +[source,terminal] ---- NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m diff --git a/modules/update-manifest-application.adoc b/modules/update-manifest-application.adoc index e73b568e45..400a9767cf 100644 --- a/modules/update-manifest-application.adoc +++ b/modules/update-manifest-application.adoc @@ -13,14 +13,14 @@ The Cluster Version Operator (CVO) implements this logical order through the con These dependencies are encoded in the filenames of the manifests in the release image: -[source, terminal] +[source,terminal] ---- 0000___.yaml ---- For example: -[source, terminal] +[source,terminal] ---- 0000_03_config-operator_01_proxy.crd.yaml ---- diff --git a/modules/virt-accessing-rdp-console.adoc b/modules/virt-accessing-rdp-console.adoc index 16663f729c..9f5d42596a 100644 --- a/modules/virt-accessing-rdp-console.adoc +++ b/modules/virt-accessing-rdp-console.adoc @@ -76,7 +76,7 @@ $ oc create -f .yaml . Start the VM. If the VM is already running, restart it. . Query the `Service` object to verify that it is available: + -[source, terminal] +[source,terminal] ---- $ oc get service -n example-namespace ---- diff --git a/modules/virt-adding-virtio-drivers-vm-yaml.adoc b/modules/virt-adding-virtio-drivers-vm-yaml.adoc index d0803fde43..fe84db9049 100644 --- a/modules/virt-adding-virtio-drivers-vm-yaml.adoc +++ b/modules/virt-adding-virtio-drivers-vm-yaml.adoc @@ -42,14 +42,14 @@ volumes: . 
Apply the changes: * If the VM is not running, run the following command: + -[source, terminal] +[source,terminal] ---- $ virtctl start ---- * If the VM is running, reboot the VM or run the following command: + -[source, terminal] +[source,terminal] ---- $ oc apply -f ---- diff --git a/modules/virt-binding-devices-vfio-driver.adoc b/modules/virt-binding-devices-vfio-driver.adoc index 17539f3522..66cd8ffe2f 100644 --- a/modules/virt-binding-devices-vfio-driver.adoc +++ b/modules/virt-binding-devices-vfio-driver.adoc @@ -13,13 +13,13 @@ To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values .Procedure . Run the `lspci` command to obtain the `vendor-ID` and the `device-ID` for the PCI device. + -[source, terminal] +[source,terminal] ---- $ lspci -nnv | grep -i nvidia ---- + .Example output -[source, terminal] +[source,terminal] ---- 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) ---- @@ -80,7 +80,7 @@ $ oc get MachineConfig ---- + .Example output -[source, terminal] +[source,terminal] ---- NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h
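# Example verification, not part of the original procedure (it assumes the Tesla V100 device
# IDs 10de:1eb8 shown earlier): after the node reboots with the new MachineConfig, confirm
# that the PCI device is now bound to the vfio-pci driver.
$ lspci -nnk -d 10de:1eb8 | grep -i 'kernel driver'
Kernel driver in use: vfio-pci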