mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-7596: correct space in attribute code block notation

This commit is contained in:
Shauna Diaz
2023-09-06 10:14:35 -04:00
committed by openshift-cherrypick-robot
parent 8911b2e63e
commit 0e34ed2c0c
80 changed files with 234 additions and 234 deletions

View File

@@ -7,31 +7,31 @@
//* Initiate OVS:
//+
//[source, terminal]
//[source,terminal]
//----
//$ sudo systemctl enable openvswitch --now
//----
//* Add the network bridge:
//+
//[source, terminal]
//[source,terminal]
//----
//$ sudo ovs-vsctl add-br br-ex
//----
//* Add the interface to the network bridge:
//+
//[source, terminal]
//[source,terminal]
//----
//$ sudo ovs-vsctl add-port br-ex <physical-interface-name>
//----
//The `<physical-interface-name>` is the network interface name where the node IP address is assigned.
//* Get the bridge up and running:
//+
//[source, terminal]
//[source,terminal]
//----
//$ sudo ip link set br-ex up
//----
//* After `br-ex up` is running, assign the node IP address to `br-ex` bridge:
//[source, terminal]
//[source,terminal]
//----
//$ sudo ...
//----

View File

@@ -21,21 +21,21 @@ Run the commands listed in each step that follows to restore the `NodePort` serv
. Find the name of the ovnkube-master pod that you want to restart by running the following command:
+
[source, terminal]
[source,terminal]
----
$ pod=$(oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master | awk -F " " '{print $1}')
----
. Force a restart of the ovnkube-master pod by running the following command:
+
[source, terminal]
[source,terminal]
----
$ oc -n openshift-ovn-kubernetes delete pod $pod
----
. Optional: To confirm that the ovnkube-master pod restarted, run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get pods -n openshift-ovn-kubernetes
----

View File

@@ -24,7 +24,7 @@ Data backups are automatic on `rpm-ostree` systems. If you are not using an `rpm
* Logs print to the console during manual backups.
* Logs are automatically generated for `rpm-ostree` system automated backups as part of the {product-title} journal logs. You can check the logs by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo journalctl -u microshift
----
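To narrow the journal output to a recent window, a time filter can be added. A minimal sketch (the one-hour window is an arbitrary example):
[source,terminal]
----
$ sudo journalctl -u microshift --since "1 hour ago"
----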

View File

@@ -39,7 +39,7 @@ Component EFI
ifndef::openshift-origin[]
+
.Example output for `aarch64`
[source, terminal]
[source,terminal]
----
Component EFI
Installed: grub2-efi-aa64-1:2.02-99.el8_4.1.aarch64,shim-aa64-15.4-2.el8_1.aarch64

View File

@@ -6,14 +6,14 @@
You can manually clear the CRI-O ephemeral storage if you experience the following issues:
* A node cannot run any pods, and this error appears:
[source, terminal]
[source,terminal]
+
----
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory
----
+
* You cannot create a new container on a working node, and the "can't stat lower layer" error appears:
[source, terminal]
[source,terminal]
+
----
can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.
@@ -35,14 +35,14 @@ Follow this process to completely wipe the CRI-O storage and resolve the errors.
.Procedure
. Use `cordon` on the node to prevent any workload from being scheduled if the node returns to the `Ready` status. You will know that scheduling is disabled when `SchedulingDisabled` appears in the Status section:
[source, terminal]
[source,terminal]
+
----
$ oc adm cordon <nodename>
----
+
. Drain the node as the cluster-admin user:
[source, terminal]
[source,terminal]
+
----
$ oc adm drain <nodename> --ignore-daemonsets --delete-emptydir-data
@@ -54,7 +54,7 @@ The `terminationGracePeriodSeconds` attribute of a pod or pod template controls
====
. When the node returns, reconnect to the node by using SSH or the console, and then switch to the root user:
[source, terminal]
[source,terminal]
+
----
$ ssh core@node1.example.com
@@ -62,7 +62,7 @@ $ sudo -i
----
+
. Manually stop the kubelet:
[source, terminal]
[source,terminal]
+
----
# systemctl stop kubelet
@@ -71,35 +71,35 @@ $ sudo -i
. Stop the containers and pods:
.. Use the following command to stop the pods that are not in the `HostNetwork`. They must be removed first because their removal relies on the networking plugin pods, which are in the `HostNetwork`.
[source, terminal]
[source,terminal]
+
----
.. for pod in $(crictl pods -q); do if [[ "$(crictl inspectp $pod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f $pod; fi; done
----
.. Stop all other pods:
[source, terminal]
[source,terminal]
+
----
# crictl rmp -fa
----
+
. Manually stop the crio services:
[source, terminal]
[source,terminal]
+
----
# systemctl stop crio
----
+
. After you run those commands, you can completely wipe the ephemeral storage:
[source, terminal]
[source,terminal]
+
----
# crio wipe -f
----
+
. Start the crio and kubelet service:
[source, terminal]
[source,terminal]
+
----
# systemctl start crio
@@ -107,14 +107,14 @@ $ sudo -i
----
+
. You will know that the cleanup worked if the crio and kubelet services are started and the node is in the `Ready` status:
[source, terminal]
[source,terminal]
+
----
$ oc get nodes
----
+
.Example output
[source, terminal]
[source,terminal]
+
----
NAME STATUS ROLES AGE VERSION
@@ -122,14 +122,14 @@ ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v
----
+
. Mark the node schedulable. You will know that the scheduling is enabled when `SchedulingDisabled` is no longer in status:
[source, terminal]
[source,terminal]
+
----
$ oc adm uncordon <nodename>
----
+
.Example output
[source, terminal]
[source,terminal]
+
----
NAME STATUS ROLES AGE VERSION

View File

@@ -9,13 +9,13 @@ Log messages detailing the assigned devices are recorded in the respective Tuned
* An `INFO` message is recorded detailing the successfully assigned devices:
+
[source, terminal]
[source,terminal]
----
INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3
----
* A `WARNING` message is recorded if none of the devices can be assigned:
+
[source, terminal]
[source,terminal]
----
WARNING tuned.plugins.base: instance net_test: no matching devices available
----
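A quick way to look for either message is to filter the TuneD pod logs. A sketch, assuming you substitute the actual TuneD pod name:
[source,terminal]
----
$ oc logs <tuned_pod> -n openshift-cluster-node-tuning-operator | grep -E 'assigning devices|no matching devices'
----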

View File

@@ -78,7 +78,7 @@ FAIL
The same output can indicate different results for different workloads. For example, spikes up to 18μs are acceptable for 4G DU workloads, but not for 5G DU workloads.
.Example of good results
[source, terminal]
[source,terminal]
----
running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m
# Histogram
@@ -111,7 +111,7 @@ More histogram entries ...
----
.Example of bad results
[source, terminal]
[source,terminal]
----
running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m
# Histogram

View File

@@ -122,7 +122,7 @@ You can capture the following types of results:
* The combined set of the rough tests with the best results and configuration settings.
.Example of good results
[source, terminal]
[source,terminal]
----
hwlatdetect: test duration 3600 seconds
detector: tracer
@@ -142,7 +142,7 @@ Samples recorded: 0
The `hwlatdetect` tool only provides output if the sample exceeds the specified threshold.
.Example of bad results
[source, terminal]
[source,terminal]
----
hwlatdetect: test duration 3600 seconds
detector: tracer

View File

@@ -15,21 +15,21 @@
.. Verify what pods are running in the namespace:
+
[source, terminal]
[source,terminal]
----
$ oc get pods -n <namespace>
----
+
For example, to verify what pods are running in the `workshop` namespace, run the following:
+
[source, terminal]
[source,terminal]
----
$ oc get pods -n workshop
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
parksmap-1-4xkwf 1/1 Running 0 2m17s
@@ -38,14 +38,14 @@ parksmap-1-deploy 0/1 Completed 0 2m22s
+
.. Inspect the pods:
+
[source, terminal]
[source,terminal]
----
$ oc get pod parksmap-1-4xkwf -n workshop -o yaml
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
apiVersion: v1
kind: Pod
@@ -97,13 +97,13 @@ Conversely with a workload that requires `privilegeEscalation: true` this worklo
[id="newly_installed_{context}"]
== Newly installed cluster
For newly installed {product-title} 4.11 or later clusters, the `restricted-v2` SCC replaces the `restricted` SCC as an SCC that is available to be used by any authenticated user. A workload with `privilegeEscalation: true` is not admitted into the cluster because `restricted-v2` is the only SCC available for authenticated users by default.
For newly installed {product-title} 4.11 or later clusters, the `restricted-v2` SCC replaces the `restricted` SCC as an SCC that is available to be used by any authenticated user. A workload with `privilegeEscalation: true` is not admitted into the cluster because `restricted-v2` is the only SCC available for authenticated users by default.
The `privilegeEscalation` feature is allowed by the `restricted` SCC but not by `restricted-v2`. The `restricted-v2` SCC denies more features than the `restricted` SCC did.
To admit a workload with `privilegeEscalation: true` into a newly installed {product-title} 4.11 or later cluster, give the ServiceAccount running the workload access to the `restricted` SCC (or any other SCC that can admit this workload) by using a RoleBinding. Run the following command:
[source, terminal]
[source,terminal]
----
$ oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>
----
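For illustration only, with hypothetical names (a `workshop` namespace and a `parksmap-sa` service account), the command might look like this:
[source,terminal]
----
$ oc -n workshop adm policy add-scc-to-user restricted -z parksmap-sa
----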

View File

@@ -21,7 +21,7 @@ Setting a large value for the minimum HAProxy reload interval can cause latency
* Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command:
+
[source, terminal]
[source,terminal]
----
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}'
----
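To confirm the patch was applied, the field can be read back, for example:
[source,terminal]
----
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o jsonpath='{.spec.tuningOptions.reloadInterval}'
----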

View File

@@ -6,7 +6,7 @@
[id="core-user-password_{context}"]
= Changing the core user password for node access
By default, {op-system-first} creates a user named `core` on the nodes in your cluster. You can use the `core` user to access the node through a cloud provider serial console or a bare metal baseboard management controller (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the `oc debug node` command. However, by default, there is no password for this user, so you cannot log in without creating one.
By default, {op-system-first} creates a user named `core` on the nodes in your cluster. You can use the `core` user to access the node through a cloud provider serial console or a bare metal baseboard management controller (BMC). This can be helpful, for example, if a node is down and you cannot access that node by using SSH or the `oc debug node` command. However, by default, there is no password for this user, so you cannot log in without creating one.
You can create a password for the `core` user by using a machine config. The Machine Config Operator (MCO) assigns the password and injects the password into the `/etc/shadow` file, allowing you to log in with the `core` user. The MCO does not examine the password hash. As such, the MCO cannot report if there is a problem with the password.
@@ -17,7 +17,7 @@ You can create a password for the `core` user by using a machine config. The Mac
* If you have a machine config that includes an `/etc/shadow` file or a systemd unit that sets a password, it takes precedence over the password hash.
====
You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account.
You can change the password, if needed, by editing the machine config you used to create the password. Also, you can remove the password by deleting the machine config. Deleting the machine config does not remove the user account.
.Prerequisites
@@ -49,7 +49,7 @@ spec:
. Create the machine config by running the following command:
+
[source,yaml]
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
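The Machine Config Operator rolls the new machine config out to the affected pool. One way to watch that rollout, as a sketch, is to check the pool status until the `UPDATED` column reports `True`:
[source,terminal]
----
$ oc get machineconfigpool
----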

View File

@@ -47,7 +47,7 @@ $ oc get ctrcfg
----
.Example output
[source, terminal]
[source,terminal]
----
NAME AGE
ctr-pid 24m
@@ -62,7 +62,7 @@ $ oc get mc | grep container
----
.Example output
[source, terminal]
[source,terminal]
----
...
01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m
@@ -181,12 +181,12 @@ worker rendered-worker-169 False True False 3 1
.. Open an `oc debug` session to a node in the machine config pool and run `chroot /host`.
+
[source, terminal]
[source,terminal]
----
$ oc debug node/<node_name>
----
+
[source, terminal]
[source,terminal]
----
sh-4.4# chroot /host
----

View File

@@ -35,7 +35,7 @@ If you have a machine config with a `kubelet-9` suffix, and you create another `
$ oc get kubeletconfig
----
[source, terminal]
[source,terminal]
----
NAME AGE
set-max-pods 15m
@@ -47,7 +47,7 @@ set-max-pods 15m
$ oc get mc | grep kubelet
----
[source, terminal]
[source,terminal]
----
...
99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m
@@ -194,7 +194,7 @@ $ oc get kubeletconfig
----
+
.Example output
[source, terminal]
[source,terminal]
----
NAME AGE
set-max-pods 15m

View File

@@ -24,7 +24,7 @@ To change the hardware speed tolerance for etcd, complete the following steps.
. Check to see what the current value is by entering the following command:
+
[source, terminal]
[source,terminal]
----
$ oc describe etcd/cluster | grep "Control Plane Hardware Speed"
----
@@ -74,7 +74,7 @@ The Etcd "cluster" is invalid: spec.controlPlaneHardwareSpeed: Unsupported value
. Verify that the value was changed by entering the following command:
+
[source, terminal]
[source,terminal]
----
$ oc describe etcd/cluster | grep "Control Plane Hardware Speed"
----
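The value itself is set on the same `etcd/cluster` resource. A hedged sketch of a patch, assuming `Slower` is one of the supported values referenced in the validation error shown above:
[source,terminal]
----
$ oc patch etcd/cluster --type=merge -p '{"spec": {"controlPlaneHardwareSpeed": "Slower"}}'
----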

View File

@@ -15,7 +15,7 @@ You can monitor the health status of an Argo CD application by running Prometheu
. To check the health status of your Argo CD application, enter the Prometheus Query Language (PromQL) query similar to the following example in the *Expression* field:
+
.Example
[source, terminal]
[source,terminal]
----
sum(argocd_app_info{dest_namespace=~"<your_defined_namespace>",health_status!=""}) by (health_status) <1>
----
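As a variation on the same metric, a sketch of a query that counts only degraded applications per namespace:
[source,terminal]
----
sum(argocd_app_info{dest_namespace=~"<your_defined_namespace>",health_status="Degraded"}) by (dest_namespace)
----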

View File

@@ -54,7 +54,7 @@ $ CLUSTERNS="clusters"
$ mkdir clusterDump-${CLUSTERNS}-${CLUSTERNAME}
----
+
[source, terminal]
[source,terminal]
----
$ hypershift dump cluster \
--name ${CLUSTERNAME} \
@@ -71,7 +71,7 @@ $ hypershift dump cluster \
2023-06-06T12:18:21+02:00 INFO Successfully archived dump {"duration": "1.519376292s"}
----
* To configure the command-line interface so that it impersonates all of the queries against the management cluster by using a username or service account, enter the `hypershift dump cluster` command with the `--as` flag.
* To configure the command-line interface so that it impersonates all of the queries against the management cluster by using a username or service account, enter the `hypershift dump cluster` command with the `--as` flag.
+
The service account must have enough permissions to query all of the objects from the namespaces, so the `cluster-admin` role is recommended. The service account must be located in or have permissions to query the namespace of the `HostedControlPlane` resource.
+

View File

@@ -18,20 +18,20 @@ oc get imagestreams -nopenshift
. Fetch the tags for every imagestream in the `openshift` namespace by running the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
----
+
For example:
+
[source, terminal]
[source,terminal]
----
$ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -nopenshift
----
+
.Example output
[source, terminal]
[source,terminal]
----
1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11
1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12

View File

@@ -61,13 +61,13 @@ The following procedure creates a post-installation mirror configuration, where
* Ensure that there are no `ImageContentSourcePolicy` objects on your cluster. For example, you can use the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get ImageContentSourcePolicy
----
+
.Example output
[source, terminal]
[source,terminal]
----
No resources found
----

View File

@@ -34,13 +34,13 @@ The timeout configuration option is an advanced tuning technique that can be use
The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes:
[source, terminal]
[source,terminal]
----
$ oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}'
----
.Verification
[source, terminal]
[source,terminal]
----
$ oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness:
Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3

View File

@@ -23,7 +23,7 @@ $ ./openshift-install coreos print-stream-json
. Use the output of the command to find the location of the Nutanix image, and click the link to download it.
+
.Example output
[source, terminal]
[source,terminal]
----
"nutanix": {
"release": "411.86.202210041459-0",

View File

@@ -67,7 +67,7 @@ In which case, the PVCs are not removed when uninstalling the cluster, which mig
.. Log in to the IBM Cloud using the CLI.
.. To list the PVCs, run the following command:
+
[source, terminal]
[source,terminal]
----
$ ibmcloud is volumes --resource-group-name <infrastructure_id>
----
@@ -76,7 +76,7 @@ For more information about listing volumes, see the link:https://cloud.ibm.com/d
.. To delete the PVCs, run the following command:
+
[source, terminal]
[source,terminal]
----
$ ibmcloud is volume-delete --force <volume_id>
----

View File

@@ -77,25 +77,25 @@ You need to ensure that your BMC supports all of the redfish APIs before install
List of redfish APIs::
* Power on
+
[source, terminal]
[source,terminal]
----
curl -u $USER:$PASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"Action": "Reset", "ResetType": "On"}' https://$SERVER/redfish/v1/Systems/$SystemID/Actions/ComputerSystem.Reset
----
* Power off
+
[source, terminal]
[source,terminal]
----
curl -u $USER:$PASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"Action": "Reset", "ResetType": "ForceOff"}' https://$SERVER/redfish/v1/Systems/$SystemID/Actions/ComputerSystem.Reset
----
* Temporary boot using `pxe`
* Temporary boot using `pxe`
+
[source, terminal]
[source,terminal]
----
curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}}'
----
* Set BIOS boot mode using `Legacy` or `UEFI`
+
[source, terminal]
[source,terminal]
----
curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}}'
----
@@ -103,13 +103,13 @@ curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Serve
List of redfish-virtualmedia APIs::
* Set temporary boot device using `cd` or `dvd`
+
[source, terminal]
[source,terminal]
----
curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" https://$Server/redfish/v1/Systems/$SystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}'
----
* Mount virtual media
+
[source, terminal]
[source,terminal]
----
curl -u $USER:$PASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://$Server/redfish/v1/Managers/$ManagerID/VirtualMedia/$VmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}'
----
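If the `$SystemID` value is not known, it can usually be discovered from the standard Redfish systems collection. A sketch (the response format varies by vendor):
[source,terminal]
----
curl -u $USER:$PASS https://$SERVER/redfish/v1/Systems/
----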

View File

@@ -20,13 +20,13 @@ On `rpm-ostree` systems, {product-title} creates an automatic backup on every st
.Procedure
. Manually create a backup by using the default name and parent directory, `/var/lib/microshift-backups/<default-backup-name>`, by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo microshift backup
----
.Example output
+
[source, terminal]
[source,terminal]
----
??? I0829 07:32:12.313961 6586 run_check.go:28] "Service state" service="microshift.service" state="inactive"
??? I0829 07:32:12.318803 6586 run_check.go:28] "Service state" service="microshift-etcd.scope" state="inactive"
@@ -40,21 +40,21 @@ $ sudo microshift backup
. Optional: Manually create a backup with a specific name in the default directory by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo microshift backup --name <my-custom-backup>
----
. Optional: Manually create a backup in a specific parent directory by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo microshift backup --storage /var/lib/<custom-storage-location>
----
. Optional: Manually create a backup in a specific parent directory with a custom name by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo microshift backup --storage /var/lib/<custom-storage-location>/ --name <my-custom-backup>
----

View File

@@ -27,14 +27,14 @@ The minimum permissible value for `memoryLimitMB` on {product-title} is 128 MB.
. After modifying the `memoryLimitMB` value in `/etc/microshift/config.yaml`, restart {product-title} by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo systemctl restart microshift
----
. Verify the new `memoryLimitMB` value is in use by running the following command:
+
[source, terminal]
[source,terminal]
----
$ systemctl show --property=MemoryHigh microshift-etcd.scope
----
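For reference, a minimal sketch of the relevant stanza in `/etc/microshift/config.yaml`, assuming the `etcd.memoryLimitMB` key:
[source,yaml]
----
etcd:
  memoryLimitMB: 128
----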

View File

@@ -28,7 +28,7 @@ $ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=<custom IP r
. To allow internal traffic from pods through the network gateway, run the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo firewall-offline-cmd --permanent --zone=trusted --add-source=169.254.169.1
----

View File

@@ -53,19 +53,19 @@ The following are examples of commands used when requiring external access throu
* Configuring a port for the {product-title} API server:
+
[source, terminal]
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
----
* Configuring ports for applications exposed through the router:
+
[source, terminal]
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
----
+
[source, terminal]
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
----

View File

@@ -30,14 +30,14 @@ The following are examples of commands for settings that are mandatory for firew
* Configure host network pod access to other pods:
+
[source, terminal]
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
----
* Configure host network pod access to services backed by Host endpoints, such as the {product-title} API:
+
[source, terminal]
[source,terminal]
----
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
----
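Because these rules are added with `--permanent`, they take effect after the firewall configuration is reloaded, for example:
[source,terminal]
----
$ sudo firewall-cmd --reload
----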

View File

@@ -12,14 +12,14 @@ Access the output of health check scripts in the system log after an update by u
* To access the result of update checks, run the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo grub2-editenv - list | grep ^boot_success
----
.Example output for a successful update
[source, terminal]
[source,terminal]
----
boot_success=1
----

View File

@@ -12,13 +12,13 @@ You can manually access the output of health checks in the system log by using t
* To access the results of a health check, run the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo journalctl -o cat -u greenboot-healthcheck.service
----
.Example output of a failed health check
[source, terminal]
[source,terminal]
----
...
...

View File

@@ -13,14 +13,14 @@ You can access the output of health check scripts in the system log. For example
* To access the results of a prerollback script, run the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo journalctl -o cat -u redboot-task-runner.service
----
.Example output of a prerollback script
[source, terminal]
[source,terminal]
----
...
...

View File

@@ -12,14 +12,14 @@ The default configuration of the `systemd` journal service stores the data in th
. Make the directory by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo mkdir -p /etc/systemd/journald.conf.d
----
. Create the configuration file by running the following command:
+
[source, terminal]
[source,terminal]
----
cat <<EOF | sudo tee /etc/systemd/journald.conf.d/microshift.conf &>/dev/null
[Journal]

View File

@@ -17,14 +17,14 @@
. To test that greenboot is running a health check script file, reboot the host by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo reboot
----
. Examine the output of greenboot health checks by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo journalctl -o cat -u greenboot-healthcheck.service
----
@@ -36,7 +36,7 @@ $ sudo journalctl -o cat -u greenboot-healthcheck.service
+
.Example output
[source, terminal]
[source,terminal]
----
GRUB boot variables:
boot_success=0

View File

@@ -12,13 +12,13 @@ After a successful start, greenboot sets the variable `boot_success=` to `1` in
* To access the overall status of system health checks, run the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo grub2-editenv - list | grep ^boot_success
----
.Example output for a successful system start
[source, terminal]
[source,terminal]
----
boot_success=1
----

View File

@@ -25,14 +25,14 @@ Run the commands listed in each step that follows to restore the iptable rules.
. Find the name of the ovnkube-master pod that you want to restart by running the following command:
+
[source, terminal]
[source,terminal]
----
$ pod=$(oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master | awk -F " " '{print $1}')
----
. Delete the ovnkube-master pod:
+
[source, terminal]
[source,terminal]
----
$ oc -n openshift-ovn-kubernetes delete pod $pod
----
@@ -41,7 +41,7 @@ This command causes the daemon set pod to be automatically restarted, causing a
. Confirm that the iptables have reconciled by running the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo iptables-save | grep NODEPORT
:OVN-KUBE-NODEPORT - [0:0]
@@ -53,7 +53,7 @@ $ sudo iptables-save | grep NODEPORT
. You can also confirm that a new ovnkube-master pod has been started by running the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get pods -n openshift-ovn-kubernetes
----

View File

@@ -12,7 +12,7 @@ When {product-title} runs, it uses LVMS configuration from `/etc/microshift/lvmd
* To create the `lvmd.yaml` configuration file, run the following command:
+
[source, terminal]
[source,terminal]
----
$ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml
----
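Because the configuration is read when {product-title} starts, restarting the service is one way to pick up the new file. A sketch:
[source,terminal]
----
$ sudo systemctl restart microshift
----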

View File

@@ -21,21 +21,21 @@ Run the commands listed in each step that follows to restore the `NodePort` serv
. Find the name of the ovnkube-master pod that you want to restart by running the following command:
+
[source, terminal]
[source,terminal]
----
$ pod=$(oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master | awk -F " " '{print $1}')
----
. Force a restart of the ovnkube-master pod by running the following command:
+
[source, terminal]
[source,terminal]
----
$ oc -n openshift-ovn-kubernetes delete pod $pod
----
. Optional: To confirm that the ovnkube-master pod restarted, run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get pods -n openshift-ovn-kubernetes
----

View File

@@ -12,21 +12,21 @@ Not all OpenShift CLI (oc) tool commands are relevant for {product-title} deploy
For example, when the following `new-project` command is run:
[source, terminal]
[source,terminal]
----
$ oc new-project test
----
The following error message can be generated:
[source, terminal]
[source,terminal]
----
Error from server (NotFound): the server could not find the requested resource (get projectrequests.project.openshift.io)
----
And when the `get projects` command is run, another error can be generated as follows:
[source, terminal]
[source,terminal]
----
$ oc get projects
error: the server doesn't have a resource type "projects"

View File

@@ -12,13 +12,13 @@ OVN-Kubernetes sets up an iptable chain in the network address translation (NAT)
. View the iptable rules for the NodePort service by running the following command:
+
[source, terminal]
[source,terminal]
----
$ iptables-save | grep NODEPORT
----
+
.Example output
[source, terminal]
[source,terminal]
----
-A OUTPUT -j OVN-KUBE-NODEPORT
-A OVN-KUBE-NODEPORT -p tcp -m addrtype --dst-type LOCAL -m tcp --dport 30326 -j DNAT --to-destination 10.43.95.170:80
@@ -27,13 +27,13 @@ OVN-Kubernetes configures the `OVN-KUBE-NODEPORT` iptable chain in the NAT table
. Route the packet through the network with routing rules by running the following command:
+
[source, terminal]
[source,terminal]
----
$ ip route
----
+
.Example output
[source, terminal]
[source,terminal]
----
10.43.0.0/16 via 192.168.122.1 dev br-ex mtu 1400
----

View File

@@ -134,7 +134,7 @@ $ oc login -u ${ADMIN} -p ${ADMINPASSWORD} ${API}
[... output omitted ...]
----
+
[source, terminal]
[source,terminal]
----
$ oc create -f etcd-mc.yml

machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created
----

View File

@@ -6,29 +6,29 @@
[id="multi-architecture-creating-arm64-bootimage_{context}"]
= Creating an ARM64 boot image using the Azure image gallery
The following procedure describes how to manually generate an ARM64 boot image.
The following procedure describes how to manually generate an ARM64 boot image.
.Prerequisites
* You installed the Azure CLI (`az`).
* You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
* You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary.
.Procedure
. Log in to your Azure account:
. Log in to your Azure account:
+
[source,terminal]
----
$ az login
----
. Create a storage account and upload the `arm64` virtual hard disk (VHD) to your storage account. The {product-title} installation program creates a resource group; however, the boot image can also be uploaded to a custom-named resource group:
. Create a storage account and upload the `arm64` virtual hard disk (VHD) to your storage account. The {product-title} installation program creates a resource group; however, the boot image can also be uploaded to a custom-named resource group:
+
[source,terminal]
----
$ az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS <1>
----
+
<1> The `westus` object is an example region.
<1> The `westus` object is an example region.
+
. Create a storage container using the storage account you generated:
+
@@ -50,7 +50,7 @@ $ RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/c
----
$ BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd
----
. Generate a shared access signature (SAS) token. Use this token to upload the {op-system} VHD to your storage container with the following commands:
. Generate a shared access signature (SAS) token. Use this token to upload the {op-system} VHD to your storage container with the following commands:
+
[source,terminal]
----
@@ -63,7 +63,7 @@ $ sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${S
----
. Copy the {op-system} VHD into the storage container:
+
[source, terminal]
[source,terminal]
----
$ az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token "$sas" \
--source-uri "${RHCOS_VHD_ORIGIN_URL}" \
@@ -92,21 +92,21 @@ $ az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STO
}
----
+
<1> If the status parameter displays the `success` object, the copying process is complete.
<1> If the status parameter displays the `success` object, the copying process is complete.
. Create an image gallery using the following command:
+
[source,terminal]
----
$ az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}
----
Use the image gallery to create an image definition. In the following example command, `rhcos-arm64` is the name of the image definition.
Use the image gallery to create an image definition. In the following example command, `rhcos-arm64` is the name of the image definition.
+
[source,terminal]
----
$ az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2
----
. To get the URL of the VHD and set it to `RHCOS_VHD_URL` as the file name, run the following command:
. To get the URL of the VHD and set it to `RHCOS_VHD_URL` as the file name, run the following command:
+
[source,terminal]
----
@@ -118,7 +118,7 @@ $ RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c
----
$ az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}
----
. Your `arm64` boot image is now generated. You can access the ID of your image with the following command:
. Your `arm64` boot image is now generated. You can access the ID of your image with the following command:
+
[source,terminal]
----

View File

@@ -5,13 +5,13 @@
:_content-type: PROCEDURE
[id="multi-architecture-modify-machine-set_{context}"]
= Adding a multi-architecture compute machine set to your cluster
= Adding a multi-architecture compute machine set to your cluster
To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure".
To add ARM64 compute nodes to your cluster, you must create an Azure compute machine set that uses the ARM64 boot image. To create your own custom compute machine set on Azure, see "Creating a compute machine set on Azure".
.Prerequisites
.Prerequisites
* You installed the OpenShift CLI (`oc`).
* You installed the OpenShift CLI (`oc`).
.Procedure
* Create a compute machine set and modify the `resourceID` and `vmSize` parameters with the following command. This compute machine set will control the `arm64` worker nodes in your cluster:
@@ -20,7 +20,7 @@ To add ARM64 compute nodes to your cluster, you must create an Azure compute mac
----
$ oc create -f arm64-machine-set-0.yaml
----
.Sample YAML compute machine set with `arm64` boot image
.Sample YAML compute machine set with `arm64` boot image
+
[source,yaml]
----
@@ -81,12 +81,12 @@ spec:
vmSize: Standard_D4ps_v5 <2>
vnet: <infrastructure_id>-vnet
zone: "<zone>"
----
----
<1> Set the `resourceID` parameter to the `arm64` boot image.
<2> Set the `vmSize` parameter to the instance type used in your installation. Some example instance types are `Standard_D4ps_v5` or `D8ps`.
.Verification
. Verify that the new ARM64 machines are running by entering the following command:
. Verify that the new ARM64 machines are running by entering the following command:
+
[source,terminal]
----
@@ -101,7 +101,7 @@ NAME DESIRED CURRENT READY AVA
----
. You can check that the nodes are ready and schedulable with the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get nodes
$ oc get nodes
----

View File

@@ -8,15 +8,15 @@
Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki. Access is enabled for project admins. Project admins who have limited access to some namespaces can access flows for only those namespaces.
.Prerequisite
* You have installed link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7]
* You have installed link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7]
* The `FlowCollector` `spec.loki.authToken` configuration must be set to `FORWARD`.
* You must be logged in as a project administrator.
.Procedure
. Authorize reading permission to `user1` by running the following command:
. Authorize reading permission to `user1` by running the following command:
+
[source, terminal]
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-reader user1
----
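Group access works the same way. A sketch, assuming a placeholder group name:
[source,terminal]
----
$ oc adm policy add-cluster-role-to-group netobserv-reader <group_name>
----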

View File

@@ -52,8 +52,8 @@ If you do not add any labels to an entry in the `spec.recommend` section of the
. Create the `ConfigMap` object in the management cluster:
+
[source, terminal]
----
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
----
@@ -100,7 +100,7 @@ default 7m36s
rendered 7m36s
tuned-1 65s
----
. List the `Profile` objects in the hosted cluster:
+
[source,terminal]

View File

@@ -8,7 +8,7 @@
Clients initiate the execution of a remote command in a container by issuing a
request to the Kubernetes API server:
[source, terminal]
[source,terminal]
----
/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>
----
@@ -23,7 +23,7 @@ In the above URL:
For example:
[source, terminal]
[source,terminal]
----
/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date
----

View File

@@ -22,7 +22,7 @@ As a cluster administrator, you can use a custom node selector to configure the
. Modify the DNS Operator object named `default`:
+
[source, terminal]
[source,terminal]
----
$ oc edit dns.operator/default
----
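For orientation, a hedged sketch of the `spec.nodePlacement.nodeSelector` stanza you might add in the editor; the label shown is only an example:
[source,yaml]
----
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/control-plane: ""
----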

View File

@@ -39,7 +39,7 @@ The egress router image is not compatible with Amazon AWS, Azure Cloud, or any o
In _redirect mode_, an egress router pod configures `iptables` rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. For example:
[source, terminal]
[source,terminal]
----
$ curl <router_service_IP> <port>
----

View File

@@ -8,7 +8,7 @@
In _redirect mode_, an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. For example:
[source, terminal]
[source,terminal]
----
$ curl <router_service_IP> <port>
----

View File

@@ -133,7 +133,7 @@ If the configuration is correct, you receive a JSON object in response:
You can also verify application accessibility by opening the {product-title} console in a web browser.
====
+
[source, terminal]
[source,terminal]
----
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
----

View File

@@ -44,7 +44,7 @@ spec:
. Monitor the progress of the `VolumeSnapshotBackup` CRs by completing the following steps:
.. To check the progress of all the `VolumeSnapshotBackup` CRs, run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get vsb -n <app_ns>
----
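To follow progress continuously instead of re-running the command, the standard watch flag can be used:
[source,terminal]
----
$ oc get vsb -n <app_ns> -w
----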

View File

@@ -56,7 +56,7 @@ spec:
. Monitor the progress of the `VolumeSnapshotRestore` CRs by doing the following:
.. To check the progress of all the `VolumeSnapshotRestore` CRs, run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get vsr -n <app_ns>
----

View File

@@ -11,7 +11,7 @@ To run a pod (resulting from pipeline run or task run) with the `privileged` sec
* Configure the associated user account or service account to have an explicit SCC. You can perform the configuration using any of the following methods:
** Run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc adm policy add-scc-to-user <scc-name> -z <service-account-name>
----
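As a purely illustrative example, assuming the default `pipeline` service account and the `privileged` SCC:
[source,terminal]
----
$ oc adm policy add-scc-to-user privileged -z pipeline
----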

View File

@@ -12,12 +12,12 @@ You can install the `cert-manager` tool to manage the lifecycle of TLS certifica
. Create the root cluster issuer:
+
[source, terminal]
[source,terminal]
----
$ oc apply -f cluster-issuer.yaml
----
+
[source, terminal]
[source,terminal]
----
$ oc apply -n istio-system -f istio-ca.yaml
----
@@ -98,7 +98,7 @@ The namespace of the `selfsigned-root-issuer` issuer and `root-ca` certificate i
. Install `istio-csr`:
+
[source, terminal]
[source,terminal]
----
$ helm install istio-csr jetstack/cert-manager-istio-csr \
-n istio-system \
@@ -142,7 +142,7 @@ app:
. Deploy SMCP:
+
[source, terminal]
[source,terminal]
----
$ oc apply -f mesh.yaml -n istio-system
----
@@ -199,24 +199,24 @@ Use the sample `httpbin` service and `sleep` app to check mTLS traffic from ingr
. Deploy the HTTP and `sleep` apps:
+
[source, terminal]
[source,terminal]
----
$ oc new-project <namespace>
----
+
[source, terminal]
[source,terminal]
----
$ oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin.yaml
----
+
[source, terminal]
[source,terminal]
----
$ oc apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/sleep/sleep.yaml
----
. Verify that `sleep` can access the `httpbin` service:
+
[source, terminal]
[source,terminal]
----
$ oc exec "$(oc get pod -l app=sleep -n <namespace> \
-o jsonpath={.items..metadata.name})" -c sleep -n <namespace> -- \
@@ -225,28 +225,28 @@ $ oc exec "$(oc get pod -l app=sleep -n <namespace> \
----
+
.Example output:
[source, terminal]
[source,terminal]
----
200
----
. Check mTLS traffic from the ingress gateway to the `httpbin` service:
+
[source, terminal]
[source,terminal]
----
$ oc apply -n <namespace> -f https://raw.githubusercontent.com/maistra/istio/maistra-2.4/samples/httpbin/httpbin-gateway.yaml
----
. Get the `istio-ingressgateway` route:
+
[source, terminal]
[source,terminal]
----
INGRESS_HOST=$(oc -n istio-system get routes istio-ingressgateway -o jsonpath='{.spec.host}')
----
. Verify mTLS traffic from the ingress gateway to the `httpbin` service:
+
[source, terminal]
[source,terminal]
----
$ curl -s -I http://$INGRESS_HOST/headers -o /dev/null -w "%{http_code}" -s
----

View File

@@ -17,7 +17,7 @@ Both ZRS and PremiumV2_LRS have some region limitations. For information about t
.Prerequisites
* Access to an {product-title} cluster with administrator rights
* Access to an {product-title} cluster with administrator rights
.Procedure
@@ -25,7 +25,7 @@ Use the following steps to create a storage class with a storage account type.
. Create a storage class designating the storage account type using a YAML file similar to the following:
+
[source, terminal]
[source,terminal]
--
$ oc create -f - << EOF
apiVersion: storage.k8s.io/v1

View File

@@ -8,7 +8,7 @@
This procedure explains how to configure the AWS EFS CSI Driver Operator with {product-title} on AWS Security Token Service (STS).
Perform this procedure before you have installed the AWS EFS CSI Operator, but not yet installed the AWS EFS CSI driver as part of the _Installing the AWS EFS CSI Driver Operator_ procedure.
Perform this procedure before you have installed the AWS EFS CSI Operator, but not yet installed the AWS EFS CSI driver as part of the _Installing the AWS EFS CSI Driver Operator_ procedure.
[IMPORTANT]
====
@@ -56,7 +56,7 @@ spec:
. Run the `ccoctl` tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system (`<path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml`).
+
[source, terminal]
[source,terminal]
----
$ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
----
@@ -71,14 +71,14 @@ $ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-
+
.Example
+
[source, terminal]
[source,terminal]
----
$ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created
2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
@@ -87,21 +87,21 @@ $ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credr
. Create the AWS EFS cloud credentials and secret:
+
[source, terminal]
[source,terminal]
----
$ oc create -f <path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
----
+
.Example
+
[source, terminal]
[source,terminal]
----
$ oc create -f /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
secret/aws-efs-cloud-credentials created
----

View File

@@ -13,7 +13,7 @@ The following information provides guidance on how to troubleshoot issues with A
* To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc adm must-gather
[must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
@@ -24,14 +24,14 @@ $ oc adm must-gather
* To show AWS EFS Operator errors, view the `ClusterCSIDriver` status:
+
[source, terminal]
[source,terminal]
----
$ oc get clustercsidriver efs.csi.aws.com -o yaml
----
* If a volume cannot be mounted to a pod (as shown in the output of the following command):
+
[source, terminal]
[source,terminal]
----
$ oc describe pod
...

View File

@@ -14,10 +14,10 @@
To manage the storage class using the CLI, run the following command:
[source, terminal]
[source,terminal]
----
oc patch clustercsidriver $DRIVERNAME --type=merge -p "{\"spec\":{\"storageClassState\":\"${STATE}\"}}" <1>
----
<1> Where `${STATE}` is "Removed" or "Managed" or "Unmanaged".
<1> Where `${STATE}` is "Removed" or "Managed" or "Unmanaged".
+
Where `$DRIVERNAME` is the provisioner name. You can find the provisioner name by running the command `oc get sc`.
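A concrete, hypothetical invocation (using the AWS EBS driver name as an example) might look like:
[source,terminal]
----
$ oc patch clustercsidriver ebs.csi.aws.com --type=merge -p '{"spec":{"storageClassState":"Unmanaged"}}'
----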

View File

@@ -38,7 +38,7 @@ After turning Technology Preview features on by using feature gates, they cannot
$ oc get co storage
----
+
[source, terminal]
[source,terminal]
----
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
storage 4.10.0-0.nightly-2021-11-15-034648 True False False 4m36s
@@ -56,7 +56,7 @@ $ oc get pod -n openshift-cluster-csi-drivers
----
+
ifdef::vsphere[]
[source, terminal]
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
vmware-vsphere-csi-driver-controller-5646dbbf54-cnsx7 9/9 Running 0 4h29m
@@ -69,7 +69,7 @@ vmware-vsphere-csi-driver-operator-7c7fc474c-p544t 1/1 Running 0
----
endif::vsphere[]
ifdef::azure[]
[source, terminal]
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
azure-disk-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m
@@ -83,7 +83,7 @@ azure-disk-csi-driver-operator-7d966fc6c5-x74x5 1/1 Running 0
----
endif::azure[]
ifdef::azure_file[]
[source, terminal]
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
azure-file-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m

View File

@@ -19,7 +19,7 @@ To allow volumes to detach automatically from a node after a non-graceful node s
. Ensure that the node is shutdown by running the following command and checking the status:
+
[source, terminal]
[source,terminal]
----
oc get node <node name> <1>
----
@@ -32,7 +32,7 @@ If the node is not completely shut down, do not proceed with tainting the node.
+
. Taint the corresponding node object by running the following command:
+
[source, terminal]
[source,terminal]
----
oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute <1>
----
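After the node is recovered and rejoins the cluster, the taint can be removed again with the standard trailing-dash syntax. A sketch:
[source,terminal]
----
oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute-
----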

View File

@@ -23,14 +23,14 @@ For more information about vSphere categories and tags, see the VMware vSphere d
* Specify the `openshift-zone` and `openshift-region` categories that you created earlier.
* Set `driverType` to `vSphere`.
+
[source, terminal]
[source,terminal]
----
~ $ oc edit clustercsidriver csi.vsphere.vmware.com -o yaml
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
@@ -42,7 +42,7 @@ spec:
observedConfig: null
operatorLogLevel: Normal
unsupportedConfigOverrides: null
driverConfig:
driverConfig:
driverType: vSphere <1>
vSphere:
topologyCategories: <2>
@@ -54,14 +54,14 @@ spec:
. Verify that `CSINode` object has topology keys by running the following commands:
+
[source, terminal]
[source,terminal]
----
~ $ oc get csinode
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
NAME DRIVERS AGE
co8-4s88d-infra-2m5vd 1 27m
@@ -73,14 +73,14 @@ co8-4s88d-worker-mbb46 1 47m
co8-4s88d-worker-zlk7d 1 47m
----
+
[source, terminal]
[source,terminal]
----
~ $ oc get csinode co8-4s88d-worker-j2hmg -o yaml
----
+
.Example output
+
[source, terminal]
[source,terminal]
----
...
spec:
@@ -102,9 +102,9 @@ spec:
. Create a tag to assign to datastores across failure domains:
+
When a {product-title} cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
When a {product-title} cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
+
.. In vCenter, create a category for tagging the datastores. For example, `openshift-zonal-datastore-cat`. You can use any other category name, provided that the category is used only for tagging datastores participating in the {product-title} cluster. Also, ensure that `StoragePod`, `Datastore`, and `Folder` are selected as Associable Entities for the created category.
.. In vCenter, create a category for tagging the datastores. For example, `openshift-zonal-datastore-cat`. You can use any other category name, provided that the category is used only for tagging datastores participating in the {product-title} cluster. Also, ensure that `StoragePod`, `Datastore`, and `Folder` are selected as Associable Entities for the created category.
.. In vCenter, create a tag that uses the previously created category. This example uses the tag name `openshift-zonal-datastore`.
.. Assign the previously created tag (in this example `openshift-zonal-datastore`) to each datastore in a failure domain that would be considered for dynamic provisioning.
+
@@ -119,14 +119,14 @@ You can use any names you like for categories and tags. The names used in this e
.. Click *CREATE*.
.. Type a name for the storage policy.
.. For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the `openshift-zonal-datastore` tag).
+
+
The datastores are listed in the storage compatibility table.
. Create a new storage class that uses the new zoned storage policy:
.. Click *Storage* > *StorageClasses*.
.. On the *StorageClasses* page, click *Create StorageClass*.
.. Type a name for the new storage class in *Name*.
.. Under *Provisioner*, select *csi.vsphere.vmware.com*.
.. Under *Provisioner*, select *csi.vsphere.vmware.com*.
.. Under *Additional parameters*, for the StoragePolicyName parameter, set *Value* to the name of the new zoned storage policy that you created earlier.
.. Click *Create*.
+

View File

@@ -9,14 +9,14 @@
Persistent volume claims (PVCs) and PVs created from the topology-aware storage class are truly zonal and should use the datastore in their respective zone, depending on how the pods are scheduled:
[source, terminal]
[source,terminal]
----
~ $ oc get pv <pv-name> -o yaml
----
.Example output
[source, terminal]
[source,terminal]
----
...
nodeAffinity:
@@ -24,7 +24,7 @@ nodeAffinity:
nodeSelectorTerms:
- matchExpressions:
- key: topology.csi.vmware.com/openshift-zone <1>
operator: In
operator: In
values:
- <openshift-zone>
- key: topology.csi.vmware.com/openshift-region

View File

@@ -13,7 +13,7 @@ For a user-provisioned infrastructure (UPI) installation, you must manually dest
. Destroy the cluster and remove all the resources associated with the cluster, including the hidden installer state files in the installation directory:
+
[source, terminal]
[source,terminal]
----
$ ./openshift-install destroy cluster --dir <installation_directory> <1>
----
@@ -22,7 +22,7 @@ definition files that the installation program creates.
. Before reinstalling the cluster, delete the installation directory:
+
[source, terminal]
[source,terminal]
----
$ rm -rf <installation_directory>
----

View File

@@ -22,7 +22,7 @@ To access your cluster using an IDP account:
. Add an IDP.
.. The following command creates an IDP backed by GitHub. After running the command, follow the interactive prompts from the output to access your link:https://github.com/settings/developers[GitHub developer settings] and configure a new OAuth application.
+
[source, terminal]
[source,terminal]
----
$ rosa create idp --cluster=<cluster_name> --interactive
----

View File

@@ -45,7 +45,7 @@ $ rosa create cluster --private-link --multi-az --cluster-name=<cluster-name> [-
. Enter the following command to check the status of your cluster. During cluster creation, the `State` field from the output will transition from `pending` to `installing`, and finally to `ready`.
+
[source, terminal]
[source,terminal]
----
$ rosa describe cluster --cluster=<cluster_name>
----
@@ -57,7 +57,7 @@ If installation fails or the `State` field does not change to `ready` after 40 m
. Enter the following command to follow the OpenShift installer logs to track the progress of your cluster:
+
[source, terminal]
[source,terminal]
----
$ rosa logs install --cluster=<cluster_name> --watch
----

View File

@@ -30,7 +30,7 @@ Multiple availability zones (AZ) are recommended for production workloads. The d
+
* To create your cluster with the default cluster settings:
+
[source, terminal]
[source,terminal]
----
$ rosa create cluster --cluster-name=<cluster_name>
----
@@ -46,7 +46,7 @@ I: To determine when your cluster is Ready, run `rosa describe cluster rh-rosa-t
----
* To create a cluster using interactive prompts:
+
[source, terminal]
[source,terminal]
----
$ rosa create cluster --interactive
----
@@ -58,7 +58,7 @@ $ rosa create cluster --interactive
. Enter the following command to check the status of your cluster. During cluster creation, the `State` field from the output will transition from `pending` to `installing`, and finally to `ready`.
+
[source, terminal]
[source,terminal]
----
$ rosa describe cluster --cluster=<cluster_name>
----
@@ -91,7 +91,7 @@ If installation fails or the `State` field does not change to `ready` after 40 m
. Track the progress of the cluster creation by watching the OpenShift installer logs:
+
[source, terminal]
[source,terminal]
----
$ rosa logs install --cluster=<cluster_name> --watch
----

View File

@@ -95,7 +95,7 @@ ifndef::sts[]
. Enter the following command to delete a cluster and watch the logs, replacing `<cluster_name>` with the name or ID of your cluster:
endif::sts[]
+
[source, terminal]
[source,terminal]
----
$ rosa delete cluster --cluster=<cluster_name> --watch
----
@@ -110,7 +110,7 @@ endif::sts[]
ifndef::sts[]
. To clean up your CloudFormation stack, enter the following command:
+
[source, terminal]
[source,terminal]
----
$ rosa init --delete
----

View File

@@ -22,7 +22,7 @@ AWS VPC Peering, VPN, DirectConnect, or link:https://docs.aws.amazon.com/whitepa
Enter the following command to enable the `--private` option on an existing cluster.
[source, terminal]
[source,terminal]
----
$ rosa edit cluster --cluster=<cluster_name> --private
----

View File

@@ -22,7 +22,7 @@ AWS VPC Peering, VPN, DirectConnect, or link:https://docs.aws.amazon.com/whitepa
Enter the following command to create a new private cluster.
[source, terminal]
[source,terminal]
----
$ rosa create cluster --cluster-name=<cluster_name> --private
----

View File

@@ -33,7 +33,7 @@ endif::[]
. Delete a cluster and watch the logs, replacing `<cluster_name>` with the name or ID of your cluster:
+
[source, terminal]
[source,terminal]
----
$ rosa delete cluster --cluster=<cluster_name> --watch
----

View File

@@ -46,7 +46,7 @@ $ rosa create machinepool -c <cluster-name> -i
+
. Add the subnet and instance type for the machine pool in the ROSA CLI. After several minutes, the cluster will provision the nodes.
+
[source, terminal]
[source,terminal]
----
I: Enabling interactive mode <1>
? Machine pool name: xx-lz-xx <2>

View File

@@ -8,7 +8,7 @@
If you have already created your first cluster and users, this list can serve as a command quick reference list when creating additional clusters and users.
[source, terminal]
[source,terminal]
----
## Configures your AWS account and ensures everything is set up correctly
$ rosa init

View File

@@ -194,7 +194,7 @@ settingsRef:
. Create the `ScanSettingBinding` object by running:
+
[source, terminal]
[source,terminal]
----
$ oc create -f <file-name>.yaml -n openshift-compliance
----
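Creating the binding triggers the scans. One hedged way to confirm that they were generated is to list the resulting compliance suites:
[source,terminal]
----
$ oc get compliancesuites -n openshift-compliance
----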

View File

@@ -17,7 +17,7 @@ You can configure the Cluster Samples Operator by editing the file with the prov
* Access the Cluster Samples Operator configuration:
+
[source, terminal]
[source,terminal]
----
$ oc edit configs.samples.operator.openshift.io/cluster -o yaml
----

View File

@@ -21,7 +21,7 @@ You can use the `kn source kafka create` command to create a Kafka source by usi
. To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs:
+
[source, terminal]
[source,terminal]
----
$ kn service create event-display \
--image quay.io/openshift-knative/knative-eventing-sources-event-display
@@ -46,13 +46,13 @@ The `--servers`, `--topics`, and `--consumergroup` options specify the connectio
. Optional: View details about the `KafkaSource` CR you created:
+
[source, terminal]
[source,terminal]
----
$ kn source kafka describe <kafka_source_name>
----
+
.Example output
[source, terminal]
[source,terminal]
----
Name: example-kafka-source
Namespace: kafka

View File

@@ -74,13 +74,13 @@ $ oc apply -f <filename>
* Verify that the Kafka event source was created by entering the following command:
+
[source, terminal]
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source, terminal]
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
kafkasource-kafka-source-5ca0248f-... 1/1 Running 0 13m

View File

@@ -26,28 +26,28 @@ If you do not want to allow access to your Knative application from all namespac
.. Label the `knative-serving` namespace:
+
[source, terminal]
[source,terminal]
----
$ oc label namespace knative-serving knative.openshift.io/system-namespace=true
----
.. Label the `knative-serving-ingress` namespace:
+
[source, terminal]
[source,terminal]
----
$ oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true
----
.. Label the `knative-eventing` namespace:
+
[source, terminal]
[source,terminal]
----
$ oc label namespace knative-eventing knative.openshift.io/system-namespace=true
----
.. Label the `knative-kafka` namespace:
+
[source, terminal]
[source,terminal]
----
$ oc label namespace knative-kafka knative.openshift.io/system-namespace=true
----
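Because the four commands differ only in the namespace, they can also be run as a single loop:
[source,terminal]
----
$ for ns in knative-serving knative-serving-ingress knative-eventing knative-kafka; do oc label namespace $ns knative.openshift.io/system-namespace=true; done
----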

View File

@@ -214,7 +214,7 @@ tasks 32706
----
+
.Example output
[source, terminal]
[source,terminal]
----
...
Capacity:
@@ -245,7 +245,7 @@ Allocated resources:
+
This VM has two CPU cores. The `system-reserved` setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the `Node Allocatable` amount. You can see that `Allocatable CPU` is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled:
+
[source, terminal]
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
cpumanager-6cqz7 1/1 Running 0 33m

View File

@@ -13,14 +13,14 @@ The Cluster Version Operator (CVO) implements this logical order through the con
These dependencies are encoded in the filenames of the manifests in the release image:
[source, terminal]
[source,terminal]
----
0000_<runlevel>_<component>_<manifest-name>.yaml
----
For example:
[source, terminal]
[source,terminal]
----
0000_03_config-operator_01_proxy.crd.yaml
----

View File

@@ -76,7 +76,7 @@ $ oc create -f <service_name>.yaml
. Start the VM. If the VM is already running, restart it.
. Query the `Service` object to verify that it is available:
+
[source, terminal]
[source,terminal]
----
$ oc get service -n example-namespace
----

View File

@@ -42,14 +42,14 @@ volumes:
. Apply the changes:
* If the VM is not running, run the following command:
+
[source, terminal]
[source,terminal]
----
$ virtctl start <vm>
----
* If the VM is running, reboot the VM or run the following command:
+
[source, terminal]
[source,terminal]
----
$ oc apply -f <vm.yaml>
----

View File

@@ -13,13 +13,13 @@ To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values
.Procedure
. Run the `lspci` command to obtain the `vendor-ID` and the `device-ID` for the PCI device.
+
[source, terminal]
[source,terminal]
----
$ lspci -nnv | grep -i nvidia
----
+
.Example output
[source, terminal]
[source,terminal]
----
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
----
@@ -80,7 +80,7 @@ $ oc get MachineConfig
----
+
.Example output
[source, terminal]
[source,terminal]
----
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h