mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

Virtualization docs fixes during ROSA review 2

Michael Burke
2023-12-21 15:42:21 -05:00
committed by openshift-cherrypick-robot
parent e041e624cb
commit bee2c6fdcd
19 changed files with 97 additions and 65 deletions


@@ -29,7 +29,7 @@ spec:
- <namespace> <1>
excludedNamespaces: <2>
- <namespace>
includedResources: []
includedResources:
- pods <3>
excludedResources: [] <4>
labelSelector: <5>


@@ -344,7 +344,7 @@ endif::[]
. Click *Create*.
[id="verifying-oadp-installation-1-2_{context}"]
== Verifying the installation
.Verification
. Verify the installation by viewing the {oadp-first} resources by running the following command:
+
@@ -399,9 +399,9 @@ $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
----
$ oc get backupStorageLocation -n openshift-adp
----
.Example output
[source,yaml]
+
.Example output
[source,terminal]
----
NAME PHASE LAST VALIDATED AGE DEFAULT
dpa-sample-1 Available 1s 3d16h true


@@ -369,8 +369,7 @@ endif::[]
. Click *Create*.
[id="verifying-oadp-installation-1-3_{context}"]
== Verifying the installation
.Verification
. Verify the installation by viewing the {oadp-first} resources by running the following command:
+


@@ -23,7 +23,7 @@ kind: DataProtectionApplication
metadata:
name: <dpa_sample>
spec:
...
# ...
backupLocations:
- name: default
velero:
@@ -35,7 +35,7 @@ spec:
caCert: <base64_encoded_cert_string> <1>
config:
insecureSkipTLSVerify: "false" <2>
...
# ...
----
<1> Specify the Base64-encoded CA certificate string.
<2> The `insecureSkipTLSVerify` configuration can be set to either `"true"` or `"false"`. If set to `"true"`, SSL/TLS security is disabled. If set to `"false"`, SSL/TLS security is enabled.


@@ -25,7 +25,7 @@ kind: DataProtectionApplication
metadata:
name: <dpa_sample>
spec:
...
# ...
configuration:
velero:
podConfig:
@@ -46,4 +46,4 @@ spec:
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
====
====
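Kopia's higher resource use can be accommodated through the `podConfig` field shown in the hunk above. As a sketch only (the field values here are illustrative and not taken from this commit), a `DataProtectionApplication` that selects Kopia and raises the node agent's resource requests might look like:

[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia # select Kopia as the file system backup uploader
      podConfig:
        resourceAllocations: # illustrative values; tune for your workload
          requests:
            cpu: "1"
            memory: 1Gi
----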


@@ -65,7 +65,7 @@ endif::[]
The *VirtualMachine details* page displays the progress of the VM creation.
.Verification
. Click the *Scripts* tab on the *Configuration* tab.
* Click the *Scripts* tab on the *Configuration* tab.
+
The secret name is displayed in the *Authorized SSH key* section.
@@ -74,4 +74,4 @@ ifeval::["{context}" == "static-key"]
endif::[]
ifeval::["{context}" == "dynamic-key"]
:!dynamic-key:
endif::[]
endif::[]


@@ -156,7 +156,7 @@ $ virtctl start vm example-vm
----
.Verification
. Get the VM configuration:
* Get the VM configuration:
+
[source,terminal]
----
@@ -194,4 +194,4 @@ ifeval::["{context}" == "static-key"]
endif::[]
ifeval::["{context}" == "dynamic-key"]
:!dynamic-key:
endif::[]
endif::[]


@@ -20,7 +20,7 @@ You can create a virtual machine (VM) snapshot for an offline or online VM by cr
+
[source,yaml]
----
apiVersion: snapshot.kubevirt.io/v1beta1
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
name: <snapshot_name>
@@ -76,7 +76,7 @@ $ oc describe vmsnapshot <snapshot_name>
.Example output
[source,yaml]
----
apiVersion: snapshot.kubevirt.io/v1beta1
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
creationTimestamp: "2020-09-30T14:41:51Z"
@@ -86,7 +86,7 @@ metadata:
name: mysnap
namespace: default
resourceVersion: "3897"
selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot
selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot
uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d
spec:
source:


@@ -24,15 +24,23 @@ include::snippets/technology-preview.adoc[]
.Sample guest agent ping probe
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
annotations:
name: fedora-vm
namespace: example-namespace
# ...
spec:
readinessProbe:
guestAgentPing: {} <1>
initialDelaySeconds: 120 <2>
periodSeconds: 20 <3>
timeoutSeconds: 10 <4>
failureThreshold: 3 <5>
successThreshold: 3 <6>
template:
spec:
readinessProbe:
guestAgentPing: {} <1>
initialDelaySeconds: 120 <2>
periodSeconds: 20 <3>
timeoutSeconds: 10 <4>
failureThreshold: 3 <5>
successThreshold: 3 <6>
# ...
----
<1> The guest agent ping probe to connect to the VM.


@@ -18,18 +18,26 @@ Define an HTTP liveness probe by setting the `spec.livenessProbe.httpGet` field
.Sample liveness probe with an HTTP GET test
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
annotations:
name: fedora-vm
namespace: example-namespace
# ...
spec:
livenessProbe:
initialDelaySeconds: 120 <1>
periodSeconds: 20 <2>
httpGet: <3>
port: 1500 <4>
path: /healthz <5>
httpHeaders:
- name: Custom-Header
value: Awesome
timeoutSeconds: 10 <6>
template:
spec:
livenessProbe:
initialDelaySeconds: 120 <1>
periodSeconds: 20 <2>
httpGet: <3>
port: 1500 <4>
path: /healthz <5>
httpHeaders:
- name: Custom-Header
value: Awesome
timeoutSeconds: 10 <6>
# ...
----
<1> The time, in seconds, after the VM starts before the liveness probe is initiated.


@@ -17,20 +17,28 @@ Define an HTTP readiness probe by setting the `spec.readinessProbe.httpGet` fiel
.Sample readiness probe with an HTTP GET test
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
annotations:
name: fedora-vm
namespace: example-namespace
# ...
spec:
readinessProbe:
httpGet: <1>
port: 1500 <2>
path: /healthz <3>
httpHeaders:
- name: Custom-Header
value: Awesome
initialDelaySeconds: 120 <4>
periodSeconds: 20 <5>
timeoutSeconds: 10 <6>
failureThreshold: 3 <7>
successThreshold: 3 <8>
template:
spec:
readinessProbe:
httpGet: <1>
port: 1500 <2>
path: /healthz <3>
httpHeaders:
- name: Custom-Header
value: Awesome
initialDelaySeconds: 120 <4>
periodSeconds: 20 <5>
timeoutSeconds: 10 <6>
failureThreshold: 3 <7>
successThreshold: 3 <8>
# ...
----
<1> The HTTP GET request to perform to connect to the VM.


@@ -18,14 +18,22 @@ Define a TCP readiness probe by setting the `spec.readinessProbe.tcpSocket` fiel
.Sample readiness probe with a TCP socket test
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
annotations:
name: fedora-vm
namespace: example-namespace
# ...
spec:
readinessProbe:
initialDelaySeconds: 120 <1>
periodSeconds: 20 <2>
tcpSocket: <3>
port: 1500 <4>
timeoutSeconds: 10 <5>
template:
spec:
readinessProbe:
initialDelaySeconds: 120 <1>
periodSeconds: 20 <2>
tcpSocket: <3>
port: 1500 <4>
timeoutSeconds: 10 <5>
# ...
----
<1> The time, in seconds, after the VM starts before the readiness probe is initiated.


@@ -117,7 +117,7 @@ networks:
----
$ oc create -f vmi-pxe-boot.yaml
----
+
.Example output
[source,terminal]
----
@@ -155,7 +155,7 @@ In this case, we used `eth1` for the PXE boot, without an IP address. The other
----
$ ip addr
----
+
.Example output
[source,terminal]
----
@@ -163,3 +163,4 @@ $ ip addr
3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
----


@@ -23,7 +23,7 @@ To avoid errors, do not restart a virtual machine while it has a status of *Impo
* To stay on this page, where you can perform actions on multiple virtual machines:
.. Click the Options menu {kebab} located at the far right end of the row.
.. Click the Options menu {kebab} located at the far right end of the row and click *Restart*.
* To view comprehensive information about the selected virtual machine before
you restart it:


@@ -51,7 +51,7 @@ $ oc get vmrestore <vm_restore>
.Example output
[source, yaml]
----
apiVersion: snapshot.kubevirt.io/v1beta1
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
creationTimestamp: "2020-09-30T14:46:27Z"
@@ -66,7 +66,7 @@ ownerReferences:
name: my-vm
uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
resourceVersion: "5512"
selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinerestores/my-vmrestore
selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore
uid: 71c679a8-136e-46b0-b9b5-f57175a6a041
spec:
target:


@@ -45,9 +45,7 @@ example:
+
[source,terminal]
----
$ oc patch storageprofile local --type=merge -p '{"spec": \
{"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], \
"volumeMode": "Filesystem"}]}}'
$ oc patch storageprofile <storage_class> --type=merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
----
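To confirm that the patch took effect, assuming the same storage class name, you could inspect the updated profile (a sketch, not part of this commit):

[source,terminal]
----
$ oc get storageprofile <storage_class> -o jsonpath='{.spec.claimPropertySets}'
----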
If you cannot resolve the issue, log in to the


@@ -24,7 +24,6 @@ Kubernetes also supports authentication using client certificates, instead of a
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]'
# ...
----
. Generate a token by running the following command:


@@ -14,10 +14,11 @@ Snapshot indications are contextual information about online virtual machine (VM
.Procedure
. Display the output from the snapshot indications by doing one of the following:
* For snapshots created by using the command line, view indicator output in the `status` stanza of the `VirtualMachineSnapshot` object YAML.
* For snapshots created by using the web console, click *VirtualMachineSnapshot* -> *Status* in the *Snapshot details* screen.
. Display the output from the snapshot indications by performing one of the following actions:
* Use the command line to view indicator output in the `status` stanza of the `VirtualMachineSnapshot` object YAML.
* In the web console, click *VirtualMachineSnapshot* -> *Status* in the *Snapshot details* screen.
. Verify the status of your online VM snapshot:
. Verify the status of your online VM snapshot by viewing the values of the `status.indications` parameter:
* `Online` indicates that the VM was running during online snapshot creation.
* `GuestAgent` indicates that the QEMU guest agent was running during online snapshot creation.
* `NoGuestAgent` indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error.
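The indications described above appear as a list in the snapshot's status. A hypothetical `status` stanza (field values illustrative) might read:

[source,yaml]
----
status:
  indications:
  - Online        # the VM was running during online snapshot creation
  - NoGuestAgent  # the QEMU guest agent was not running
  readyToUse: true
----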


@@ -38,7 +38,9 @@ spec:
.Verification
* Verify that the VM is using the custom scheduler specified in the `VirtualMachine` manifest by checking the `virt-launcher` pod events:
.. View the list of pods in your cluster by entering the following command:
+
[source,terminal]