mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

CNV-62649: Updated CNV modules to pass DITA validation

Jaromir Hradilek
2025-11-14 00:08:17 +01:00
parent c956b2d0a6
commit bcf009aebe
442 changed files with 1253 additions and 591 deletions

View File

@@ -12,7 +12,8 @@ Before you can enable NUMA functionality with {VirtProductName} VMs, you must en
* Worker nodes must have huge pages enabled.
* The `KubeletConfig` object on worker nodes must be configured with the `cpuManagerPolicy: static` spec to guarantee dedicated CPU allocation, which is a prerequisite for NUMA pinning.
+
.Example `cpuManagerPolicy: static` spec
Example `cpuManagerPolicy: static` spec:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
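# The lines below complete the truncated example as a hedged sketch only; the
# object name and machine config pool label are illustrative and are not taken
# from this commit.
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
----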

View File

@@ -6,6 +6,7 @@
[id="virt-about-aaq-operator_{context}"]
= About the AAQ Operator
[role="_abstract"]
The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native `ResourceQuota` object in the {product-title} platform.
In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native `ResourceQuota` object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for {VirtProductName} workloads.
@@ -21,7 +22,8 @@ The AAQ Operator introduces two new API objects defined as custom resource defin
* `ApplicationAwareResourceQuota`: Sets aggregate quota restrictions enforced per namespace. The `ApplicationAwareResourceQuota` API is compatible with the native `ResourceQuota` object and shares the same specification and status definitions.
+
.Example manifest
Example manifest:
+
[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
@@ -41,7 +43,8 @@ spec:
* `ApplicationAwareClusterResourceQuota`: Mirrors the `ApplicationAwareResourceQuota` object at a cluster scope. It is compatible with the native `ClusterResourceQuota` API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing the `spec.selector.labels` or `spec.selector.annotations` fields.
+
.Example manifest
Example manifest:
+
[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
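# The lines below complete the truncated example as a hedged sketch only; the
# object name, selector labels, and quota values are illustrative and are not
# taken from this commit.
kind: ApplicationAwareClusterResourceQuota
metadata:
  name: example-cluster-quota
spec:
  selector:
    labels:
      matchLabels:
        kubernetes.io/metadata.name: example-namespace
  quota:
    hard:
      requests.cpu: "10"
      requests.memory: 20Gi
----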

View File

@@ -6,8 +6,9 @@
[id="virt-about-application-consistent-backups_{context}"]
= About application-consistent snapshots and backups
You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can either configure a script on a Linux VM or register on a Windows VM to be notified when a snapshot or backup is due to begin.
[role="_abstract"]
You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can configure a script on a Linux VM or register on a Windows VM to be notified when a snapshot or backup is due to begin.
On a Linux VM, freeze and thaw processes trigger automatically when a snapshot is taken or a backup is started by using, for example, a plugin from Velero or another backup vendor. The freeze process, performed by QEMU Guest Agent (QEMU GA) freeze hooks, ensures that before the snapshot or backup of a VM occurs, all of the VM's filesystems are frozen and each appropriately configured application is informed that a snapshot or backup is about to start. This notification affords each application the opportunity to quiesce its state. Depending on the application, quiescing might involve temporarily refusing new requests, finishing in-progress operations, and flushing data to disk. The operating system is then directed to quiesce the filesystems by flushing outstanding writes to disk and freezing new write activity. All new connection requests are refused. When all applications have become inactive, the QEMU GA freezes the filesystems, and a snapshot is taken or a backup initiated. After the taking of the snapshot or start of the backup, the thawing process begins. Filesystems writing is reactivated and applications receive notification to resume normal operations.
The same cycle of freezing and thawing is available on a Windows VM. Applications register with the Volume Shadow Copy Service (VSS) to receive notifications that they should flush out their data because a backup or snapshot is imminent. Thawing of the applications after the backup or snapshot is complete returns them to an active state. For more details, see the Windows Server documentation about the Volume Shadow Copy Service.

View File

@@ -7,12 +7,13 @@
[id="virt-about-auto-bootsource-updates_{context}"]
= About automatic boot source updates
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides.
[role="_abstract"]
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs.
You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.
By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides. You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.
When the `enableCommonBootImageImport` feature gate is disabled, `DataSource` objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by populating a PVC with an operating system, optionally creating a volume snapshot from the PVC, and then referring to the PVC or volume snapshot from the `DataSource` object.
_Custom_ boot sources that are not provided by {VirtProductName} are not controlled by the feature gate. You must manage them individually by editing the `HyperConverged` custom resource (CR). You can also use this method to manage individual system-defined boot sources.
Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {product-title} web console.

View File

@@ -8,6 +8,7 @@
[id="virt-about-block-pvs_{context}"]
= About block persistent volumes
[role="_abstract"]
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes
do not have a file system and can provide performance benefits for
virtual machines by reducing overhead.
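For illustration, a minimal sketch of a local block PV manifest; the storage class name, device path, and node name are hypothetical placeholders, not values from this module:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  storageClassName: local-storage
  volumeMode: Block          # raw block device, no file system
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /dev/sdb           # hypothetical raw device on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01
----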

View File

@@ -6,6 +6,7 @@
[id="virt-about-cdi-operator_{context}"]
= About the Containerized Data Importer (CDI) Operator
[role="_abstract"]
The CDI Operator, `cdi-operator`, manages CDI and its related resources. CDI imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume.
image::cnv_components_cdi-operator.png[cdi-operator components]

View File

@@ -6,6 +6,9 @@
[id="about-changing-removing-mediated-devices_{context}"]
= About changing and removing mediated devices
[role="_abstract"]
As an administrator, you can change or remove mediated devices by editing the `HyperConverged` custom resource (CR).
You can reconfigure or remove mediated devices in several ways:
* Edit the `HyperConverged` CR and change the contents of the `mediatedDeviceTypes` stanza.
@@ -17,4 +20,4 @@ You can reconfigure or remove mediated devices in several ways:
[NOTE]
====
If you remove the device information from the `spec.permittedHostDevices` stanza without also removing it from the `spec.mediatedDevicesConfiguration` stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.
====
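For illustration, a hedged fragment of the `HyperConverged` CR showing both stanzas discussed above; the mediated device type and resource name are example values only:

[source,yaml]
----
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
    - nvidia-231
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: GRID T4-2Q
      resourceName: nvidia.com/GRID_T4-2Q
----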

View File

@@ -6,12 +6,10 @@
[id="virt-about-cloning_{context}"]
= About cloning
When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods:
[role="_abstract"]
When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the Container Storage Interface (CSI) clone methods: CSI volume cloning or smart cloning. Both methods are efficient but have certain requirements. If the requirements are not met, the CDI uses host-assisted cloning.
* CSI volume cloning
* Smart cloning
Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.
Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.
[id="csi-volume-cloning_{context}"]
== CSI volume cloning
@@ -47,7 +45,7 @@ When the requirements for neither Container Storage Interface (CSI) volume cloni
Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created.
.Example PVC target annotation
Example PVC target annotation:
[source,yaml]
----
@@ -60,7 +58,7 @@ metadata:
cdi.kubevirt.io/cloneType: copy
----
.Example event
Example event:
[source,terminal]
----

View File

@@ -6,6 +6,7 @@
[id="virt-about-cluster-network-addons-operator_{context}"]
= About the Cluster Network Addons Operator
[role="_abstract"]
The Cluster Network Addons Operator, `cluster-network-addons-operator`, deploys networking components on a cluster and manages the related resources for extended network functionality.
image::cnv_components_cluster-network-addons-operator.png[cluster-network-addons-operator components]

View File

@@ -6,7 +6,10 @@
[id="virt-about-control-plane-only-updates_{context}"]
= Control Plane Only updates
Every even-numbered minor version of {product-title} is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next. An EUS-to-EUS update starts with updating {VirtProductName} to the latest z-stream of the next odd-numbered minor version. Next, update {product-title} to the target EUS version. When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version.
[role="_abstract"]
Every even-numbered minor version of {product-title} is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next.
An EUS-to-EUS update starts with updating {VirtProductName} to the latest z-stream of the next odd-numbered minor version. Next, update {product-title} to the target EUS version. When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version.
[NOTE]
====
@@ -29,4 +32,4 @@ Before beginning a Control Plane Only update, you must:
By default, {VirtProductName} automatically updates workloads, such as the `virt-launcher` pod, when you update the {VirtProductName} Operator. You can configure this behavior in the `spec.workloadUpdateStrategy` stanza of the `HyperConverged` custom resource.
====
// link to EUS to EUS docs in assembly due to module limitations
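As a hedged illustration of the `spec.workloadUpdateStrategy` stanza mentioned in the note above; the method list and batch values are examples, not defaults asserted by this module:

[source,yaml]
----
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
    - LiveMigrate
    - Evict
    batchEvictionSize: 10
    batchEvictionInterval: "1m0s"
----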

View File

@@ -6,6 +6,7 @@
[id="virt-about-cpu-and-memory-quota-namespace_{context}"]
= About CPU and memory quotas in a namespace
[role="_abstract"]
A _resource quota_, defined by the `ResourceQuota` object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The `HyperConverged` custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of `0`. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
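If zero-valued defaults are not appropriate for a quota-restricted namespace, the requests and limits for CDI-created pods can be overridden in the `HyperConverged` CR. A hedged fragment, with illustrative values:

[source,yaml]
----
spec:
  resourceRequirements:
    storageWorkloads:
      limits:
        cpu: "500m"
        memory: "2Gi"
      requests:
        cpu: "250m"
        memory: "1Gi"
----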

View File

@@ -6,6 +6,7 @@
[id="virt-about-creating-storage-classes_{context}"]
= About creating storage classes
[role="_abstract"]
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it.
In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the `storagePools` stanza.
@@ -15,4 +16,4 @@ In order to use the hostpath provisioner (HPP) you must create an associated sto
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the `StorageClass` value with `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
====
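For illustration, a hedged sketch of an HPP storage class that references a storage pool and delays volume binding; the names are placeholders:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # bind only when a consuming pod is scheduled
parameters:
  storagePool: my-storage-pool            # must match a pool defined in the HPP CR
----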

View File

@@ -8,7 +8,10 @@
[id="virt-about-datavolumes_{context}"]
= About data volumes
`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification.
[role="_abstract"]
`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC).
You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification.
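For illustration, a hedged sketch of a standalone data volume that imports a disk image over HTTP; the URL and size are placeholders:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-import-dv
spec:
  source:
    http:
      url: "https://mirror.example.com/images/example.qcow2"
  storage:
    resources:
      requests:
        storage: 30Gi
----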
[NOTE]
====

View File

@@ -7,7 +7,10 @@
= About dedicated resources
[role="_abstract"]
When you enable dedicated resources for your virtual machine, your virtual
machine's workload is scheduled on CPUs that will not be used by other
processes. By using dedicated resources, you can improve the performance of the
processes.
By using dedicated resources, you can improve the performance of the
virtual machine and the accuracy of latency predictions.
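For illustration, a hedged fragment of a VM spec that requests dedicated CPU placement; the core count is an example value:

[source,yaml]
----
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true
----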

View File

@@ -6,10 +6,11 @@
[id="virt-about-dr-methods_{context}"]
= About disaster recovery methods
For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase.
[role="_abstract"]
The two primary DR methods for {VirtProductName} are Metropolitan Disaster Recovery (Metro-DR) and Regional-DR.
For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase.
[id="metro-dr_{context}"]
== Metro-DR
@@ -18,4 +19,4 @@ Metro-DR uses synchronous replication. It writes to storage at both the primary
[id="regional-dr_{context}"]
== Regional-DR
Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites.

View File

@@ -6,8 +6,10 @@
[id="virt-about-dv-conditions-and-events_{context}"]
= About data volume conditions and events
You can diagnose data volume issues by examining the output of the `Conditions` and `Events` sections
generated by the command:
[role="_abstract"]
You can diagnose data volume issues by examining the `Conditions` and `Events` sections of the `oc describe` command output.
Run the following command to inspect the data volume:
[source,terminal]
----

View File

@@ -6,7 +6,8 @@
[id="about-fusion-access-san_{context}"]
= About {IBMFusionFirst}
{IBMFusionFirst} is a solution that provides a scalable clustered file system for enterprise storage, primarily designed to offer access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage.
[role="_abstract"]
{IBMFusionFirst} provides a scalable clustered file system for enterprise storage, primarily designed to offer access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage.
This solution is particularly geared towards enterprise storage for {VirtProductName} and leverages existing Storage Area Network (SAN) infrastructure. A SAN is a dedicated network of storage devices that is typically not accessible through the local area network (LAN).

View File

@@ -6,6 +6,7 @@
[id="virt-about-hco-operator_{context}"]
= About the HyperConverged Operator (HCO)
[role="_abstract"]
The HCO, `hco-operator`, provides a single entry point for deploying and managing {VirtProductName} and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators.
image::cnv_components_hco-operator.png[hco-operator components]

View File

@@ -6,6 +6,7 @@
[id="virt-about-hpp-operator_{context}"]
= About the Hostpath Provisioner (HPP) Operator
[role="_abstract"]
The HPP Operator, `hostpath-provisioner-operator`, deploys and manages the multi-node HPP and related resources.
image::cnv_components_hpp-operator.png[hpp-operator components]

View File

@@ -6,6 +6,7 @@
[id="virt-about-instance-types_{context}"]
= About instance types
[role="_abstract"]
An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety that are included when you install {VirtProductName}.
To create a new instance type, you must first create a manifest, either manually or by using the `virtctl` CLI tool. You then create the instance type object by applying the manifest to your cluster.
@@ -31,7 +32,6 @@ Because instance types require defined CPU and memory attributes, {VirtProductNa
You can manually create an instance type manifest. For example:
.Example YAML file with required fields
[source,yaml]
----
apiVersion: instancetype.kubevirt.io/v1beta1
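# The lines below complete the truncated example as a hedged sketch only; the
# object name is hypothetical, and the CPU and memory values simply mirror the
# virtctl command shown later in this module.
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype
spec:
  cpu:
    guest: 2
  memory:
    guest: 256Mi
----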
@@ -49,7 +49,6 @@ spec:
You can create an instance type manifest by using the `virtctl` CLI utility. For example:
.Example `virtctl` command with required fields
[source,terminal]
----
$ virtctl create instancetype --cpu 2 --memory 256Mi

View File

@@ -7,6 +7,7 @@
[id="virt-about-ksm_{context}"]
= About using {VirtProductName} to activate KSM
[role="_abstract"]
You can configure {VirtProductName} to activate kernel samepage merging (KSM) when nodes experience memory overload.
[id="virt-ksm-configuration-methods"]
@@ -14,18 +15,19 @@ You can configure {VirtProductName} to activate kernel samepage merging (KSM) wh
You can enable or disable the KSM activation feature for all nodes by using the {product-title} web console or by editing the `HyperConverged` custom resource (CR). The `HyperConverged` CR supports more granular configuration.
[discrete]
[id="virt-ksm-cr-configuration"]
=== CR configuration
CR configuration::
+
You can configure the KSM activation feature by editing the `spec.configuration.ksmConfiguration` stanza of the `HyperConverged` CR.
+
--
* You enable the feature and configure settings by editing the `ksmConfiguration` stanza.
* You disable the feature by deleting the `ksmConfiguration` stanza.
* You can allow {VirtProductName} to enable KSM on only a subset of nodes by adding node selection syntax to the `ksmConfiguration.nodeLabelSelector` field.
--
+
[NOTE]
====
Even if the KSM activation feature is disabled in {VirtProductName}, an administrator can still enable KSM on nodes that support it.

View File

@@ -6,6 +6,7 @@
[id="virt-about-libguestfs-tools-virtctl-guestfs_{context}"]
= Libguestfs and virtctl guestfs commands
[role="_abstract"]
`Libguestfs` tools help you access and modify virtual machine (VM) disk images. You can use `libguestfs` tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks.
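For instance, the `virtctl guestfs` command described in the next paragraph opens an interactive session, with the libguestfs tools available, against a PVC that holds a VM disk. A hedged invocation, with placeholders for the namespace and claim name:

[source,terminal]
----
$ virtctl guestfs -n <namespace> <pvc_name>
----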
You can also use the `virtctl guestfs` command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter `virt-` on the command line and press the Tab key. For example:

View File

@@ -6,9 +6,10 @@
[id="virt-about-live-migration-permissions_{context}"]
= About live migration permissions
In {VirtProductName} 4.19 and later, live migration operations are restricted to users who are explicitly granted the `kubevirt.io:migrate` cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests, which are represented by `VirtualMachineInstanceMigration` (VMIM) custom resources.
[role="_abstract"]
In {VirtProductName} 4.19 and later, live migration operations are restricted to users who are explicitly granted the `kubevirt.io:migrate` cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests.
Cluster administrators can bind the `kubevirt.io:migrate` role to trusted users or groups at either the namespace or cluster level.
The live migration requests are represented by `VirtualMachineInstanceMigration` (VMIM) custom resources. Cluster administrators can bind the `kubevirt.io:migrate` role to trusted users or groups at either the namespace or cluster level.
Before {VirtProductName} 4.19, namespace administrators had live migration permissions by default. This behavior changed in version 4.19 to prevent unintended or malicious disruptions to infrastructure-critical migration operations.
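As a hedged illustration, one way an administrator might bind the role at the namespace or cluster level; the user and namespace are placeholders, and an equivalent `RoleBinding` or `ClusterRoleBinding` manifest achieves the same result:

[source,terminal]
----
$ oc adm policy add-role-to-user kubevirt.io:migrate <user> -n <namespace>
$ oc adm policy add-cluster-role-to-user kubevirt.io:migrate <user>
----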

View File

@@ -7,6 +7,7 @@
[id="virt-about-nmstate_{context}"]
= About nmstate
[role="_abstract"]
{VirtProductName} uses link:https://nmstate.github.io/[`nmstate`] to report on and configure the state of the node network. This makes it possible to modify network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.
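For illustration, a hedged sketch of a `NodeNetworkConfigurationPolicy` that creates a Linux bridge on every node; the policy, bridge, and port names are placeholders:

[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1
----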
Node networking is monitored and updated by the following objects:

View File

@@ -5,6 +5,7 @@
[id="virt-about-node-labeling-obsolete-cpu-models_{context}"]
= About node labeling for obsolete CPU models
[role="_abstract"]
The {VirtProductName} Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs.
By default, the following CPU models are eliminated from the list of labels generated for the node:
@@ -31,4 +32,4 @@ qemu64
----
====
This predefined list is not visible in the `HyperConverged` CR. You cannot _remove_ CPU models from this list, but you can add to the list by editing the `spec.obsoleteCPUs.cpuModels` field of the `HyperConverged` CR.
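For illustration, a hedged fragment showing how additional models might be appended; the model names are placeholders:

[source,yaml]
----
spec:
  obsoleteCPUs:
    cpuModels:
    - "<obsolete_cpu_model_1>"
    - "<obsolete_cpu_model_2>"
----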

View File

@@ -6,11 +6,8 @@
[id="virt-about-node-placement-virt-components_{context}"]
= About node placement rules for {VirtProductName} components
You can use node placement rules for the following tasks:
* Deploy virtual machines only on nodes intended for virtualization workloads.
* Deploy Operators only on infrastructure nodes.
* Maintain separation between workloads.
[role="_abstract"]
You can use node placement rules to deploy virtual machines only on nodes intended for virtualization workloads, to deploy Operators only on infrastructure nodes, or to maintain separation between workloads.
Depending on the object, you can use one or more of the following rule types:

View File

@@ -6,6 +6,9 @@
[id="virt-about-node-placement-virtualization-components_{context}"]
= About node placement for virtualization components
[role="_abstract"]
You can customize where {VirtProductName} deploys its components by applying node placement rules.
You might want to customize where {VirtProductName} deploys its components to ensure that:
* Virtual machines only deploy on nodes that are intended for virtualization workloads.

View File

@@ -6,7 +6,10 @@
[id="virt-about-node-placement-vms_{context}"]
= About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
[role="_abstract"]
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules.
You might want to do this if:
* You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
* You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
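For the fault-tolerance case, a hedged fragment of a VM spec that uses pod anti-affinity to spread a group of VMs across nodes; the label key and value are placeholders:

[source,yaml]
----
spec:
  template:
    metadata:
      labels:
        anti-affinity-group: example-group
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                anti-affinity-group: example-group
            topologyKey: kubernetes.io/hostname
----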

View File

@@ -6,6 +6,9 @@
[id="virt-about_pci-passthrough_{context}"]
= About preparing a host device for PCI passthrough
To prepare a host device for PCI passthrough by using the CLI, create a `MachineConfig` object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the `permittedHostDevices` field of the `HyperConverged` custom resource (CR). The `permittedHostDevices` list is empty when you first install the {VirtProductName} Operator.
[role="_abstract"]
To prepare a host device for PCI passthrough by using the CLI, create a `MachineConfig` object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU).
Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the `permittedHostDevices` field of the `HyperConverged` custom resource (CR). The `permittedHostDevices` list is empty when you first install the {VirtProductName} Operator.
To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the `HyperConverged` CR.
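For illustration, a hedged fragment of the `permittedHostDevices` stanza; the vendor and device IDs and the resource name are example values:

[source,yaml]
----
spec:
  permittedHostDevices:
    pciHostDevices:
    - pciDeviceSelector: "10DE:1DB6"
      resourceName: "nvidia.com/GV100GL_Tesla_V100"
----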

View File

@@ -6,6 +6,7 @@
[id="virt-about-preallocation_{context}"]
= About preallocation
[role="_abstract"]
The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.
If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type:

View File

@@ -7,6 +7,7 @@
= About readiness and liveness probes
[role="_abstract"]
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A _readiness probe_ determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
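For illustration, a hedged fragment of a VM spec that defines an HTTP readiness probe; the port and timing values are examples only:

[source,yaml]
----
spec:
  template:
    spec:
      readinessProbe:
        httpGet:
          port: 1500
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
----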

View File

@@ -7,6 +7,7 @@
= About reclaiming statically provisioned persistent volumes
[role="_abstract"]
When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.
You can then re-use the PV configuration to create a PV with a different name.

View File

@@ -6,8 +6,10 @@
[id="virt-about-scratch-space_{context}"]
= About scratch space
[role="_abstract"]
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images.
During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV).
The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the `spec.scratchSpaceStorageClass` field of the `HyperConverged` custom resource.
@@ -21,7 +23,6 @@ CDI requires requesting scratch space with a `file` volume mode, regardless of t
If the origin PVC is backed by `block` volume mode, you must define a storage class capable of provisioning `file` volume mode PVCs.
====
[discrete]
== Manual provisioning
If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image.

View File

@@ -7,6 +7,7 @@
[id="virt-about-services_{context}"]
= About services
[role="_abstract"]
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the `NodePort` and `LoadBalancer` types, exposure to the outside world.
ClusterIP:: Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. `ClusterIP` is the default service type.

View File

@@ -6,7 +6,8 @@
[id="virt-about-smart-cloning_{context}"]
= About smart-cloning
When a data volume is smart-cloned, the following occurs:
[role="_abstract"]
When a data volume is smart-cloned, a set of operations is performed in a specific order.
. A snapshot of the source persistent volume claim (PVC) is created.
. A PVC is created from the snapshot.

View File

@@ -6,4 +6,5 @@
[id="virt-about-ssp-operator_{context}"]
= About the Scheduling, Scale, and Performance (SSP) Operator
The SSP Operator, `ssp-operator`, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator.
[role="_abstract"]
The SSP Operator, `ssp-operator`, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator.

View File

@@ -6,6 +6,7 @@
[id="virt-about-static-and-dynamic-ssh-keys_{context}"]
= About static and dynamic SSH key management
[role="_abstract"]
You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime.
[NOTE]
@@ -13,7 +14,6 @@ You can add public SSH keys to virtual machines (VMs) statically at first boot o
Only {op-system-base-full} 9 supports dynamic key injection.
====
[discrete]
[id="static-key-management_{context}"]
== Static SSH key management
@@ -24,11 +24,10 @@ You can add the key by using one of the following methods:
* Add a key to a single VM when you create it by using the web console or the command line.
* Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project.
.Use cases
Use cases:
* As a VM owner, you can provision all your newly created VMs with a single key.
[discrete]
[id="dynamic-key-management_{context}"]
== Dynamic SSH key management
@@ -36,7 +35,7 @@ You can enable dynamic SSH key management for a VM with {op-system-base-full} 9
When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM.
.Use cases
Use cases:
* Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a `Secret` object that is applied to all VMs in a namespace.
* User access: You can add your access credentials to all VMs that you create and manage.

View File

@@ -6,13 +6,13 @@
[id="virt-about-storage-pools-pvc-templates_{context}"]
= About storage pools created with PVC templates
[role="_abstract"]
If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).
A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.
The PVC template is based on the `spec` stanza of the `PersistentVolumeClaim` object:
.Example `PersistentVolumeClaim` object
[source,yaml]
----
apiVersion: v1

View File

@@ -7,6 +7,7 @@
[id="virt-about-storage-volumes-for-vm-disks_{context}"]
= About volume and access modes for virtual machine disks
[role="_abstract"]
If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.
For a list of known storage providers for {VirtProductName}, see link:https://catalog.redhat.com/search?searchType=software&badges_and_features=OpenShift+Virtualization&subcategories=Storage[the Red Hat Ecosystem Catalog].

View File

@@ -6,7 +6,8 @@
[id="virt-about-tekton-tasks-operator_{context}"]
= About the Tekton Tasks Operator
The Tekton Tasks Operator, `tekton-tasks-operator`, deploys example pipelines showing the usage of OpenShift Pipelines for virtual machines (VMs). This operator also deploys additional OpenShift Pipeline tasks that allow users to create VMs from templates, copy and modify templates, and create data volumes.
[role="_abstract"]
The Tekton Tasks Operator, `tekton-tasks-operator`, deploys example pipelines showing the usage of OpenShift Pipelines for virtual machines (VMs). It also deploys additional OpenShift Pipeline tasks that allow users to create VMs from templates, copy and modify templates, and create data volumes.
//image::cnv_components_tekton-tasks-operator.png[tekton-tasks-operator components]

View File

@@ -6,6 +6,7 @@
[id="virt-about-the-overview-dashboard_{context}"]
= About the {product-title} dashboards page
[role="_abstract"]
Access the {product-title} dashboard, which captures high-level information
about the cluster, by navigating to *Home* -> *Overview* from the {product-title} web console.

View File

@@ -6,6 +6,7 @@
[id="virt-about-uefi-mode-for-vms_{context}"]
= About UEFI mode for virtual machines
[role="_abstract"]
Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.
It stores all the information about initialization and startup in a file with a `.efi` extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.
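For illustration, a hedged fragment of a VM spec that boots the guest in UEFI mode, with secure boot left disabled in this sketch:

[source,yaml]
----
spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false
----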

View File

@@ -6,7 +6,13 @@
[id="virt-about-upgrading-virt_{context}"]
= About updating {VirtProductName}
When you install {VirtProductName}, you select an update channel and an approval strategy. The update channel determines the versions that {VirtProductName} will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability.
[role="_abstract"]
When you install {VirtProductName}, you select an update channel and an approval strategy. The update channel determines the versions that {VirtProductName} will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval.
[NOTE]
====
Both settings can impact supportability.
====
[id="recommended-settings_{context}"]
== Recommended settings
@@ -55,4 +61,4 @@ endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
* Operator Lifecycle Manager (OLM) manages the lifecycle of the {VirtProductName} Operator. The Marketplace Operator, which is deployed during {product-title} installation, makes external Operators available to your cluster.
* OLM provides z-stream and minor version updates for {VirtProductName}. Minor version updates become available when you update {product-title} to the next minor version. You cannot update {VirtProductName} to the next minor version without first updating {product-title}.

View File

@@ -6,7 +6,10 @@
[id="virt-about-using-virtual-gpus_{context}"]
= About using virtual GPUs with {VirtProductName}
Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). {VirtProductName} can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the `HyperConverged` custom resource (CR). This automation is especially useful for large clusters.
[role="_abstract"]
Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). {VirtProductName} can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the `HyperConverged` custom resource (CR).
This automation is especially useful for large clusters.
[NOTE]
====

View File

@@ -6,6 +6,7 @@
[id="virt-about-virt-operator_{context}"]
= About the {VirtProductName} Operator
[role="_abstract"]
The {VirtProductName} Operator, `virt-operator`, deploys, upgrades, and manages {VirtProductName} without disrupting current virtual machine (VM) workloads. In addition, the {VirtProductName} Operator deploys the common instance types and common preferences.
image::cnv_components_virt-operator.png[virt-operator components]

View File

@@ -6,6 +6,7 @@
[id="virt-about-vm-snapshots_{context}"]
= About snapshots
[role="_abstract"]
A _snapshot_ represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by
the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version.
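For illustration, a hedged sketch of a `VirtualMachineSnapshot` object that targets a VM; the names are placeholders, and the API version can differ between releases:

[source,yaml]
----
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: example-vm-snapshot
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: example-vm
----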
@@ -29,7 +30,7 @@ Cloning a VM with a vTPM device attached to it or creating a new VM from its sna
* Restore a VM from a snapshot
* Delete an existing VM snapshot
.VM snapshot controller and custom resources
== VM snapshot controller and custom resources
The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots:

View File

@@ -7,6 +7,7 @@
[id="virt-about-vmis_{context}"]
= About virtual machine instances
[role="_abstract"]
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the `oc` command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the {VirtProductName} environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
@@ -24,4 +25,4 @@ When you delete a VM, the associated VMI is automatically deleted. You delete a
Before you uninstall {VirtProductName}, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
====
When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the `RestartRequired` VM condition. Changes are effective on the next reboot, and the condition is removed.

View File

@@ -6,6 +6,7 @@
[id="virt-about-vms-and-boot-sources_{context}"]
= About VM boot sources
[role="_abstract"]
Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications.
Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.

View File

@@ -6,10 +6,13 @@
[id="virt-about-vtpm-devices_{context}"]
= About vTPM devices
[role="_abstract"]
A virtual Trusted Platform Module (vTPM) device functions like a
physical Trusted Platform Module (TPM) hardware chip.
You can use a vTPM device with any operating system, but Windows 11 requires
the presence of a TPM chip to install or boot. A vTPM device allows VMs created
the presence of a TPM chip to install or boot.
A vTPM device allows VMs created
from a Windows 11 image to function without a physical TPM chip.
{VirtProductName} supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. If you do not specify the storage class for this PVC, {VirtProductName} uses the default storage class for virtualization workloads. If the default storage class for virtualization workloads is not set, {VirtProductName} uses the default storage class for the cluster.
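For illustration, a hedged fragment of a VM spec that attaches a vTPM device and persists its state:

[source,yaml]
----
spec:
  template:
    spec:
      domain:
        devices:
          tpm:
            persistent: true
----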

View File

@@ -6,6 +6,7 @@
[id="virt-about-workload-security_{context}"]
= About workload security
[role="_abstract"]
By default, virtual machine (VM) workloads do not run with root privileges in {VirtProductName}, and there are no supported {VirtProductName} features that require root privileges.
For each VM, a `virt-launcher` pod runs an instance of `libvirt` in _session mode_ to manage the VM process. In session mode, the `libvirt` daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege.

View File

@@ -6,6 +6,7 @@
[id="virt-about-workload-updates_{context}"]
= VM workload updates
[role="_abstract"]
When you update {VirtProductName}, virtual machine workloads, including `libvirt`, `virt-launcher`, and `qemu`, update automatically if they support live migration.
[NOTE]
@@ -33,7 +34,6 @@ If you enable both `LiveMigrate` and `Evict`:
* VMIs that do not support live migration use the `Evict` update strategy. If a VMI is controlled by a `VirtualMachine` object that has `runStrategy: Always` set, a new VMI is created in a new pod with updated components.
[discrete]
[id="migration-attempts-timeouts_{context}"]
== Migration attempts and timeouts

View File

@@ -6,6 +6,7 @@
[id="virt-access-configuration-considerations_{context}"]
= Access configuration considerations
[role="_abstract"]
Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements.
[NOTE]

View File

@@ -6,6 +6,7 @@
[id="virt-accessing-exported-vm-manifests_{context}"]
= Accessing exported virtual machine manifests
[role="_abstract"]
After you export a virtual machine (VM) or snapshot, you can get the `VirtualMachine` manifest and related information from the export server.
.Prerequisites
@@ -51,10 +52,8 @@ $ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --
$ oc get vmexport <export_name> -o yaml
----
. Review the `status.links` stanza, which is divided into `external` and `internal` sections. Note the `manifests.url` fields within each section:
. Review the `status.links` stanza, which is divided into `external` and `internal` sections. Note the `manifests.url` fields within each section, for example:
+
.Example output
[source,yaml]
----
apiVersion: export.kubevirt.io/v1beta1

View File

@@ -6,6 +6,7 @@
[id="virt-accessing-node-exporter-outside-cluster_{context}"]
= Accessing the node exporter service outside the cluster
[role="_abstract"]
You can access the node-exporter service outside the cluster and view the exposed metrics.
.Prerequisites
@@ -28,7 +29,8 @@ $ oc expose service -n <namespace> <node_exporter_service_name>
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
----
+
.Example output
Example output:
+
[source,terminal]
----
NAME DNS
@@ -41,7 +43,8 @@ node-exporter-service node-exporter-service-dynamation.apps.cluster.example.or
$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
----
+
.Example output
Example output:
+
[source,terminal]
----
go_gc_duration_seconds{quantile="0"} 1.5382e-05

View File

@@ -6,6 +6,7 @@
[id="virt-accessing-rdp-console_{context}"]
= Connecting to a Windows virtual machine with an RDP console
[role="_abstract"]
Create a Kubernetes `Service` object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client.
.Prerequisites
@@ -82,7 +83,8 @@ $ oc create -f <service_name>.yaml
$ oc get service -n example-namespace
----
+
.Example output for `NodePort` service
Example output for `NodePort` service:
+
[source,terminal]
----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
@@ -96,7 +98,8 @@ rdpservice NodePort 172.30.232.73 <none> 3389:30000/TCP 5m
$ oc get node <node_name> -o wide
----
+
.Example output
Example output:
+
[source,terminal]
----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP

View File

@@ -6,6 +6,7 @@
[id="virt-accessing-serial-console_{context}"]
= Accessing the serial console of a virtual machine instance
[role="_abstract"]
The `virtctl console` command opens a serial console to the specified virtual
machine instance.

View File

@@ -6,6 +6,7 @@
[id="virt-accessing-vnc-console_{context}"]
= Accessing the graphical console of a virtual machine instance with VNC
[role="_abstract"]
The `virtctl` client utility can use the `remote-viewer` function to open a
graphical console to a running virtual machine instance. This capability is
included in the `virt-viewer` package.

View File

@@ -7,7 +7,8 @@
[id="virt-add-boot-order-web_{context}"]
= Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
[role="_abstract"]
You can add items to a boot order list by using the web console.
.Procedure
@@ -24,7 +25,7 @@ Add items to a boot order list by using the web console.
. Add any additional disks or NICs to the boot order list.
. Click *Save*.
+
[NOTE]
====
If the virtual machine is running, changes to *Boot Order* will not take effect until you restart the virtual machine.

View File

@@ -10,6 +10,7 @@
:FeatureName: Golden image support for heterogeneous clusters
include::snippets/technology-preview.adoc[]
[role="_abstract"]
Add a custom golden image in a heterogeneous cluster by setting the `ssp.kubevirt.io/dict.architectures` annotation in the `spec.dataImportCronTemplates.metadata.annotations` stanza of the `HyperConverged` custom resource (CR). This annotation lists the architectures supported by the image.
.Prerequisites

View File

@@ -7,6 +7,7 @@
= Adding a disk to a virtual machine
[role="_abstract"]
You can add a virtual disk to a virtual machine (VM) by using the {product-title} web console.
.Procedure
@@ -23,7 +24,7 @@ You can add a virtual disk to a virtual machine (VM) by using the {product-title
.. Optional: You can clear *Apply optimized StorageProfile settings* to change the *Volume Mode* and *Access Mode* for the virtual disk. If you do not specify these parameters, the system uses the default values from the `kubevirt-storage-class-defaults` config map.
. Click *Add*.
+
[NOTE]
====
If the VM is running, you must restart the VM to apply the change.

View File

@@ -5,6 +5,7 @@
[id="virt-adding-a-boot-source-web_{context}"]
= Adding boot source to a template
[role="_abstract"]
You can add a boot source or operating system image to a virtual machine (VM) template. When templates are configured with an operating system image, they are labeled *Source available* on the *Catalog* page. After you add a boot source to a template, you can create a VM from the template.
There are four methods for selecting and adding a boot source in the web console:
@@ -48,4 +49,6 @@ Provided boot sources are updated automatically to the latest version of the ope
.. Click *Save and import* if you imported content from a URL or the registry.
.. Click *Save and clone* if you cloned an existing PVC.
.Result
Your custom virtual machine template with a boot source is listed on the *Catalog* page. You can use this template to create a virtual machine.

View File

@@ -8,6 +8,7 @@
[id="virt-adding-container-disk-as-cd_{context}"]
= Installing VirtIO drivers from a container disk added as a SATA CD drive
[role="_abstract"]
You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive.
[TIP]

View File

@@ -7,6 +7,7 @@
[id="virt-adding-kernel-arguments-enable-IOMMU_{context}"]
= Adding kernel arguments to enable the IOMMU driver
[role="_abstract"]
To enable the IOMMU driver in the kernel, create the `MachineConfig` object and add the kernel arguments.
.Prerequisites
@@ -61,7 +62,8 @@ $ oc create -f 100-worker-kernel-arg-iommu.yaml
$ oc get MachineConfig
----
+
.Example output
Example output:
+
[source,terminal]
----
NAME IGNITIONVERSION AGE
@@ -85,9 +87,10 @@ $ dmesg | grep -i iommu
----
* If IOMMU is enabled, output is displayed as shown in the following example:
+
.Example output
Example output:
+
[source,terminal]
----
Intel: [ 0.000000] DMAR: Intel(R) IOMMU Driver
AMD: [ 0.000000] AMD-Vi: IOMMU Initialized
----
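For reference, a hedged sketch of the kind of `MachineConfig` object this procedure applies; the object name, role label, and Ignition version are illustrative, and `intel_iommu=on` assumes Intel hardware:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 100-worker-iommu
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
  - intel_iommu=on
----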

View File

@@ -16,11 +16,13 @@ endif::[]
= {title} when creating a VM from a template
ifdef::static-key[]
[role="_abstract"]
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the {product-title} web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project.
endif::[]
ifdef::dynamic-key[]
[role="_abstract"]
You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the {product-title} web console. Then, you can update the key at runtime.
[NOTE]

View File

@@ -6,6 +6,7 @@
[id="virt-adding-public-key-vm-cli_{context}"]
= Adding a key when creating a VM by using the CLI
[role="_abstract"]
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot.
The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data.
@@ -17,9 +18,10 @@ The key is added to the VM as a cloud-init data source. This method separates th
.Procedure
. Create a manifest file for a `VirtualMachine` object and a `Secret` object:
. Create a manifest file for a `VirtualMachine` object and a `Secret` object.
+
Example manifest:
+
.Example manifest
[source,yaml]
----
include::snippets/virt-static-key.yaml[]
@@ -50,7 +52,8 @@ $ virtctl start vm example-vm -n example-namespace
$ oc describe vm example-vm -n example-namespace
----
+
.Example output
Example output:
+
[source,yaml]
----
apiVersion: kubevirt.io/v1

View File

@@ -7,7 +7,8 @@
= Adding a secret, config map, or service account to a virtual machine
You add a secret, config map, or service account to a virtual machine by using the {product-title} web console.
[role="_abstract"]
You can add a secret, config map, or service account to a virtual machine by using the {product-title} web console.
These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.

View File

@@ -6,6 +6,7 @@
[id="virt-adding-tls-certificates-for-authenticating-dv-imports_{context}"]
= Adding TLS certificates for authenticating data volume imports
[role="_abstract"]
TLS certificates for registry or HTTPS endpoints must be added to a config map
to import data from these sources. This config map must be present
in the namespace of the destination data volume.

View File

@@ -6,6 +6,7 @@
[id="virt-adding-vm-to-service-mesh_{context}"]
= Adding a virtual machine to a service mesh
[role="_abstract"]
To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the `sidecar.istio.io/inject` annotation to `true`. Then expose your VM as a service to view your application in the mesh.
[IMPORTANT]
@@ -20,9 +21,10 @@ To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These
.Procedure
. Edit the VM configuration file to add the `sidecar.istio.io/inject: "true"` annotation:
. Edit the VM configuration file to add the `sidecar.istio.io/inject: "true"` annotation.
+
Example configuration file:
+
.Example configuration file
[source,yaml]
----
apiVersion: kubevirt.io/v1

View File

@@ -6,6 +6,7 @@
[id="virt-adding-vtpm-to-vm_{context}"]
= Adding a vTPM device to a virtual machine
[role="_abstract"]
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine
(VM) allows you to run a VM created from a Windows 11 image without a physical
TPM device. A vTPM device also stores secrets for that VM.

View File

@@ -6,6 +6,7 @@
[id="virt-additional-scc-for-kubevirt-controller_{context}"]
= Additional SCCs and permissions for the kubevirt-controller service account
[role="_abstract"]
Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.
The `virt-controller` is a cluster controller that creates the `virt-launcher` pods for virtual machines in the cluster.
@@ -19,18 +20,18 @@ The `kubevirt-controller` service account is granted additional SCCs and Linux c
The `kubevirt-controller` service account is granted the following SCCs:
* `scc.AllowHostDirVolumePlugin = true` +
`scc.AllowHostDirVolumePlugin = true`::
This allows virtual machines to use the hostpath volume plugin.
* `scc.AllowPrivilegedContainer = false` +
`scc.AllowPrivilegedContainer = false`::
This ensures the `virt-launcher` pod is not run as a privileged container.
* `scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}`
`scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}`::
** `SYS_NICE` allows setting the CPU affinity.
** `NET_BIND_SERVICE` allows DHCP and Slirp operations.
* `SYS_NICE` allows setting the CPU affinity.
* `NET_BIND_SERVICE` allows DHCP and Slirp operations.
.Viewing the SCC and RBAC definitions for the kubevirt-controller
== Viewing the SCC and RBAC definitions for the kubevirt-controller
You can view the `SecurityContextConstraints` definition for the `kubevirt-controller` by using the `oc` tool:
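
A sketch of that lookup, assuming the constraint object is named `kubevirt-controller` and with the output omitted:

[source,terminal]
----
$ oc get scc kubevirt-controller -o yaml
----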

View File

@@ -6,10 +6,13 @@
[id="virt-analyzing-datavolume-conditions-and-events_{context}"]
= Analyzing data volume conditions and events
[role="_abstract"]
By inspecting the `Conditions` and `Events` sections generated by the `describe`
command, you determine the state of the data volume
in relation to persistent volume claims (PVCs), and whether or
not an operation is actively running or completed. You might also receive messages
not an operation is actively running or completed.
You might also receive messages
that offer specific details about the status of the data volume, and how
it came to be in its current state.
@@ -28,9 +31,10 @@ The `Message` indicates which PVC owns the data volume.
+
`Message`, in the `Events` section, provides further details including how
long the PVC has been bound (`Age`) and by what resource (`From`),
in this case `datavolume-controller`:
in this case `datavolume-controller`.
+
Example output:
+
.Example output
[source,terminal]
----
Status:
@@ -62,9 +66,10 @@ the `Message` displays an inability to connect due to a `404`, listed in the
+
From this information, you conclude that an import operation was running,
creating contention for other operations that are
attempting to access the data volume:
attempting to access the data volume.
+
Example output:
+
.Example output
[source,terminal]
----
Status:
@@ -85,9 +90,10 @@ Status:
* `Ready` – If `Type` is `Ready` and `Status` is `True`, then the data volume is ready
to be used, as in the following example. If the data volume is not ready to be
used, the `Status` is `False`:
used, the `Status` is `False`.
+
Example output:
+
.Example output
[source,terminal]
----
Status:

View File

@@ -7,9 +7,11 @@
= Applying node placement rules
ifndef::openshift-rosa,openshift-dedicated[]
[role="_abstract"]
You can apply node placement rules by editing a `Subscription`, `HyperConverged`, or `HostPathProvisioner` object using the command line.
endif::openshift-rosa,openshift-dedicated[]
ifdef::openshift-rosa,openshift-dedicated[]
[role="_abstract"]
You can apply node placement rules by editing a `HyperConverged` or `HostPathProvisioner` object using the command line.
endif::openshift-rosa,openshift-dedicated[]
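
As an illustration of the pattern, node placement in the `HyperConverged` object is expressed through `nodePlacement` stanzas under `spec.infra` and `spec.workloads`. The node labels in this sketch are placeholders:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.com/infra: "true" # placeholder label for nodes that run infrastructure components
  workloads:
    nodePlacement:
      nodeSelector:
        example.com/virt-workloads: "true" # placeholder label for nodes that run virtual machines
----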

View File

@@ -6,12 +6,14 @@
[id="virt-assigning-pci-device-virtual-machine_{context}"]
= Assigning a PCI device to a virtual machine
[role="_abstract"]
When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.
.Procedure
* Assign the PCI device to a virtual machine as a host device.
+
.Example
Example:
+
[source,yaml]
----
apiVersion: kubevirt.io/v1
@@ -31,7 +33,8 @@ spec:
[source,terminal]
$ lspci -nnk | grep NVIDIA
+
.Example output
Example output:
+
[source,terminal]
----
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

View File

@@ -6,6 +6,7 @@
[id="virt-assigning-mdev-vm-cli_{context}"]
= Assigning a vGPU to a VM by using the CLI
[role="_abstract"]
Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).
.Prerequisites
@@ -15,9 +16,10 @@ Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).
.Procedure
* Assign the mediated device to a virtual machine (VM) by editing the `spec.domain.devices.gpus` stanza of the `VirtualMachine` manifest:
* Assign the mediated device to a virtual machine (VM) by editing the `spec.domain.devices.gpus` stanza of the `VirtualMachine` manifest.
+
Example virtual machine manifest:
+
.Example virtual machine manifest
[source,yaml]
----
apiVersion: kubevirt.io/v1
@@ -41,4 +43,4 @@ spec:
[source,terminal]
----
$ lspci -nnk | grep <device_name>
----
----

View File

@@ -6,7 +6,9 @@
[id="virt-assigning-vgpu-vm-web_{context}"]
= Assigning a vGPU to a VM by using the web console
[role="_abstract"]
You can assign virtual GPUs to virtual machines by using the {product-title} web console.
[NOTE]
====
You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems.
@@ -29,4 +31,4 @@ You can add hardware devices to virtual machines created from customized templat
. Click *Save*.
.Verification
* To confirm that the devices were added to the VM, click the *YAML* tab and review the `VirtualMachine` configuration. Mediated devices are added to the `spec.domain.devices` stanza.
* To confirm that the devices were added to the VM, click the *YAML* tab and review the `VirtualMachine` configuration. Mediated devices are added to the `spec.domain.devices` stanza.

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-virtio-disk-to-windows-existing_{context}"]
= Attaching VirtIO container disk to an existing Windows VM
[role="_abstract"]
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.
.Procedure
@@ -14,4 +15,4 @@ You must attach the VirtIO container disk to the Windows VM to install the neces
. Go to *VM Details* -> *Configuration* -> *Storage*.
. Select the *Mount Windows drivers disk* checkbox.
. Click *Save*.
. Start the VM, and connect to a graphical console.
. Start the VM, and connect to a graphical console.

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-virtio-disk-to-windows_{context}"]
= Attaching VirtIO container disk to Windows VMs during installation
[role="_abstract"]
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM.
.Procedure
@@ -15,4 +16,6 @@ You must attach the VirtIO container disk to the Windows VM to install the neces
. Click the *Customize VirtualMachine parameters*.
. Click *Create VirtualMachine*.
.Result
After the VM is created, the `virtio-win` SATA CD disk will be attached to the VM.

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-vm-secondary-network-cli_{context}"]
= Configuring a VM network interface by using the CLI
[role="_abstract"]
You can configure a virtual machine (VM) network interface for a bridge network by using the command line.
.Prerequisites
@@ -49,7 +50,7 @@ $ oc apply -f example-vm.yaml
----
. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
+
[NOTE]
====
When running {VirtProductName} on {ibm-z-name} using an OSA card, you must register the MAC address of the device. For more information, see link:https://www.ibm.com/docs/en/linux-on-systems?topic=choices-osa-interface-traffic-forwarding[OSA interface traffic forwarding] (IBM documentation).

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-vm-to-ovn-secondary-nw-cli_{context}"]
= Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI
[role="_abstract"]
You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration.
.Prerequisites
@@ -53,4 +54,4 @@ spec:
$ oc apply -f <filename>.yaml
----
. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.

View File

@@ -3,9 +3,10 @@
// * virt/vm_networking/virt-connecting-vm-to-primary-udn.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-attaching-vm-to-primary-udn-web_{context}"]
[id="virt-attaching-vm-to-primary-udn-web_{context}"]
= Attaching a virtual machine to the primary user-defined network by using the web console
[role="_abstract"]
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the {product-title} web console. VMs that are created in a namespace where the primary UDN is configured are automatically attached to the UDN with the Layer 2 bridge network binding plugin.
To attach a VM to the primary UDN by using the Plug a Simple Socket Transport (passt) binding, enable the plugin and configure the VM network interface in the web console.
@@ -41,4 +42,4 @@ include::snippets/technology-preview.adoc[]
. Click *Save*.
. If your VM is running, restart it for the changes to take effect.
. If your VM is running, restart it for the changes to take effect.

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-vm-to-primary-udn_{context}"]
= Attaching a virtual machine to the primary user-defined network by using the CLI
[role="_abstract"]
You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the CLI.
.Prerequisites
@@ -60,4 +61,4 @@ include::snippets/technology-preview.adoc[]
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
----
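
For reference, the primary UDN itself is defined in the namespace of the VM before the VM is created. The following is a sketch of such a definition; the name and subnet are placeholders:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: primary-udn
  namespace: example-namespace
spec:
  topology: Layer2
  layer2:
    role: Primary # makes this network the primary network for pods and VMs in the namespace
    subnets:
      - "10.100.0.0/16" # placeholder subnet
----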

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-vm-to-secondary-udn_{context}"]
= Attaching a virtual machine to secondary user-defined networks by using the CLI
[role="_abstract"]
You can connect a virtual machine (VM) to multiple secondary cluster-scoped user-defined networks (CUDNs) by configuring the interface binding.
.Prerequisites
@@ -60,4 +61,4 @@ where:
[NOTE]
====
When running {VirtProductName} on {ibm-z-name} using an OSA card, be aware that the OSA card only forwards network traffic to devices that are registered with the OSA device. As a result, any traffic destined for unregistered devices is not forwarded.
====
====
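
For reference, a cluster-scoped UDN that VM interfaces can attach to looks roughly like the following sketch; the selector label and subnet are placeholders:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: example-cudn
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: example-namespace # namespaces that can use this network
  network:
    topology: Layer2
    layer2:
      role: Secondary # attached in addition to the default pod network
      subnets:
        - "192.168.100.0/24" # placeholder subnet
----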

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-vm-to-sriov-network-web-console_{context}"]
= Connecting a VM to an SR-IOV network by using the web console
[role="_abstract"]
You can connect a VM to the SR-IOV network by including the network details in the VM configuration.
.Prerequisites

View File

@@ -6,6 +6,7 @@
[id="virt-attaching-vm-to-sriov-network_{context}"]
= Connecting a virtual machine to an SR-IOV network by using the CLI
[role="_abstract"]
You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration.
.Prerequisites

View File

@@ -6,9 +6,10 @@
[id="virt-automatic-certificates-renewal_{context}"]
= TLS certificates
[role="_abstract"]
TLS certificates for {VirtProductName} components are renewed and rotated automatically. You are not required to refresh them manually.
.Automatic renewal schedules
== Automatic renewal schedules
TLS certificates are automatically deleted and replaced according to the following schedule:

View File

@@ -7,6 +7,7 @@
[id="virt-autoupdate-custom-bootsource_{context}"]
= Enabling automatic updates for custom boot sources
[role="_abstract"]
{VirtProductName} automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the `HyperConverged` custom resource (CR).
.Prerequisites
@@ -25,7 +26,6 @@ $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace}
. Edit the `HyperConverged` CR, adding the appropriate template and boot source in the `dataImportCronTemplates` section. For example:
+
.Example custom resource
[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1

View File

@@ -6,6 +6,7 @@ ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
[id="virt-aws-bm_{context}"]
= {VirtProductName} on AWS bare metal
[role="_abstract"]
You can run {VirtProductName} on an Amazon Web Services (AWS) bare metal {product-title} cluster.
[NOTE]
@@ -115,4 +116,4 @@ Hosted control planes (HCPs)::
--
* HCPs for {VirtProductName} are not currently supported on AWS infrastructure.
--
endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]
endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[]

View File

@@ -5,7 +5,11 @@
:_mod-docs-content-type: PROCEDURE
[id="virt-binding-devices-vfio-driver_{context}"]
= Binding PCI devices to the VFIO driver
To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for `vendor-ID` and `device-ID` from each device and create a list with the values. Add this list to the `MachineConfig` object. The `MachineConfig` Operator generates the `/etc/modprobe.d/vfio.conf` on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.
[role="_abstract"]
To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for `vendor-ID` and `device-ID` from each device and create a list with the values. Add this list to the `MachineConfig` object.
The `MachineConfig` Operator generates the `/etc/modprobe.d/vfio.conf` file on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.
.Prerequisites
* You added kernel arguments to enable IOMMU for the CPU.
@@ -19,7 +23,8 @@ To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values
$ lspci -nnv | grep -i nvidia
----
+
.Example output
Example output:
+
[source,terminal]
----
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
@@ -32,7 +37,8 @@ $ lspci -nnv | grep -i nvidia
include::snippets/butane-version.adoc[]
====
+
.Example
Example:
+
[source,yaml,subs="attributes+"]
----
variant: openshift
@@ -80,7 +86,8 @@ $ oc apply -f 100-worker-vfiopci.yaml
$ oc get MachineConfig
----
+
.Example output
Example output:
+
[source,terminal]
----
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
@@ -103,7 +110,8 @@ $ lspci -nnk -d 10de:
----
The output confirms that the VFIO driver is being used.
+
.Example output
Example output:
+
----
04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1eb8]

View File

@@ -6,6 +6,7 @@
[id="virt-booting-vms-uefi-mode_{context}"]
= Booting virtual machines in UEFI mode
[role="_abstract"]
You can configure a virtual machine to boot in UEFI mode by editing the `VirtualMachine` manifest.
.Prerequisites
@@ -14,9 +15,9 @@ You can configure a virtual machine to boot in UEFI mode by editing the `Virtual
.Procedure
. Edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode:
. Edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode.
+
.Booting in UEFI mode with secure boot active
Booting in UEFI mode with secure boot active:
[source,yaml]
----
apiVersion: kubevirt.io/v1

View File

@@ -6,7 +6,10 @@
[id="virt-building-real-time-container-disk-image_{context}"]
= Building a container disk image for {op-system-base} virtual machines
You can build a custom {op-system-base-full} 8 OS image in `qcow2` format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmUnderTestContainerDiskImage` attribute of the real-time checkup config map.
[role="_abstract"]
You can build a custom {op-system-base-full} 8 OS image in `qcow2` format and use it to create a container disk image.
You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmUnderTestContainerDiskImage` attribute of the real-time checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The _image builder VM_ is a {op-system-base} 8 VM that can be used to build custom {op-system-base} images.
@@ -162,4 +165,4 @@ $ podman build . -t real-time-rhel:latest
$ podman push real-time-rhel:latest
----
. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the real-time checkup config map.
. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the real-time checkup config map.

View File

@@ -6,7 +6,10 @@
[id="virt-building-vm-containerdisk-image_{context}"]
= Building a container disk image for {op-system-base} virtual machines
You can build a custom {op-system-base-full} 9 OS image in `qcow2` format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmContainerDiskImage` attribute of the DPDK checkup config map.
[role="_abstract"]
You can build a custom {op-system-base-full} 9 OS image in `qcow2` format and use it to create a container disk image.
You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmContainerDiskImage` attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The _image builder VM_ is a {op-system-base} 9 VM that can be used to build custom {op-system-base} images.
@@ -163,4 +166,4 @@ $ podman build . -t dpdk-rhel:latest
$ podman push dpdk-rhel:latest
----
. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the DPDK checkup config map.
. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the DPDK checkup config map.

View File

@@ -6,6 +6,7 @@
[id="virt-canceling-vm-migration-cli_{context}"]
= Canceling live migration by using the CLI
[role="_abstract"]
Cancel the live migration of a virtual machine by deleting the
`VirtualMachineInstanceMigration` object associated with the migration.
@@ -19,7 +20,6 @@ Cancel the live migration of a virtual machine by deleting the
* Delete the `VirtualMachineInstanceMigration` object that triggered the live
migration, `migration-job` in this example:
+
[source,terminal]
----
$ oc delete vmim migration-job

View File

@@ -6,6 +6,7 @@
[id="virt-canceling-vm-migration-web_{context}"]
= Canceling live migration by using the web console
[role="_abstract"]
You can cancel the live migration of a virtual machine (VM) by using the {product-title} web console.
.Prerequisites

View File

@@ -14,58 +14,77 @@
[id="virt-cdi-supported-operations-matrix_{context}"]
= CDI supported operations matrix
[role="_abstract"]
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
|===
|Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
| KubeVirt (QCOW2)
|&#10003; QCOW2 +
&#10003; GZ* +
a|&#10003; QCOW2
&#10003; GZ*
&#10003; XZ*
|&#10003; QCOW2** +
&#10003; GZ* +
a|&#10003; QCOW2**
&#10003; GZ*
&#10003; XZ*
|&#10003; QCOW2 +
&#10003; GZ* +
a|&#10003; QCOW2
&#10003; GZ*
&#10003; XZ*
| &#10003; QCOW2* +
&#9633; GZ +
a| &#10003; QCOW2*
&#9633; GZ
&#9633; XZ
| &#10003; QCOW2* +
&#10003; GZ* +
a| &#10003; QCOW2*
&#10003; GZ*
&#10003; XZ*
| KubeVirt (RAW)
|&#10003; RAW +
&#10003; GZ +
a|&#10003; RAW
&#10003; GZ
&#10003; XZ
|&#10003; RAW +
&#10003; GZ +
a|&#10003; RAW
&#10003; GZ
&#10003; XZ
| &#10003; RAW +
&#10003; GZ +
a| &#10003; RAW
&#10003; GZ
&#10003; XZ
| &#10003; RAW* +
&#9633; GZ +
a| &#10003; RAW*
&#9633; GZ
&#9633; XZ
| &#10003; RAW* +
&#10003; GZ* +
a| &#10003; RAW*
&#10003; GZ*
&#10003; XZ*
|===
&#10003; Supported operation
&#9633; Unsupported operation
$$*$$ Requires scratch space
$$**$$ Requires scratch space if a custom certificate authority is required
[horizontal]
&#10003;:: Supported operation
&#9633;:: Unsupported operation
$$*$$:: Requires scratch space
$$**$$:: Requires scratch space if a custom certificate authority is required

View File

@@ -34,7 +34,7 @@ $ oc patch vm/<vm_name> --type merge -p '{"spec":{"instancetype":{"name": "<new_
$ oc get vms/<vm_name> -o json | jq .status.instancetypeRef
----
+
*Example output*
Example output:
+
[source,terminal]
----
@@ -54,7 +54,7 @@ $ oc get vms/<vm_name> -o json | jq .status.instancetypeRef
$ oc get vmi/<vm_name> -o json | jq .spec.domain.cpu
----
+
*Example output that verifies that the revision uses 2 vCPUs*
Example output that verifies that the revision uses 2 vCPUs:
+
[source,terminal]
----

View File

@@ -6,6 +6,7 @@
[id="virt-changing-update-settings_{context}"]
= Changing update settings
[role="_abstract"]
You can change the update channel and approval strategy for your {VirtProductName} Operator subscription by using the web console.
.Prerequisites

View File

@@ -6,6 +6,7 @@
[id="virt-checking-cluster-dpdk-readiness_{context}"]
= Running a DPDK checkup by using the CLI
[role="_abstract"]
Use a predefined checkup to verify that your {product-title} cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
@@ -24,11 +25,11 @@ You run a DPDK checkup by performing the following steps:
.Procedure
. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the DPDK checkup:
. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the DPDK checkup.
+
Example service account, role, and rolebinding manifest file:
+
.Example service account, role, and rolebinding manifest file
[%collapsible]
====
[source,yaml]
----
---
@@ -85,7 +86,6 @@ roleRef:
kind: Role
name: kubevirt-dpdk-checker
----
====
. Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest:
+
@@ -94,9 +94,10 @@ roleRef:
$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
----
. Create a `ConfigMap` manifest that contains the input parameters for the checkup:
. Create a `ConfigMap` manifest that contains the input parameters for the checkup.
+
Example input config map:
+
.Example input config map
[source,yaml]
----
apiVersion: v1
@@ -122,9 +123,10 @@ data:
$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
----
. Create a `Job` manifest to run the checkup:
. Create a `Job` manifest to run the checkup.
+
Example job manifest:
+
.Example job manifest
[source,yaml,subs="attributes+"]
----
apiVersion: batch/v1
@@ -182,7 +184,8 @@ $ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --time
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
----
+
.Example output config map (success)
Example output config map (success):
+
[source,yaml]
----
apiVersion: v1

View File

@@ -6,6 +6,7 @@
[id="virt-checking-storage-configuration_{context}"]
= Running a storage checkup by using the CLI
[role="_abstract"]
Use a predefined checkup to verify that the {product-title} cluster storage is configured optimally to run {VirtProductName} workloads.
.Prerequisites
@@ -32,11 +33,11 @@ subjects:
.Procedure
. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest file for the storage checkup:
. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest file for the storage checkup.
+
Example service account, role, and rolebinding manifest:
+
.Example service account, role, and rolebinding manifest
[%collapsible]
====
[source,yaml]
----
---
@@ -84,7 +85,6 @@ roleRef:
kind: Role
name: storage-checkup-role
----
====
. Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest in the target namespace:
+
@@ -95,7 +95,8 @@ $ oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml
. Create a `ConfigMap` and `Job` manifest file. The config map contains the input parameters for the checkup job.
+
.Example input config map and job manifest
Example input config map and job manifest:
+
[source,yaml,subs="attributes+"]
----
---
@@ -152,7 +153,8 @@ $ oc wait job storage-checkup -n <target_namespace> --for condition=complete --t
$ oc get configmap storage-checkup-config -n <target_namespace> -o yaml
----
+
.Example output config map (success)
Example output config map (success):
+
[source,yaml,subs="attributes+"]
----
apiVersion: v1

View File

@@ -6,6 +6,7 @@
[id="virt-cloning-a-datavolume_{context}"]
= Smart-cloning a PVC by using the CLI
[role="_abstract"]
You can smart-clone a persistent volume claim (PVC) by using the command line to create a `DataVolume` object.
.Prerequisites
@@ -15,7 +16,8 @@ You can smart-clone a persistent volume claim (PVC) by using the command line to
* The source and target PVCs must have the same storage provider and volume mode.
* The value of the `driver` key of the `VolumeSnapshotClass` object must match the value of the `provisioner` key of the `StorageClass` object as shown in the following example:
+
.Example `VolumeSnapshotClass` object
Example `VolumeSnapshotClass` object:
+
[source,yaml]
----
kind: VolumeSnapshotClass
@@ -24,7 +26,8 @@ driver: openshift-storage.rbd.csi.ceph.com
# ...
----
+
.Example `StorageClass` object
Example `StorageClass` object:
+
[source,yaml]
----
kind: StorageClass

View File

@@ -7,6 +7,7 @@
[id="virt-cloning-pvc-of-vm-disk-into-new-datavolume_{context}"]
= Cloning the PVC of a VM disk into a new data volume
[role="_abstract"]
You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk
into a new data volume. The new data volume can then be used for a new virtual
machine.
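
For orientation, the clone is typically expressed as a `DataVolume` object with a `pvc` source; the names and size in this sketch are placeholders:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-datavolume
spec:
  source:
    pvc:
      namespace: example-namespace # namespace of the existing VM disk PVC
      name: example-vm-disk # name of the existing VM disk PVC
  storage:
    resources:
      requests:
        storage: 30Gi # placeholder size; use at least the size of the source disk
----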

Some files were not shown because too many files have changed in this diff.