
CNV-70051: Deleting unused CNV modules

Ashleigh Brennan
2025-12-16 15:32:41 -06:00
committed by openshift-cherrypick-robot
parent 1d4e143f67
commit 27d3011822
20 changed files with 3 additions and 867 deletions


@@ -1,19 +0,0 @@
// Module included in the following assembly:
//
// * virt/virtual_machines/virt-creating-and-using-boot-sources.adoc
//
:_mod-docs-content-type: CONCEPT
[id="virt-about-auto-bootsource-updates_{context}"]
= About automatic boot source updates
[role="_abstract"]
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs.
By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides. You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.
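For example, you can disable the feature gate by patching the `HyperConverged` custom resource. The following command is a minimal sketch that assumes the gate is exposed under `spec.featureGates` of the `HyperConverged` CR:
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
----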
When the `enableCommonBootImageImport` feature gate is disabled, `DataSource` objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by populating a PVC with an operating system, optionally creating a volume snapshot from the PVC, and then referring to the PVC or volume snapshot from the `DataSource` object.
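For reference, a `DataSource` object that points to a manually populated PVC might look like the following sketch. The object name, the PVC name, and the `openshift-virtualization-os-images` namespace are placeholders used for illustration:
[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  name: <datasource_name>
  namespace: openshift-virtualization-os-images
spec:
  source:
    pvc:
      name: <pvc_name>
      namespace: openshift-virtualization-os-images
----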
_Custom_ boot sources that are not provided by {VirtProductName} are not controlled by the feature gate. You must manage them individually by editing the `HyperConverged` custom resource (CR). You can also use this method to manage individual system-defined boot sources.
Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {product-title} web console.


@@ -1,17 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virtual_disks/virt-uploading-local-disk-images-block.adoc
// * virt/virtual_machines/importing_vms/virt-importing-virtual-machine-images-datavolumes.adoc
// * virt/virtual_machines/cloning_vms/virt-cloning-vm-disk-to-new-block-storage-pvc.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-block-pvs_{context}"]
= About block persistent volumes
[role="_abstract"]
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes
do not have a file system and can provide performance benefits for
virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying `volumeMode: Block` in the
PV and persistent volume claim (PVC) specification.
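As an illustration, a minimal PVC that requests a raw block volume might look like the following sketch; the storage class name and size are placeholders:
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block_pvc_name>
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  storageClassName: <storage_class>
  resources:
    requests:
      storage: 10Gi
----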


@@ -1,19 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virtual_disks/virt-expanding-virtual-storage-with-blank-disk-images.adoc
// * virt/storage/virt-preparing-cdi-scratch-space.adoc
// * virt/storage/virt-enabling-user-permissions-to-clone-datavolumes.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-datavolumes_{context}"]
= About data volumes
[role="_abstract"]
`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC).
You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification.
[NOTE]
====
* VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the `dataVolumeTemplate` field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.
====
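The following sketch illustrates the `dataVolumeTemplate` approach described above, expressed as an entry in the `dataVolumeTemplates` list of the `VirtualMachine` spec. The HTTP source URL, disk names, and sizes are placeholders:
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  dataVolumeTemplates:
  - metadata:
      name: <datavolume_name>
    spec:
      source:
        http:
          url: "https://<server>/<image>.qcow2"
      storage:
        resources:
          requests:
            storage: 10Gi
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        dataVolume:
          name: <datavolume_name>
----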


@@ -1,32 +0,0 @@
// Module included in the following assemblies:
//
// * networking/k8s_nmstate/k8s-nmstate-observing-node-network-state.adoc
// * networking/k8s_nmstate/k8s-nmstate-updating-node-network-config.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-nmstate_{context}"]
= About nmstate
[role="_abstract"]
{VirtProductName} uses link:https://nmstate.github.io/[`nmstate`] to report on and configure the state of the node network. This makes it possible to modify network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster.
Node networking is monitored and updated by the following objects:
`NodeNetworkState`:: Reports the state of the network on that node.
`NodeNetworkConfigurationPolicy`:: Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster.
`NodeNetworkConfigurationEnactment`:: Reports the network policies enacted upon each node.
{VirtProductName} supports the use of the following nmstate interface types:
* Linux Bridge
* VLAN
* Bond
* Ethernet
[NOTE]
====
You cannot make configuration changes to the `br-ex` bridge or its underlying interfaces. As a workaround, use a secondary network interface connected to your host or switch.
====
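As a hedged illustration of the `NodeNetworkConfigurationPolicy` object, the following sketch creates a Linux bridge on all nodes; the policy name, bridge name, and port name are placeholders:
[source,yaml]
----
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: <bridge_policy_name>
spec:
  desiredState:
    interfaces:
    - name: br1
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: <node_nic>
----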


@@ -1,100 +0,0 @@
// Module included in the following assemblies:
//
// * virt/install/virt-specifying-nodes-for-virtualization-components.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-node-placement-virtualization-components_{context}"]
= About node placement for virtualization components
[role="_abstract"]
You can customize where {VirtProductName} deploys its components by applying node placement rules.
You might want to customize where {VirtProductName} deploys its components to ensure that:
* Virtual machines only deploy on nodes that are intended for virtualization workloads.
* Operators only deploy on infrastructure nodes.
* Certain nodes are unaffected by {VirtProductName}. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from {VirtProductName}.
[id="how-to-apply-node-placement-rules-virt-components"]
== How to apply node placement rules to virtualization components
You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.
* For the {VirtProductName} Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM `Subscription` object directly. Currently, you cannot configure node placement rules for the `Subscription` object by using the web console.
* For components that the {VirtProductName} Operators deploy, edit the `HyperConverged` object directly or configure it by using the web console during {VirtProductName} installation.
* For the hostpath provisioner, edit the `HostPathProvisioner` object directly or configure it by using the web console.
+
[WARNING]
====
You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.
====
Depending on the object, you can use one or more of the following rule types:
`nodeSelector`:: Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
`affinity`:: Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.
`tolerations`:: Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
[id="node-placement-olm-subscription_{context}"]
== Node placement in the OLM Subscription object
To specify the nodes where OLM deploys the {VirtProductName} Operators, edit the `Subscription` object during {VirtProductName} installation. You can include node placement rules in the `spec.config` field, as shown in the following example:
[source,yaml,subs="attributes+"]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: hco-operatorhub
namespace: {CNVNamespace}
spec:
source: {CNVSubscriptionSpecSource}
sourceNamespace: openshift-marketplace
name: {CNVSubscriptionSpecName}
startingCSV: kubevirt-hyperconverged-operator.v{HCOVersion}
channel: "stable"
config: <1>
----
<1> The `config` field supports `nodeSelector` and `tolerations`, but it does not support `affinity`.
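For example, to schedule the Operators on infrastructure nodes, you might populate the `config` field as shown in the following sketch. The infrastructure node label and taint are assumptions about your cluster:
[source,yaml]
----
spec:
  # ...
  config:
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
----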
[id="node-placement-hco_{context}"]
== Node placement in the HyperConverged object
To specify the nodes where {VirtProductName} deploys its components, you can include the `nodePlacement` object in the HyperConverged Cluster custom resource (CR) file that you create during {VirtProductName} installation. You can include `nodePlacement` under the `spec.infra` and `spec.workloads` fields, as shown in the following example:
[source,yaml,subs="attributes+"]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
namespace: {CNVNamespace}
spec:
infra:
nodePlacement: <1>
...
workloads:
nodePlacement:
...
----
<1> The `nodePlacement` fields support `nodeSelector`, `affinity`, and `tolerations` fields.
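For example, the following sketch schedules infrastructure components on nodes labeled as infrastructure nodes and workloads on nodes that carry a custom label; both labels are assumptions about your cluster:
[source,yaml]
----
spec:
  infra:
    nodePlacement:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
  workloads:
    nodePlacement:
      nodeSelector:
        vm-workloads: "true"
----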
[id="node-placement-hpp_{context}"]
== Node placement in the HostPathProvisioner object
You can configure node placement rules by specifying `nodeSelector`, `affinity`, or `tolerations` for the `spec.workload` field of the `HostPathProvisioner` object that you create when you install the hostpath provisioner. If you delete the `HostPathProvisioner` pod after you create the object, and you then want to delete a virtual machine (VM), you must first update the `spec.workload` field to another value and wait for the `HostPathProvisioner` pod to restart. You can then delete the VM from the node.
[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
name: hostpath-provisioner
spec:
imagePullPolicy: IfNotPresent
pathConfig:
path: "</path/to/backing/directory>"
useNamingPrefix: false
workload: <1>
----
<1> The `workload` field supports `nodeSelector`, `affinity`, and `tolerations` fields.


@@ -1,21 +0,0 @@
// Module included in the following assemblies:
//
// virt/virtual_machines/virtual_disks/virt-reusing-statically-provisioned-persistent-volumes.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-reclaiming-statically-provisioned-persistent-volumes_{context}"]
= About reclaiming statically provisioned persistent volumes
[role="_abstract"]
When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.
You can then re-use the PV configuration to create a PV with a different name.
Statically provisioned PVs must have a reclaim policy of `Retain` to be reclaimed. Otherwise, the PV enters a failed state when the PVC is unbound from the PV.
[IMPORTANT]
====
The `Recycle` reclaim policy is deprecated in {product-title} 4.
====
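For reference, a statically provisioned PV with the required `Retain` policy might look like the following sketch; the NFS backing store, path, and size are assumptions for illustration:
[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv_name>
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /<share_path>
    server: <nfs_server>
----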


@@ -1,14 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virtual_disks/virt-cloning-a-datavolume-using-smart-cloning.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-smart-cloning_{context}"]
= About smart-cloning
[role="_abstract"]
When a data volume is smart-cloned, a set of operations is performed in a specific order.
. A snapshot of the source persistent volume claim (PVC) is created.
. A PVC is created from the snapshot.
. The snapshot is deleted.


@@ -1,42 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virt-architecture.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-tekton-tasks-operator_{context}"]
= About the Tekton Tasks Operator
[role="_abstract"]
The Tekton Tasks Operator, `tekton-tasks-operator`, deploys example pipelines showing the usage of OpenShift Pipelines for virtual machines (VMs). It also deploys additional OpenShift Pipeline tasks that allow users to create VMs from templates, copy and modify templates, and create data volumes.
//image::cnv_components_tekton-tasks-operator.png[tekton-tasks-operator components]
.Tekton Tasks Operator components
[cols="1,1"]
|===
|*Component* |*Description*
|`deployment/create-vm-from-template`
| Creates a VM from a template.
|`deployment/copy-template`
| Copies a VM template.
|`deployment/modify-vm-template`
| Creates or removes a VM template.
|`deployment/modify-data-object`
| Creates or removes data volumes or data sources.
|`deployment/cleanup-vm`
| Runs a script or a command on a VM, then stops or deletes the VM afterward.
|`deployment/disk-virt-customize`
| Runs a `customize` script on a target persistent volume claim (PVC) using `virt-customize`.
|`deployment/disk-virt-sysprep`
| Runs a `sysprep` script on a target PVC by using `virt-sysprep`.
|`deployment/wait-for-vmi-status`
| Waits for a specific virtual machine instance (VMI) status, then fails or succeeds according to that status.
|===


@@ -1,110 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virt-accessing-vm-consoles.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-accessing-rdp-console_{context}"]
= Connecting to a Windows virtual machine with an RDP console
[role="_abstract"]
Create a Kubernetes `Service` object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client.
.Prerequisites
* A running Windows virtual machine with the QEMU guest agent installed. The `qemu-guest-agent` is included in the VirtIO drivers.
* An RDP client installed on your local machine.
* You have installed the {oc-first}.
.Procedure
. Edit the `VirtualMachine` manifest to add the label for service creation:
+
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: vm-ephemeral
namespace: example-namespace
spec:
runStrategy: Halted
template:
metadata:
labels:
special: key <1>
# ...
----
<1> Add the label `special: key` in the `spec.template.metadata.labels` section.
+
[NOTE]
====
Labels on a virtual machine are passed through to the pod. The `special: key` label must match the label in the `spec.selector` attribute of the `Service` manifest.
====
. Save the `VirtualMachine` manifest file to apply your changes.
. Create a `Service` manifest to expose the VM:
+
[source,yaml]
----
apiVersion: v1
kind: Service
metadata:
name: rdpservice <1>
namespace: example-namespace <2>
spec:
ports:
- targetPort: 3389 <3>
protocol: TCP
selector:
special: key <4>
type: NodePort <5>
# ...
----
<1> The name of the `Service` object.
<2> The namespace where the `Service` object resides. This must match the `metadata.namespace` field of the `VirtualMachine` manifest.
<3> The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest.
<4> The reference to the label that you added in the `spec.template.metadata.labels` stanza of the `VirtualMachine` manifest.
<5> The type of service.
. Save the `Service` manifest file.
. Create the service by running the following command:
+
[source,terminal]
----
$ oc create -f <service_name>.yaml
----
. Start the VM. If the VM is already running, restart it.
. Query the `Service` object to verify that it is available:
+
[source,terminal]
----
$ oc get service -n example-namespace
----
+
Example output for `NodePort` service:
+
[source,terminal]
----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rdpservice NodePort 172.30.232.73 <none> 3389:30000/TCP 5m
----
. Run the following command to obtain the IP address for the node:
+
[source,terminal]
----
$ oc get node <node_name> -o wide
----
+
Example output:
+
[source,terminal]
----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
node01 Ready worker 6d22h v1.24.0 192.168.55.101 <none>
----
. Specify the node IP address and the assigned port in your preferred RDP client.
. Enter the user name and password to connect to the Windows virtual machine.
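For example, with the node IP address and node port from the previous output, an RDP client such as FreeRDP (an assumption; any RDP client works) could connect as follows; the Windows user name is a placeholder:
[source,terminal]
----
$ xfreerdp /v:192.168.55.101:30000 /u:<windows_user>
----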


@@ -1,25 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virt-accessing-vm-consoles.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-accessing-serial-console_{context}"]
= Accessing the serial console of a virtual machine instance
[role="_abstract"]
The `virtctl console` command opens a serial console to the specified virtual
machine instance.
.Prerequisites
* The `virt-viewer` package must be installed.
* The virtual machine instance you want to access must be running.
.Procedure
* Connect to the serial console with `virtctl`:
+
[source,terminal]
----
$ virtctl console <VMI>
----


@@ -1,42 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virt-accessing-vm-consoles.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-accessing-vnc-console_{context}"]
= Accessing the graphical console of a virtual machine instance with VNC
[role="_abstract"]
The `virtctl` client utility can use the `remote-viewer` function to open a
graphical console to a running virtual machine instance. This capability is
included in the `virt-viewer` package.
.Prerequisites
* The `virt-viewer` package must be installed.
* The virtual machine instance you want to access must be running.
[NOTE]
====
If you use `virtctl` via SSH on a remote machine, you must
forward the X session to your machine.
====
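For example, you might forward the X session by connecting to the remote machine with `ssh -X` before running `virtctl`; the user and host names are placeholders:
[source,terminal]
----
$ ssh -X <user>@<remote_machine>
----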
.Procedure
. Connect to the graphical interface with the `virtctl` utility:
+
[source,terminal]
----
$ virtctl vnc <VMI>
----
. If the command fails, try using the `-v` flag to collect
troubleshooting information:
+
[source,terminal]
----
$ virtctl vnc <VMI> -v 4
----


@@ -1,54 +0,0 @@
// Module included in the following assemblies:
//
:_mod-docs-content-type: PROCEDURE
[id="virt-adding-a-boot-source-web_{context}"]
= Adding a boot source to a template
[role="_abstract"]
You can add a boot source or operating system image to a virtual machine (VM) template. When templates are configured with an operating system image, they are labeled *Source available* on the *Catalog* page. After you add a boot source to a template, you can create a VM from the template.
There are four methods for selecting and adding a boot source in the web console:
* *Upload local file (creates PVC)*
* *URL (creates PVC)*
* *Clone (creates PVC)*
* *Registry (creates PVC)*
.Prerequisites
* You must be logged in as a user with the `os-images.kubevirt.io:edit` RBAC role or as an administrator.
+
You do not need special privileges to create a VM from a template with an operating system image added.
* To upload a local file, the boot source file must exist on your local machine.
* To download from a URL endpoint, you must have access to the web server with the boot source. For example, the Red Hat Enterprise Linux web page with images.
* To clone an existing PVC, access to the project with a PVC is required.
* To download a boot source from a registry, access to the container registry is required.
.Procedure
. In the {product-title} console, click *Virtualization* -> *Catalog* from the side menu.
. Click the options menu beside a template and select *Edit boot source*.
. Click *Add disk*.
. In the *Add disk* window, select *Use this disk as a boot source*.
. Enter the disk name and select a *Source*, for example, *Blank (creates PVC)*.
. Enter a value for *Persistent Volume Claim size* to specify the PVC size that is adequate for the uncompressed image and any additional space that is required.
. Select a *Type*, for example, *Disk*.
. Optional: Click *Storage class* and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs.
+
[NOTE]
====
Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster's default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the previous default storage class.
====
. Optional: Clear *Apply optimized StorageProfile settings* to edit the access mode or volume mode.
. Select the appropriate method to save your boot source:
.. Click *Save and upload* if you uploaded a local file.
.. Click *Save and import* if you imported content from a URL or the registry.
.. Click *Save and clone* if you cloned an existing PVC.
.Result
Your custom virtual machine template with a boot source is listed on the *Catalog* page. You can use this template to create a virtual machine.


@@ -1,36 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/importing_vms/virt-tls-certificates-for-dv-imports.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-adding-tls-certificates-for-authenticating-dv-imports_{context}"]
= Adding TLS certificates for authenticating data volume imports
[role="_abstract"]
TLS certificates for registry or HTTPS endpoints must be added to a config map
to import data from these sources. This config map must be present
in the namespace of the destination data volume.
Create the config map by referencing the relative file path for the TLS certificate.
.Prerequisites
* You have installed the {oc-first}.
.Procedure
. Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace.
+
[source,terminal]
----
$ oc get ns
----
. Create the config map:
+
[source,terminal]
----
$ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>
----
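A data volume can then reference the config map when importing from the endpoint. The following sketch assumes an HTTPS source and uses the `certConfigMap` field; the URL, data volume name, and size are placeholders:
[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume_name>
spec:
  source:
    http:
      url: "https://<server>/<image>.qcow2"
      certConfigMap: <configmap-name>
  storage:
    resources:
      requests:
        storage: 10Gi
----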


@@ -1,168 +0,0 @@
// Module included in the following assemblies:
//
// * virt//support/monitoring/virt-running-cluster-checkups.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-building-real-time-container-disk-image_{context}"]
= Building a container disk image for {op-system-base} virtual machines
[role="_abstract"]
You can build a custom {op-system-base-full} 8 OS image in `qcow2` format and use it to create a container disk image.
You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmUnderTestContainerDiskImage` attribute of the real-time checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The _image builder VM_ is a {op-system-base} 8 VM that can be used to build custom {op-system-base} images.
.Prerequisites
* The image builder VM must run {op-system-base} 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the `/var` directory.
* You have installed the image builder tool and its CLI (`composer-cli`) on the VM.
* You have installed the `virt-customize` tool by using the following command:
+
[source,terminal]
----
# dnf install libguestfs-tools
----
* You have installed the Podman CLI tool (`podman`).
.Procedure
. Verify that you can build a {op-system-base} 8.7 image:
+
[source,terminal]
----
# composer-cli distros list
----
+
[NOTE]
====
To run the `composer-cli` commands as non-root, add your user to the `weldr` or `root` groups:
[source,terminal]
----
# usermod -a -G weldr user
----
[source,terminal]
----
$ newgrp weldr
----
====
. Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
+
[source,terminal]
----
$ cat << EOF > real-time-vm.toml
name = "realtime_image"
description = "Image to use with the real-time checkup"
version = "0.0.1"
distro = "rhel-87"
[[customizations.user]]
name = "root"
password = "redhat"
[[packages]]
name = "real-time"
[[packages]]
name = "real-time-tools"
[[packages]]
name = "driverctl"
[[packages]]
name = "tuned-profiles-cpu-partitioning"
[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=1"
[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
----
. Push the blueprint file to the image builder tool by running the following command:
+
[source,terminal]
----
# composer-cli blueprints push real-time-vm.toml
----
. Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
+
[source,terminal]
----
# composer-cli compose start realtime_image qcow2
----
. Wait for the compose process to complete. The compose status must show `FINISHED` before you can continue to the next step.
+
[source,terminal]
----
# composer-cli compose status
----
. Enter the following command to download the `qcow2` image file by specifying its UUID:
+
[source,terminal]
----
# composer-cli compose image <UUID>
----
. Create the customization script by running the following command:
+
[source,terminal]
----
$ cat <<EOF >customize-vm
#!/bin/bash
# Setup hugepages mount
mkdir -p /mnt/huge
echo "hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0" >> /etc/fstab
# Create vfio-noiommu.conf
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
# Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration
sed -i '/^BLACKLIST_RPC=/ { s/guest-exec-status//; s/guest-exec//g }' /etc/sysconfig/qemu-ga
sed -i '/^BLACKLIST_RPC=/ { s/,\+/,/g; s/^,\|,$//g }' /etc/sysconfig/qemu-ga
EOF
----
. Use the `virt-customize` tool to customize the image generated by the image builder tool:
+
[source,terminal]
----
$ virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel
----
. To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
+
[source,terminal]
----
$ cat << EOF > Dockerfile
FROM scratch
COPY --chown=107:107 <UUID>-disk.qcow2 /disk/
EOF
----
+
where:
<UUID>-disk.qcow2:: Specifies the name of the custom image in `qcow2` format.
. Build and tag the container by running the following command:
+
[source,terminal]
----
$ podman build . -t real-time-rhel:latest
----
. Push the container disk image to a registry that is accessible from your cluster by running the following command:
+
[source,terminal]
----
$ podman push real-time-rhel:latest
----
. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the real-time checkup config map.
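As a hedged illustration, the checkup config map entry might look like the following sketch; the config map name and registry path are placeholders, and only the `spec.param.vmUnderTestContainerDiskImage` key comes from this module:
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: <checkup_configmap_name>
data:
  spec.param.vmUnderTestContainerDiskImage: "<registry>/real-time-rhel:latest"
----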


@@ -1,72 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/virtual_disks/virt-cloning-a-datavolume-using-smart-cloning.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-cloning-a-datavolume_{context}"]
= Smart-cloning a PVC by using the CLI
[role="_abstract"]
You can smart-clone a persistent volume claim (PVC) by using the command line to create a `DataVolume` object.
.Prerequisites
* You have installed the {oc-first}.
* Your storage provider must support snapshots.
* The source and target PVCs must have the same storage provider and volume mode.
* The value of the `driver` key of the `VolumeSnapshotClass` object must match the value of the `provisioner` key of the `StorageClass` object as shown in the following example:
+
Example `VolumeSnapshotClass` object:
+
[source,yaml]
----
kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
driver: openshift-storage.rbd.csi.ceph.com
# ...
----
+
Example `StorageClass` object:
+
[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
# ...
provisioner: openshift-storage.rbd.csi.ceph.com
----
.Procedure
. Create a YAML file for a `DataVolume` object as shown in the following example:
+
[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: <datavolume> <1>
spec:
source:
pvc:
namespace: "<source_namespace>" <2>
name: "<my_vm_disk>" <3>
storage:
storageClassName: <storage_class> <4>
----
<1> Specify the name of the new data volume.
<2> Specify the namespace of the source PVC.
<3> Specify the name of the source PVC.
<4> Optional: If the storage class is not specified, the default storage class is used.
. Create the data volume by running the following command:
+
[source,terminal]
----
$ oc create -f <datavolume>.yaml
----
+
[NOTE]
====
Data volumes prevent a virtual machine from starting before the PVC is prepared. You can create a virtual machine that references the new data volume while the PVC is being cloned.
====


@@ -1,57 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/cloning_vms/virt-cloning-vm-disk-into-new-datavolume.adoc
// * virt/virtual_machines/cloning_vms/virt-cloning-vm-disk-to-new-block-storage-pvc.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-cloning-pvc-of-vm-disk-into-new-datavolume_{context}"]
= Cloning the PVC of a VM disk into a new data volume
[role="_abstract"]
You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk into a new data volume. The new data volume can then be used for a new virtual machine.
[NOTE]
====
When a data volume is created independently of a VM, the lifecycle of the data volume is independent of the VM. If the VM is deleted, neither the data volume nor its associated PVC is deleted.
====
.Prerequisites
* The VM must be stopped.
* You have installed the {oc-first}.
.Procedure
. Create a YAML file for a data volume as shown in the following example:
+
[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: <datavolume> <1>
spec:
source:
pvc:
namespace: "<source_namespace>" <2>
name: "<my_vm_disk>" <3>
storage: {}
----
<1> Specify the name of the new data volume.
<2> Specify the namespace of the source PVC.
<3> Specify the name of the source PVC.
. Create the data volume by running the following command:
+
[source,terminal]
----
$ oc create -f <datavolume>.yaml
----
+
[NOTE]
====
Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the
PVC is being cloned.
====


@@ -1,17 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/creating_vm/virt-creating-vms-from-templates.adoc
:_mod-docs-content-type: REFERENCE
[id="virt-cloud-init-fields-web_{context}"]
= Cloud-init fields
|===
|Name | Description
|Authorized SSH Keys
|The user's public key that is copied to *_~/.ssh/authorized_keys_* on the virtual machine.
|Custom script
|Replaces other options with a field in which you paste a custom cloud-init script.
|===
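For example, a custom cloud-init script pasted into the *Custom script* field might look like the following sketch; the user name, password, and public key are placeholders:
[source,yaml]
----
#cloud-config
user: cloud-user
password: <password>
chpasswd:
  expire: false
ssh_authorized_keys:
- <public_ssh_key>
----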


@@ -1,15 +0,0 @@
// Module included in the following assemblies:
//
// * virt/advanced_vm_management/virt-vm-control-plane-tuning.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-configuring-rate-limiters_{context}"]
= Configuring rate limiters
[role="_abstract"]
To compensate for large-scale burst rates, scale the `QPS` (Queries per Second) and `burst` rate limits to process a higher rate of client requests or API calls concurrently for each component.
.Procedure
* Apply a `jsonpatch` annotation to adjust the `kubevirt-hyperconverged` cluster configuration by using `tuningPolicy` to apply scalable tuning parameters. This tuning policy automatically adjusts all virtualization components (`webhook`, `api`, `controller`, `handler`) to match the `QPS` and `burst` values specified by the profile.
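The following commands are a heavily hedged sketch of this approach; they assume that the `hco.kubevirt.io/tuningPolicy` annotation and the `spec.tuningPolicy` field accept these values in your version, and the `QPS` and `burst` numbers are placeholders:
[source,terminal,subs="attributes+"]
----
$ oc annotate hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  hco.kubevirt.io/tuningPolicy='{"qps":30,"burst":60}'
----
[source,terminal,subs="attributes+"]
----
$ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \
  --type json -p '[{"op": "add", "path": "/spec/tuningPolicy", "value": "annotation"}]'
----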


@@ -11,7 +11,4 @@ toc::[]
* The `highBurst` profile, which uses fixed `QPS` and `burst` rates, to create hundreds of virtual machines (VMs) in one batch
* Migration setting adjustment based on workload type
// this module commented out until jsonpatch is supported or this becomes a TP or DP
// include::modules/virt-configuring-rate-limiters.adoc[leveloffset=+1]
include::modules/virt-configuring-highburst-profile.adoc[leveloffset=+1]
include::modules/virt-configuring-highburst-profile.adoc[leveloffset=+1]


@@ -19,8 +19,7 @@ You can configure a dynamic IP address when you create a VM by using the command
The IP address is provisioned with cloud-init.
// Commenting this out until bug is fixed. https://bugzilla.redhat.com/show_bug.cgi?id=2217541
// include::modules/virt-configuring-ip-vm-web.adoc[leveloffset=+2]
include::modules/virt-configuring-ip-vm-web.adoc[leveloffset=+2]
include::modules/virt-configuring-ip-vm-cli.adoc[leveloffset=+2]
@@ -38,4 +37,4 @@ include::modules/virt-viewing-vmi-ip-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources_virt-configuring-viewing-ips-for-vms"]
== Additional resources
* xref:../../virt/managing_vms/virt-installing-qemu-guest-agent.adoc#virt-installing-qemu-guest-agent[Installing the QEMU guest agent]
* xref:../../virt/managing_vms/virt-installing-qemu-guest-agent.adoc#virt-installing-qemu-guest-agent[Installing the QEMU guest agent]