diff --git a/modules/virt-about-creating-storage-classes.adoc b/modules/virt-about-creating-storage-classes.adoc
new file mode 100644
index 0000000000..27545f0282
--- /dev/null
+++ b/modules/virt-about-creating-storage-classes.adoc
@@ -0,0 +1,18 @@
+// Module included in the following assemblies:
+//
+// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc
+
+:_content-type: CONCEPT
+[id="virt-about-creating-storage-classes_{context}"]
+= About creating storage classes
+
+When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it.
+
+To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the `storagePools` stanza.
+
+[NOTE]
+====
+Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
+
+To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By creating a `StorageClass` object with the `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV are delayed until a pod that uses the PVC is created.
+==== \ No newline at end of file diff --git a/modules/virt-about-hostpath-provisioner.adoc b/modules/virt-about-hostpath-provisioner.adoc index 1e4337dc6b..e7ed67280a 100644 --- a/modules/virt-about-hostpath-provisioner.adoc +++ b/modules/virt-about-hostpath-provisioner.adoc @@ -4,23 +4,25 @@ :_content-type: CONCEPT [id="virt-about-hostpath-provisioner_{context}"] -= About the hostpath provisioner (HPP) += About the hostpath provisioner -When you install the {VirtProductName} Operator, the Hostpath Provisioner Operator is automatically installed. The HPP is a local storage provisioner designed for {VirtProductName} that is created by the Hostpath Provisioner Operator. To use the HPP, you must create a HPP custom resource. +When you install the {VirtProductName} Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP is a local storage provisioner designed for {VirtProductName} that is created by the Hostpath Provisioner Operator. To use the HPP, you must create an HPP custom resource (CR). [IMPORTANT] ==== -In {VirtProductName} 4.10, the HPP Operator configures the Kubernetes CSI driver. The Operator also recognizes the existing (legacy) format of the custom resource. +In {VirtProductName} 4.10, the HPP Operator configures the Kubernetes CSI driver. The Operator also recognizes the existing (legacy) format of the HPP CR. -The legacy HPP and the CSI host path driver are supported in parallel for a number of releases. However, at some point, the legacy HPP will no longer be supported. If you use the HPP, plan to create a storage class for the CSI driver as part of your migration strategy. +The legacy HPP and the Container Storage Interface (CSI) driver are supported in parallel for a number of releases. However, at some point, the legacy HPP will no longer be supported. If you use the HPP, plan to create a storage class for the CSI driver as part of your migration strategy. 
==== If you upgrade to {VirtProductName} version 4.10 on an existing cluster, the HPP Operator is upgraded and the system performs the following actions: * The CSI driver is installed. -* The CSI driver is configured with the contents of your legacy custom resource. +* The CSI driver is configured with the contents of your legacy HPP CR. If you install {VirtProductName} version 4.10 on a new cluster, you must perform the following actions: -* Create the HPP custom resource including a `storagePools` stanza in the HPP custom resource. +* Create an HPP CR with a basic storage pool. * Create a storage class for the CSI driver. + +Optional: You can create a storage pool with a PVC template for multiple HPP volumes. diff --git a/modules/virt-about-storage-pools-pvc-templates.adoc b/modules/virt-about-storage-pools-pvc-templates.adoc new file mode 100644 index 0000000000..de53a28d87 --- /dev/null +++ b/modules/virt-about-storage-pools-pvc-templates.adoc @@ -0,0 +1,35 @@ +// Module included in the following assemblies: +// +// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc + +:_content-type: CONCEPT +[id="virt-about-storage-pools-pvc-templates_{context}"] += About storage pools created with PVC templates + +If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR). + +A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation. 
+ +The PVC template is based on the `spec` stanza of the `PersistentVolumeClaim` object: + +.Example `PersistentVolumeClaim` object +[source,yaml] +---- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: iso-pvc +spec: + volumeMode: Block <1> + storageClassName: my-storage-class + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 5Gi +---- +<1> This value is only required for block volume mode PVs. + +You define a storage pool using a `pvcTemplate` specification in the HPP CR. The Operator creates a PVC from the `pvcTemplate` specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes. + +You can combine basic storage pools with storage pools created from PVC templates. diff --git a/modules/virt-creating-custom-resources-hpp.adoc b/modules/virt-creating-custom-resources-hpp.adoc deleted file mode 100644 index 3d3144a117..0000000000 --- a/modules/virt-creating-custom-resources-hpp.adoc +++ /dev/null @@ -1,40 +0,0 @@ -// Module included in the following assemblies: -// -// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc - -:_content-type: PROCEDURE -[id="virt-creating-custom-resources-hpp_{context}"] -= Create the HPP custom resource with a storage pool - -Storage pools allow you to specify the name and path that are used by the CSI driver. - -.Procedure - -. Create a YAML file for the HPP custom resource with a `storagePools` stanza in the YAML. For example: -+ -[source,terminal] ----- -$ touch hostpathprovisioner_cr.yaml ----- - -. Edit the file. 
For example: -+ -[source,yaml] ----- -apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 -kind: HostPathProvisioner -metadata: - name: hostpath-provisioner -spec: - imagePullPolicy: IfNotPresent - storagePools: <1> -   - name: -     path: "" <2> - workload: -   nodeSelector: -     kubernetes.io/os: linux ----- -<1> The `storagePools` stanza is an array to which you can add multiple entries. -<2> Create directories under this node path. Read/write access is required. Ensure that the node-level directory (`/var/myvolumes`) is not on the same partition as the operating system. If it is on the same partition as the operating system, users can potentially fill the operating system partition and impact performance or cause the node to become unstable or unusable. - -. Save the file and exit. diff --git a/modules/virt-creating-hpp-basic-storage-pool.adoc b/modules/virt-creating-hpp-basic-storage-pool.adoc new file mode 100644 index 0000000000..4d70f953cc --- /dev/null +++ b/modules/virt-creating-hpp-basic-storage-pool.adoc @@ -0,0 +1,45 @@ +// Module included in the following assemblies: +// +// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc + +:_content-type: PROCEDURE +[id="virt-creating-hpp-basic-storage-pool_{context}"] += Creating a hostpath provisioner with a basic storage pool + +You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a `storagePools` stanza. The storage pool specifies the name and path used by the CSI driver. + +.Prerequisites + +* The directories specified in `spec.storagePools.path` must have read/write access. +* The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. + +.Procedure + +. 
Create an `hpp_cr.yaml` file with a `storagePools` stanza as in the following example:
+
+[source,yaml]
+----
+apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
+kind: HostPathProvisioner
+metadata:
+  name: hostpath-provisioner
+spec:
+  imagePullPolicy: IfNotPresent
+  storagePools: <1>
+  - name: any_name
+    path: "/var/myvolumes" <2>
+  workload:
+    nodeSelector:
+      kubernetes.io/os: linux
+----
+<1> The `storagePools` stanza is an array to which you can add multiple entries.
+<2> Specify the storage pool directories under this node path.
+
+. Save the file and exit.
+
+. Create the HPP by running the following command:
++
+[source,terminal]
+----
+$ oc create -f hpp_cr.yaml
+----
diff --git a/modules/virt-creating-single-pvc-template-storage-pool.adoc
deleted file mode 100644
index a0e2e5a24e..0000000000
--- a/modules/virt-creating-single-pvc-template-storage-pool.adoc
+++ /dev/null
@@ -1,78 +0,0 @@
-// Module included in the following assemblies:
-//
-// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc
-
-:_content-type: PROCEDURE
-[id="virt-creating-single-pvc-template-storage-pool_{context}"]
-= Creating a storage pool using a pvcTemplate specification in a host path provisioner (HPP) custom resource.
-
-If you have a single large persistent volume (PV) on your node, you might want to virtually divide the volume and use one partition to store only the HPP volumes. By defining a storage pool using a `pvcTemplate` specification in the HPP custom resource, you can virtually split the PV into multiple smaller volumes, providing more flexibility in data allocation.
-
-The `pvcTemplate` matches the `spec` portion of a persistent volume claim (PVC).
For example: - -[source,yaml] ----- -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: "iso-pvc" - labels: - app: containerized-data-importer - annotations: - cdi.kubevirt.io/storage.import.endpoint: "http://cdi-file-host.cdi:80/tinyCore.iso.tar" -spec: <1> - volumeMode: Block -  storageClassName: -  accessModes: -    - ReadWriteOnce -       resources: -         requests: -           storage: 5Gi ----- -<1> A `pvcTemplate` is the `spec` (specification) section of a PVC - -The Operator creates a PVC from the PVC template for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes. - -You can create any combination of storage pools. You can combine standard storage pools with storage pools that use PVC templates in the `storagePools` stanza. - -.Procedure - -. Create a YAML file for the CSI custom resource specifying a single `pvcTemplate` storage pool. For example: -+ -[source,terminal] ----- -$ touch hostpathprovisioner_cr_pvc.yaml ----- - -. Edit the file. For example: -+ -[source,yaml] ----- -apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 -kind: HostPathProvisioner -metadata: - name: hostpath-provisioner -spec: - imagePullPolicy: IfNotPresent -storagePools: <1> -   - name: -     path: "" <2> -  pvcTemplate: -       volumeMode: Block <3> -       storageClassName: <4> -       accessModes: -       - ReadWriteOnce -       resources: -         requests: -           storage: 5Gi <5> -   workload: -   nodeSelector: -     kubernetes.io/os: linux ----- -<1> The `storagePools` stanza is an array to which you can add multiple entries. -<2> Create directories under this node path. Read/write access is required. Ensure that the node-level directory (`/var/myvolumes`) is not on the same partition as the operating system. 
If it is, users of the volumes can potentially fill the operating system partition and cause the node to impact performance, become unstable, or become unusable. -<3> `volumeMode` parameter is optional and can be either `Block` or `Filesystem` but must match the provisioned volume format, if used. The default value is `Filesystem`. If the `volumeMode` is `block`, the mounting pod creates an XFS file system on the block volume before mounting it. -<4> If the `storageClassName` parameter is omitted, the default storage class is used to create PVCs. If you omit `storageClassName`, ensure that the HPP storage class is not the default storage class. -<5> You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request. - -. Save the file and exit. diff --git a/modules/virt-creating-storage-class-csi-driver.adoc b/modules/virt-creating-storage-class-csi-driver.adoc new file mode 100644 index 0000000000..000d45bc86 --- /dev/null +++ b/modules/virt-creating-storage-class-csi-driver.adoc @@ -0,0 +1,43 @@ +// Module included in the following assemblies: +// +// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc + +:_content-type: PROCEDURE +[id="virt-creating-storage-class-csi-driver_{context}"] += Creating a storage class for the CSI driver with the storagePools stanza + +You create a storage class custom resource (CR) for the hostpath provisioner (HPP) CSI driver. + +.Prerequisites + +* You must have {VirtProductName} 4.10 or later. + +.Procedure + +. 
Create a `storageclass_csi.yaml` file to define the storage class: ++ +[source,yaml] +---- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: hostpath-csi <1> +provisioner: kubevirt.io.hostpath-provisioner +reclaimPolicy: Delete <2> +volumeBindingMode: WaitForFirstConsumer <3> +parameters: + storagePool: my-storage-pool <4> +---- +<1> Assign any meaningful name to the storage class. In this example, `csi` is used to specify that the class is using the CSI provisioner instead of the legacy provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy. +<2> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the default value is `Delete`. +<3> The `volumeBindingMode` parameter determines when dynamic provisioning and volume binding occur. Specify `WaitForFirstConsumer` to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. +<4> Specify the name of the storage pool defined in the HPP CR. + +. Save the file and exit. + +. 
Create the `StorageClass` object by running the following command: ++ +[source,terminal] +---- +$ oc create -f storageclass_csi.yaml +---- diff --git a/modules/virt-creating-storage-class-legacy-hpp.adoc b/modules/virt-creating-storage-class-legacy-hpp.adoc new file mode 100644 index 0000000000..297347d6d6 --- /dev/null +++ b/modules/virt-creating-storage-class-legacy-hpp.adoc @@ -0,0 +1,40 @@ +// Module included in the following assemblies: +// +// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc + +:_content-type: PROCEDURE +[id="virt-creating-storage-class-legacy-hpp_{context}"] += Creating a storage class for the legacy hostpath provisioner + +You create a storage class for the legacy hostpath provisioner (HPP) by creating a `StorageClass` object without the `storagePool` parameter. + +.Procedure + +. Create a `storageclass.yaml` file to define the storage class: ++ +[source,yaml] +---- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: hostpath-provisioner +provisioner: kubevirt.io/hostpath-provisioner +reclaimPolicy: Delete <1> +volumeBindingMode: WaitForFirstConsumer <2> +---- +<1> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the storage class defaults to `Delete`. +<2> The `volumeBindingMode` value determines when dynamic provisioning and volume binding occur. Specify the `WaitForFirstConsumer` value to delay the binding and provisioning of a persistent volume until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. + +. Save the file and exit. + +. 
Create the `StorageClass` object by running the following command: ++ +[source,terminal] +---- +$ oc create -f storageclass.yaml +---- + +[role="_additional-resources"] +.Additional resources + +* link:https://kubernetes.io/docs/concepts/storage/storage-classes/[Storage classes] diff --git a/modules/virt-creating-storage-class.adoc b/modules/virt-creating-storage-class.adoc deleted file mode 100644 index 6ef456dda0..0000000000 --- a/modules/virt-creating-storage-class.adoc +++ /dev/null @@ -1,108 +0,0 @@ -// Module included in the following assemblies: -// -// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc - -:_content-type: PROCEDURE -[id="virt-creating-storage-class_{context}"] -= Creating a storage class - -When you create a storage class, you set parameters that affect the -dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it. - -In order to use the host path provisioner (HPP) you must create an associated storage class for the CSI driver with the `storagePools` stanza. - -[NOTE] -==== -Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. - -To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using the `StorageClass` value with `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC. -==== - -[id="virt-creating-storage-class-csi_{context}"] -== Creating a storage class for the CSI driver with the storagePools stanza - -Use this procedure to create a storage class for use with the HPP CSI driver implementation. 
You must create this storage class to use HPP in {VirtProductName} 4.10 and later. - -.Procedure - -. Create a YAML file for defining the storage class. For example: -+ -[source,terminal] ----- -$ touch .yaml ----- - -. Edit the file. For example: -+ -[source,yaml] ----- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: hostpath-csi <1> -provisioner: kubevirt.io.hostpath-provisioner <2> -reclaimPolicy: Delete <3> -volumeBindingMode: WaitForFirstConsumer <4> -parameters: - storagePool: <5> ----- -<1> Assign any meaningful name to the storage class. In this example, `csi` is used to specify that the class is using the CSI provisioner instead of the legacy provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy. -<2> The legacy provisioner uses `kubevirt.io/hostpath-provisioner`. The CSI driver uses `kubevirt.io.hostpath-provisioner`. -<3> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you -do not specify a value, the storage class defaults to `Delete`. -<4> The `volumeBindingMode` parameter determines when dynamic provisioning and volume binding occur. Specify `WaitForFirstConsumer` to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. -<5> `` must match the name of the storage pool, which you define in the HPP custom resource. - -. Save the file and exit. - -. Create the `StorageClass` object: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- - -[id="virt-creating-storage-class-legacy-hpp_{context}"] -== Creating a storage class for the legacy hostpath provisioner - -Use this procedure to create a storage class for the legacy hostpath provisioner (HPP). You do not need to explicitly add a `storagePool` parameter. - -.Procedure - -. Create a YAML file for defining the storage class. 
For example: -+ -[source,terminal] ----- -$ touch storageclass.yaml ----- - -. Edit the file. For example: -+ -[source,yaml] ----- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: hostpath-provisioner <1> -provisioner: kubevirt.io/hostpath-provisioner -reclaimPolicy: Delete <2> -volumeBindingMode: WaitForFirstConsumer <3> ----- -<1> Assign any meaningful name to the storage class. In this example, `csi` is used to specify that the class is using the CSI provisioner, instead of the legacy provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy. -<2> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you -do not specify a value, the storage class defaults to `Delete`. -<3> The `volumeBindingMode` value determines when dynamic provisioning and volume binding occur. Specify the `WaitForFirstConsumer` value to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. - -. Save the file and exit. - -. 
Create the `StorageClass` object: -+ -[source,terminal] ----- -$ oc create -f storageclass.yaml ----- - -[role="_additional-resources"] -.Additional resources - -* link:https://kubernetes.io/docs/concepts/storage/storage-classes/[Storage classes] diff --git a/modules/virt-creating-storage-pool-pvc-template.adoc b/modules/virt-creating-storage-pool-pvc-template.adoc new file mode 100644 index 0000000000..178beb6a0b --- /dev/null +++ b/modules/virt-creating-storage-pool-pvc-template.adoc @@ -0,0 +1,56 @@ +// Module included in the following assemblies: +// +// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc + +:_content-type: PROCEDURE +[id="virt-creating-storage-pool-pvc-template_{context}"] += Creating a storage pool with a PVC template + +You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR). + +.Prerequisites + +* The directories specified in `spec.storagePools.path` must have read/write access. +* The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. + +.Procedure + +. 
Create an `hpp_pvc_template_pool.yaml` file for the HPP CR that specifies a persistent volume claim (PVC) template in the `storagePools` stanza according to the following example:
+
+[source,yaml]
+----
+apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
+kind: HostPathProvisioner
+metadata:
+  name: hostpath-provisioner
+spec:
+  imagePullPolicy: IfNotPresent
+  storagePools: <1>
+  - name: my-storage-pool
+    path: "/var/myvolumes" <2>
+    pvcTemplate:
+      volumeMode: Block <3>
+      storageClassName: my-storage-class <4>
+      accessModes:
+      - ReadWriteOnce
+      resources:
+        requests:
+          storage: 5Gi <5>
+  workload:
+    nodeSelector:
+      kubernetes.io/os: linux
+----
+<1> The `storagePools` stanza is an array that can contain both basic and PVC template storage pools.
+<2> Specify the storage pool directories under this node path.
+<3> Optional: The `volumeMode` parameter can be either `Block` or `Filesystem` as long as it matches the provisioned volume format. If no value is specified, the default is `Filesystem`. If the `volumeMode` is `Block`, the mounting pod creates an XFS file system on the block volume before mounting it.
+<4> If the `storageClassName` parameter is omitted, the default storage class is used to create PVCs. If you omit `storageClassName`, ensure that the HPP storage class is not the default storage class.
+<5> You can specify statically or dynamically provisioned storage. In either case, ensure that the requested storage size is appropriate for the volume that you want to virtually divide. Otherwise, the PVC cannot be bound to the large PV. If the storage class uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
+
+. Save the file and exit.
+
+. 
Create the HPP with a storage pool by running the following command: ++ +[source,terminal] +---- +$ oc create -f hpp_pvc_template_pool.yaml +---- diff --git a/modules/virt-importing-vm-datavolume.adoc b/modules/virt-importing-vm-datavolume.adoc index b433532f3a..720bac2f97 100644 --- a/modules/virt-importing-vm-datavolume.adoc +++ b/modules/virt-importing-vm-datavolume.adoc @@ -106,8 +106,8 @@ status: {} <1> Specify the name of the virtual machine. <2> Specify the name of the data volume. <3> Specify `http` for an HTTP or HTTPS endpoint. Specify `registry` for a container disk image imported from a registry. -<4> The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is `url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"`. -<5> Required if you created a `Secret` for the data source. +<4> Specify the URL or registry endpoint of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is `url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"`. +<5> Specify the `Secret` name if you created a `Secret` for the data source. <6> Optional: Specify a CA certificate config map. ==== diff --git a/modules/virt-importing-vm-to-block-pv.adoc b/modules/virt-importing-vm-to-block-pv.adoc index 287655e93d..cae7fdf87b 100644 --- a/modules/virt-importing-vm-to-block-pv.adoc +++ b/modules/virt-importing-vm-to-block-pv.adoc @@ -64,7 +64,7 @@ spec: <1> Specify the name of the data volume. <2> Optional: Set the storage class or omit it to accept the cluster default. <3> Specify the HTTP or HTTPS URL of the image to import. -<4> Required if you created a `Secret` for the data source. +<4> Specify the `Secret` name if you created a `Secret` for the data source. 
<5> The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify `Block`. . Create the data volume to import the virtual machine image: diff --git a/virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc b/virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc index 2b34c142f3..ff51878d59 100644 --- a/virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc +++ b/virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc @@ -6,17 +6,21 @@ include::_attributes/common-attributes.adoc[] toc::[] -Configure storage for your virtual machines. When configuring local storage, use the hostpath provisioner (HPP). +You can configure local storage for virtual machines by using the hostpath provisioner (HPP). include::modules/virt-about-hostpath-provisioner.adoc[leveloffset=+1] -include::modules/virt-creating-custom-resources-hpp.adoc[leveloffset=+1] +include::modules/virt-creating-hpp-basic-storage-pool.adoc[leveloffset=+1] -include::modules/virt-creating-storage-class.adoc[leveloffset=+1] +include::modules/virt-about-creating-storage-classes.adoc[leveloffset=+1] -In addition to configuring a basic storage pool for use with the HPP, you have the option of creating single storage pools with the `pvcTemplate` specification as well as multiple storage pools. 
+include::modules/virt-creating-storage-class-csi-driver.adoc[leveloffset=+2] -include::modules/virt-creating-single-pvc-template-storage-pool.adoc[leveloffset=+1] +include::modules/virt-creating-storage-class-legacy-hpp.adoc[leveloffset=+2] + +include::modules/virt-about-storage-pools-pvc-templates.adoc[leveloffset=+1] + +include::modules/virt-creating-storage-pool-pvc-template.adoc[leveloffset=+2] [role="_additional-resources"] .Additional resources diff --git a/virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc b/virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc index 714a0e9ff1..acc665701e 100644 --- a/virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc +++ b/virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc @@ -35,6 +35,6 @@ include::modules/virt-customizing-storage-profile.adoc[leveloffset=+1] [id="additional-resources_creating-data-volumes-using-profiles"] [role="_additional-resources"] == Additional resources -* xref:../../../virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc#virt-creating-storage-class_virt-configuring-local-storage-for-vms[Creating a storage class] +* xref:../../../virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc#virt-about-creating-storage-classes_virt-configuring-local-storage-for-vms[About creating storage classes] * xref:../../../virt/virtual_machines/virtual_disks/virt-reserving-pvc-space-fs-overhead.adoc#virt-overriding-default-fs-overhead-value_virt-reserving-pvc-space-fs-overhead[Overriding the default file system overhead value] * xref:../../../virt/virtual_machines/virtual_disks/virt-cloning-a-datavolume-using-smart-cloning.adoc#virt-cloning-a-datavolume-using-smart-cloning[Cloning a data volume using smart cloning]