// Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05
// Merge pull request #42687 from adellape/pick_virt
:_content-type: CONCEPT
[id="virt-about-hostpath-provisioner_{context}"]
= About the hostpath provisioner (HPP)

When you install the {VirtProductName} Operator, the Hostpath Provisioner Operator is automatically installed. The HPP is a local storage provisioner, designed for {VirtProductName}, that is created by the Hostpath Provisioner Operator. To use the HPP, you must create an HPP custom resource.

[IMPORTANT]
====
In {VirtProductName} 4.10, the HPP Operator configures the Kubernetes CSI driver. The Operator also recognizes the existing (legacy) format of the custom resource.

The legacy HPP and the CSI host path driver are supported in parallel for a number of releases. However, at some point, the legacy HPP will no longer be supported. If you use the HPP, plan to create a storage class for the CSI driver as part of your migration strategy.
====
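The legacy custom resource format mentioned above is not shown in this module. As a rough illustration only, assuming the upstream `pathConfig` schema of the `hostpathprovisioner.kubevirt.io/v1beta1` API, a legacy-format resource looks similar to the following (the backing path value is an assumption):

[source,yaml]
----
# Hypothetical legacy-format example for illustration only.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig: # legacy stanza, superseded by storagePools
    path: "/var/hpvolumes" # example backing directory; an assumption
    useNamingPrefix: false
----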

If you upgrade to {VirtProductName} version 4.10 on an existing cluster, the HPP Operator is upgraded and the system performs the following actions:

* The CSI driver is installed.
* The CSI driver is configured with the contents of your legacy custom resource.

If you install {VirtProductName} version 4.10 on a new cluster, you must perform the following actions:

* Create an HPP custom resource that includes a `storagePools` stanza.
* Create a storage class for the CSI driver.
// New file: modules/virt-creating-custom-resources-hpp.adoc
// Module included in the following assemblies:
//
// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc

:_content-type: PROCEDURE
[id="virt-creating-custom-resources-hpp_{context}"]
= Creating the HPP custom resource with a storage pool

Storage pools enable you to specify the name and path that are used by the CSI driver.

.Procedure

. Create a YAML file for the HPP custom resource with a `storagePools` stanza. For example:
+
[source,terminal]
----
$ touch hostpathprovisioner_cr.yaml
----

. Edit the file. For example:
+
[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: <1>
  - name: <any_name>
    path: "</var/myvolumes>" <2>
  workload:
    nodeSelector:
      kubernetes.io/os: linux
----
<1> The `storagePools` stanza is an array to which you can add multiple entries.
<2> Create directories under this node path. Read/write access is required. Ensure that the node-level directory (`/var/myvolumes`) is not on the same partition as the operating system. If it is, users can fill the operating system partition, which might degrade performance or cause the node to become unstable or unusable.

. Save the file and exit.
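After you save the file, you can create the object on the cluster. This step is not part of the original module; it assumes that you are logged in to the cluster with `oc` and have sufficient privileges:

[source,terminal]
----
$ oc create -f hostpathprovisioner_cr.yaml
----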
// New file: modules/virt-creating-single-pvc-template-storage-pool.adoc
// Module included in the following assemblies:
//
// * virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc

:_content-type: PROCEDURE
[id="virt-creating-single-pvc-template-storage-pool_{context}"]
= Creating a storage pool by using a pvcTemplate specification in a hostpath provisioner (HPP) custom resource

If you have a single, large persistent volume (PV) on your node, you might want to virtually divide the volume and use one partition to store only the HPP volumes. By defining a storage pool with a `pvcTemplate` specification in the HPP custom resource, you can virtually split the PV into multiple smaller volumes, which provides more flexibility in data allocation.

The `pvcTemplate` matches the `spec` portion of a persistent volume claim (PVC). For example:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "iso-pvc"
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "http://cdi-file-host.cdi:80/tinyCore.iso.tar"
spec: <1>
  volumeMode: Block
  storageClassName: <any_storage_class>
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
----
<1> A `pvcTemplate` is the `spec` (specification) section of a PVC.

The Operator creates a PVC from the `pvcTemplate` for each node that contains the HPP CSI driver. The PVC created from the `pvcTemplate` consumes the single, large PV, enabling the HPP to create smaller dynamic volumes.

You can create any combination of storage pools. You can combine standard storage pools with storage pools that use PVC templates in the `storagePools` stanza.
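Because `storagePools` is an array, a basic storage pool and a `pvcTemplate` storage pool can coexist in a single custom resource. The following sketch combines the two patterns described in this document; the pool names and the second backing path are placeholders, not values from the original examples:

[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: basic-pool # standard storage pool; name is a placeholder
    path: "/var/myvolumes"
  - name: template-pool # pool backed by a PVC template; name and path are placeholders
    path: "/var/mytemplatevolumes"
    pvcTemplate:
      volumeMode: Filesystem
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  workload:
    nodeSelector:
      kubernetes.io/os: linux
----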

.Procedure

. Create a YAML file for the HPP custom resource that specifies a single `pvcTemplate` storage pool. For example:
+
[source,terminal]
----
$ touch hostpathprovisioner_cr_pvc.yaml
----

. Edit the file. For example:
+
[source,yaml]
----
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: <1>
  - name: <any_name>
    path: "</var/myvolumes>" <2>
    pvcTemplate:
      volumeMode: Block <3>
      storageClassName: <any_storage_class> <4>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi <5>
  workload:
    nodeSelector:
      kubernetes.io/os: linux
----
<1> The `storagePools` stanza is an array to which you can add multiple entries.
<2> Create directories under this node path. Read/write access is required. Ensure that the node-level directory (`/var/myvolumes`) is not on the same partition as the operating system. If it is, users of the volumes can fill the operating system partition, which might degrade performance or cause the node to become unstable or unusable.
<3> The optional `volumeMode` parameter can be either `Block` or `Filesystem`, but if used, it must match the provisioned volume format. The default value is `Filesystem`. If `volumeMode` is `Block`, the mounting pod creates an XFS file system on the block volume before mounting it.
<4> If the `storageClassName` parameter is omitted, the default storage class is used to create PVCs. If you omit `storageClassName`, ensure that the HPP storage class is not the default storage class.
<5> You can specify statically or dynamically provisioned storage. In either case, ensure that the requested storage size is appropriate for the volume that you want to virtually divide, or the PVC cannot be bound to the large PV. If the storage class that you use provisions storage dynamically, pick an allocation size that matches the size of a typical request.

. Save the file and exit.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class.

[IMPORTANT]
====
When using {VirtProductName} with {product-title} Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.

To specify RBD block mode PVCs, use the `ocs-storagecluster-ceph-rbd` storage class and `VolumeMode: Block`.
====

To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the `storagePools` stanza.

[NOTE]
====
You cannot update a `StorageClass` object's parameters after you create it.
====

[NOTE]
====
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using a `StorageClass` object with the `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
====

[id="virt-creating-storage-class-csi_{context}"]
== Creating a storage class for the CSI driver with the storagePools stanza

Use this procedure to create a storage class for use with the HPP CSI driver implementation. You must create this storage class to use the HPP in {VirtProductName} 4.10 and later.

.Procedure

. Create a YAML file for defining the storage class. For example:
+
[source,terminal]
----
$ touch <storageclass_csi>.yaml
----

. Edit the file. For example:
+
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi <1>
provisioner: kubevirt.io.hostpath-provisioner <2>
reclaimPolicy: Delete <3>
volumeBindingMode: WaitForFirstConsumer <4>
parameters:
  storagePool: <any_name> <5>
----
<1> Assign any meaningful name to the storage class. In this example, `csi` is used to specify that the class uses the CSI provisioner instead of the legacy provisioner. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy.
<2> The legacy provisioner uses `kubevirt.io/hostpath-provisioner`. The CSI driver uses `kubevirt.io.hostpath-provisioner`.
<3> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the storage class defaults to `Delete`.
<4> The `volumeBindingMode` parameter determines when dynamic provisioning and volume binding occur. Specify `WaitForFirstConsumer` to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements.
<5> `<any_name>` must match the name of the storage pool, which you define in the HPP custom resource.

. Save the file and exit.

. Create the `StorageClass` object:
+
[source,terminal]
----
$ oc create -f <storageclass_csi>.yaml
----

[id="virt-creating-storage-class-legacy-hpp_{context}"]
== Creating a storage class for the legacy hostpath provisioner

Use this procedure to create a storage class for the legacy hostpath provisioner (HPP). You do not need to explicitly add a `storagePool` parameter.

.Procedure

. Create a YAML file for defining the storage class. For example:
+
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner <1>
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete <2>
volumeBindingMode: WaitForFirstConsumer <3>
----
<1> Assign any meaningful name to the storage class. Choosing descriptive names for storage classes, based on legacy or CSI driver provisioning, eases implementation of your migration strategy.
<2> The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the storage class defaults to `Delete`.
<3> The `volumeBindingMode` value determines when dynamic provisioning and volume binding occur. Specify `WaitForFirstConsumer` to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements.

. Save the file and exit.

. Create the `StorageClass` object:
+
[source,terminal]
----
$ oc create -f <storageclass>.yaml
----
include::_attributes/virt-document-attributes.adoc[]

toc::[]

Configure storage for your virtual machines. When configuring local storage, use the hostpath provisioner (HPP).

include::modules/virt-about-hostpath-provisioner.adoc[leveloffset=+1]

include::modules/virt-configuring-selinux-hpp-on-rhcos8.adoc[leveloffset=+1]

include::modules/virt-creating-custom-resources-hpp.adoc[leveloffset=+1]

In addition to configuring a basic storage pool for use with the HPP, you have the option of creating single storage pools with the `pvcTemplate` specification as well as multiple storage pools.

include::modules/virt-creating-single-pvc-template-storage-pool.adoc[leveloffset=+1]

include::modules/virt-creating-storage-class.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../../virt/install/virt-specifying-nodes-for-virtualization-components.adoc#virt-specifying-nodes-for-virtualization-components[Specifying nodes for virtualization components]
* xref:../../../virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc#virt-customizing-storage-profile_virt-creating-data-volumes[Customizing the storage profile]