// Module included in the following assemblies:
//
// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
:_mod-docs-content-type: CONCEPT
[id="about-lvmcluster_{context}"]
= About the LVMCluster custom resource

You can configure the `LVMCluster` CR to perform the following actions:

* Create LVM volume groups that you can use to provision persistent volume claims (PVCs).
* Configure a list of devices that you want to add to the LVM volume groups.
* Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.
* Force wipe the selected devices.

After you have installed {lvms}, you must create an `LVMCluster` custom resource (CR).

include::snippets/lvms-creating-lvmcluster.adoc[]
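
For reference, the following is a minimal example of an `LVMCluster` CR that uses the fields described in the next table. The metadata name, volume group name, node selector values, and device paths are illustrative placeholders; replace them with values that are appropriate for your cluster.

[source,yaml]
----
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster # placeholder name
  namespace: openshift-storage # namespace in which LVM Storage is installed
spec:
  storage:
    deviceClasses:
    - name: vg1 # placeholder volume group name
      fstype: ext4
      default: true
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname # placeholder node label
            operator: In
            values:
            - worker-0
      deviceSelector:
        paths:
        - /dev/nvme0n1 # placeholder device path; must exist on the selected nodes
        optionalPaths:
        - /dev/nvme1n1 # placeholder optional path; ignored if absent
        forceWipeDevicesAndDestroyAllData: false
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
----
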
== Explanation of fields in the LVMCluster CR

The `LVMCluster` CR fields are described in the following table:

.`LVMCluster` CR fields
[cols=".^2,.^2,.^6a",options="header"]
|====
|Field|Type|Description
|`spec.storage.deviceClasses`
|`array`
|Contains the configuration to assign the local storage devices to the LVM volume groups.
{lvms} creates a storage class and volume snapshot class for each device class that you create.
|`deviceClasses.name`
|`string`
|Specify a name for the LVM volume group (VG).
You can also configure this field to reuse a volume group that you created in the previous installation. For more information, see "Reusing a volume group from the previous LVM Storage installation".
|`deviceClasses.fstype`
|`string`
|Set this field to `ext4` or `xfs`. By default, this field is set to `xfs`.
|`deviceClasses.default`
|`boolean`
|Set this field to `true` to indicate that the device class is the default. Otherwise, set it to `false`. You can configure only a single default device class.
|`deviceClasses.nodeSelector`
|`object`
|Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes that do not have no-schedule taints are considered.

If {lvms} is running on the control-plane node, it detects and uses additional worker nodes when the new nodes become active in the cluster.
|`nodeSelector.nodeSelectorTerms`
|`array`
|Configure the requirements that are used to select the node.
|`deviceClasses.deviceSelector`
|`object`
|Contains the configuration to perform the following actions:

* Specify the paths to the devices that you want to add to the LVM volume group.
* Force wipe the devices that are added to the LVM volume group.

For more information, see "About adding devices to a volume group".
|`deviceSelector.paths`
|`array`
|Specify the device paths.
If the device path specified in this field does not exist, or the device is not supported by {lvms}, the `LVMCluster` CR moves to the `Failed` state.
|`deviceSelector.optionalPaths`
|`array`
|Specify the optional device paths.
If the device path specified in this field does not exist, or the device is not supported by {lvms}, {lvms} ignores the device without causing an error.
|`deviceSelector.forceWipeDevicesAndDestroyAllData`
|`boolean`
|{lvms} uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.

To force wipe the selected devices, set this field to `true`. By default, this field is set to `false`.

[WARNING]
====
If this field is set to `true`, {lvms} wipes all previous data on the devices. Use this feature with caution.
====

Wiping the device can lead to inconsistencies in data integrity if any of the following conditions are met:

* The device is being used as swap space.
* The device is part of a RAID array.
* The device is mounted.

If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk.
|`deviceClasses.thinPoolConfig`
|`object`
|Contains the configuration to create a thin pool in the LVM volume group.
If you exclude this field, logical volumes are thick provisioned.

Thick-provisioned storage has the following limitations:

* No copy-on-write support for volume cloning.
* No support for snapshot class.
* No support for over-provisioning. As a result, the provisioned capacity of PVCs is immediately deducted from the volume group.
* No support for thin metrics. Thick-provisioned devices support only volume group metrics.
|`thinPoolConfig.name`
|`string`
|Specify a name for the thin pool.
|`thinPoolConfig.sizePercent`
|`integer`
|Specify the percentage of space in the LVM volume group for creating the thin pool.
By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90.
|`thinPoolConfig.overprovisionRatio`
|`integer`
|Specify a factor by which you can provision additional storage based on the available storage in the thin pool.
For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool.

You can modify this field after the LVM cluster has been created. To update the parameter, do any of the following tasks:

* To edit the `LVMCluster` CR, run the following command:
+
[source,terminal]
----
$ oc edit lvmcluster <lvmcluster_name>
----

* To apply a patch, run the following command:
+
[source,terminal]
----
$ oc patch lvmcluster <lvmcluster_name> --type merge --patch-file <patch_file.yaml>
----

To disable over-provisioning, set this field to 1.
|`thinPoolConfig.chunkSize`
|`integer`
|Specifies the statically calculated chunk size for the thin pool. This field is only used when the `ChunkSizeCalculationPolicy` field is set to `Static`. The value for this field must be configured in the range of 64 KiB to 1 GiB because of the underlying limitations of `lvm2`.
If you do not configure this field and the `ChunkSizeCalculationPolicy` field is set to `Static`, the default chunk size is set to 128 KiB.
For more information, see "Overview of chunk size".
|`thinPoolConfig.chunkSizeCalculationPolicy`
|`string`
|Specifies the policy to calculate the chunk size for the underlying volume group. You can set this field to either `Static` or `Host`. By default, this field is set to `Static`.
If this field is set to `Static`, the chunk size is set to the value of the `chunkSize` field. If the `chunkSize` field is not configured, chunk size is set to 128 KiB.
If this field is set to `Host`, the chunk size is calculated based on the configuration in the `lvm.conf` file.
For more information, see "Limitations to configure the size of the devices used in LVM Storage".
|`thinPoolConfig.metadataSize`
|`integer`
|Specifies the metadata size for the thin pool. You can configure this field only when the `MetadataSizeCalculationPolicy` field is set to `Static`.
If this field is not configured, and the `MetadataSizeCalculationPolicy` field is set to `Static`, the default metadata size is set to 1 GiB.
The value for this field must be configured in the range of 2 MiB to 16 GiB due to the underlying limitations of `lvm2`. You can only increase the value of this field during updates.
|`thinPoolConfig.metadataSizeCalculationPolicy`
|`string`
|Specifies the policy to calculate the metadata size for the underlying volume group. You can set this field to either `Static` or `Host`. By default, this field is set to `Host`.
If this field is set to `Static`, the metadata size is calculated based on the value of the `thinPoolConfig.metadataSize` field.
If this field is set to `Host`, the metadata size is calculated based on the `lvm2` settings.
|====
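
As an example of updating the `thinPoolConfig.overprovisionRatio` field with the `oc patch` command, you can create a merge patch file similar to the following. The device class and thin pool names are placeholders and must match the values in your `LVMCluster` CR. Because a merge patch replaces the entire `deviceClasses` array, include every device class and its required fields in the patch file.

[source,yaml]
----
# Illustrative merge patch that lowers the over-provisioning
# factor of the device class named vg1 to 5.
spec:
  storage:
    deviceClasses:
    - name: vg1
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 5
----

To apply the patch, run a command similar to the following:

[source,terminal]
----
$ oc patch lvmcluster <lvmcluster_name> -n openshift-storage --type merge --patch-file <patch_file.yaml>
----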