// Module included in the following assemblies:
//
// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc

:_mod-docs-content-type: PROCEDURE
[id="lvms-creating-lvms-cluster-using-cli_{context}"]
= Creating an LVMCluster CR by using the CLI

You can create an `LVMCluster` custom resource (CR) on a worker node by using the OpenShift CLI (`oc`).

[IMPORTANT]
====
You can only create a single instance of the `LVMCluster` custom resource (CR) on an {product-title} cluster.
====

.Prerequisites

* You have installed the OpenShift CLI (`oc`).
* You have logged in to {product-title} as a user with `cluster-admin` privileges.
* You have installed {lvms}.
* You have installed a worker node in the cluster.
* You have read the "About the LVMCluster custom resource" section.

.Procedure

. Create an `LVMCluster` custom resource (CR) YAML file:
+
.Example `LVMCluster` CR YAML file
[source,yaml]
----
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
# ...
  storage:
    deviceClasses: <1>
# ...
      nodeSelector: <2>
# ...
      deviceSelector: <3>
# ...
      thinPoolConfig: <4>
# ...
----
<1> Contains the configuration to assign the local storage devices to the LVM volume groups.
<2> Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes that do not have no-schedule taints are considered.
<3> Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
<4> Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
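+
The following is a minimal populated sketch of the same CR, assuming a single device class named `vg1`. The device paths, node selector expression, and thin pool values are illustrative only; see the "About the LVMCluster custom resource" section for the full list of fields:
+
[source,yaml]
----
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
    - name: vg1
      default: true
      fstype: xfs
      deviceSelector:
        paths:
        - /dev/nvme0n1 # illustrative device path
        - /dev/nvme1n1 # illustrative device path
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/worker # illustrative node selector
            operator: Exists
----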

. Create the `LVMCluster` CR by running the following command:
+
[source,terminal]
----
$ oc create -f <file_name>
----
+
.Example output
[source,terminal]
----
lvmcluster.lvm.topolvm.io/my-lvmcluster created
----

.Verification

. Check that the `LVMCluster` CR is in the `Ready` state:
+
[source,terminal]
----
$ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>
----
+
.Example output
[source,json]
----
{"deviceClassStatuses": <1>
[
{
"name": "vg1",
"nodeStatus": [ <2>
{
"devices": [ <3>
"/dev/nvme0n1",
"/dev/nvme1n1",
"/dev/nvme2n1"
],
"node": "kube-node", <4>
"status": "Ready" <5>
}
]
}
]
"state":"Ready"} <6>
----
<1> The status of the device class.
<2> The status of the LVM volume group on each node.
<3> The list of devices used to create the LVM volume group.
<4> The node on which the device class is created.
<5> The status of the LVM volume group on the node.
<6> The status of the `LVMCluster` CR.
+
[NOTE]
====
If the `LVMCluster` CR is in the `Failed` state, you can view the reason for failure in the `status` field.

The following is an example of the `status` field with the reason for the failure:

[source,yaml]
----
status:
  deviceClassStatuses:
  - name: vg1
    nodeStatus:
    - node: my-node-1.example.com
      reason: no available devices found for volume group
      status: Failed
  state: Failed
----
====
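+
If you want a script to block until the CR reports `Ready` instead of checking the status manually, recent versions of `oc` can wait on a JSONPath expression. The following is a minimal sketch that assumes the CR name `my-lvmcluster` from the earlier example:
+
[source,terminal]
----
$ oc wait lvmclusters.lvm.topolvm.io/my-lvmcluster \
    --for=jsonpath='{.status.state}'=Ready \
    --timeout=300s -n <namespace>
----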

. Optional: To view the storage classes created by {lvms} for each device class, run the following command:
+
[source,terminal]
----
$ oc get storageclass
----
+
.Example output
[source,terminal]
----
NAME       PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
lvms-vg1   topolvm.io    Delete          WaitForFirstConsumer   true                   31m
----
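+
Because the storage class uses the `WaitForFirstConsumer` volume binding mode, a persistent volume claim (PVC) that references it stays in the `Pending` state until a pod consumes it. The following is a minimal PVC sketch that assumes the `lvms-vg1` storage class from the example output; the claim name and requested size are illustrative only:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-file-1 # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi # illustrative size
  storageClassName: lvms-vg1 # storage class created by {lvms} for the vg1 device class
----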

. Optional: To view the volume snapshot classes created by {lvms} for each device class, run the following command:
+
[source,terminal]
----
$ oc get volumesnapshotclass
----
+
.Example output
[source,terminal]
----
NAME       DRIVER       DELETIONPOLICY   AGE
lvms-vg1   topolvm.io   Delete           24h
----
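+
You can reference the volume snapshot class from a `VolumeSnapshot` object to snapshot a PVC provisioned by the `lvms-vg1` storage class. The following is a minimal sketch that assumes the hypothetical `lvm-file-1` PVC from the previous sketch:
+
[source,yaml]
----
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-file-1-snap # illustrative name
spec:
  volumeSnapshotClassName: lvms-vg1 # volume snapshot class created by {lvms} for the vg1 device class
  source:
    persistentVolumeClaimName: lvm-file-1 # hypothetical PVC provisioned with the lvms-vg1 storage class
----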