diff --git a/modules/lvms-creating-lvmcluster-using-rhacm.adoc b/modules/lvms-creating-lvmcluster-using-rhacm.adoc new file mode 100644 index 0000000000..4eb2ef25f5 --- /dev/null +++ b/modules/lvms-creating-lvmcluster-using-rhacm.adoc @@ -0,0 +1,63 @@ +// Module included in the following assemblies: +// +// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc + +:_mod-docs-content-type: PROCEDURE +[id="lvms-creating-lvmcluster-using-rhacm_{context}"] += Creating an LVMCluster CR by using {rh-rhacm} + +After you have installed {lvms} by using {rh-rhacm}, you must create an `LVMCluster` custom resource (CR). + +.Prerequisites + +* You have installed {lvms} by using {rh-rhacm}. +* You have access to the {rh-rhacm} cluster using an account with `cluster-admin` permissions. +* You read the "About the LVMCluster custom resource" section. See the "Additional resources" section. + +.Procedure + +. Log in to the {rh-rhacm} CLI using your {product-title} credentials. + +. Create a `ConfigurationPolicy` CR YAML file with the configuration to create an `LVMCluster` CR: ++ +.Example `ConfigurationPolicy` CR YAML file to create an `LVMCluster` CR +[source,yaml] +---- +apiVersion: policy.open-cluster-management.io/v1 +kind: ConfigurationPolicy +metadata: + name: lvms +spec: + object-templates: + - complianceType: musthave + objectDefinition: + apiVersion: lvm.topolvm.io/v1alpha1 + kind: LVMCluster + metadata: + name: my-lvmcluster + namespace: openshift-storage + spec: + storage: + deviceClasses: <1> +# ... + deviceSelector: <2> +# ... + thinPoolConfig: <3> +# ... + nodeSelector: <4> +# ... + remediationAction: enforce + severity: low +---- +<1> Contains the configuration to assign the local storage devices to the LVM volume groups. +<2> Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group. 
+<3> Contains the LVM thin pool configuration.
+<4> Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, all nodes without no-schedule taints are considered.
+
+. Create the `ConfigurationPolicy` CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name> -n <namespace> <1>
+----
+<1> Namespace of the {product-title} cluster on which {lvms} is installed.
diff --git a/modules/lvms-deleting-lvmcluster-using-rhacm.adoc b/modules/lvms-deleting-lvmcluster-using-rhacm.adoc
new file mode 100644
index 0000000000..18e1a30548
--- /dev/null
+++ b/modules/lvms-deleting-lvmcluster-using-rhacm.adoc
@@ -0,0 +1,192 @@
+// Module included in the following assemblies:
+//
+// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="lvms-deleting-lvmcluster-using-rhacm_{context}"]
+= Deleting an LVMCluster CR by using {rh-rhacm}
+
+If you have installed {lvms} by using {rh-rhacm-first}, you can delete an `LVMCluster` CR by using {rh-rhacm}.
+
+.Prerequisites
+
+* You have access to the {rh-rhacm} cluster as a user with `cluster-admin` permissions.
+* You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by {lvms}. You have also deleted the applications that are using these resources.
+
+.Procedure
+
+. Log in to the {rh-rhacm} CLI using your {product-title} credentials.
+. Delete the `ConfigurationPolicy` CR that was created for the `LVMCluster` CR by running the following command:
++
+[source,terminal]
+----
+$ oc delete -f <file_name> -n <namespace> <1>
+----
+<1> Namespace of the {product-title} cluster on which {lvms} is installed.
+
+. 
Create a `Policy` CR YAML file to delete the `LVMCluster` CR: ++ +.Example `Policy` CR to delete the `LVMCluster` CR +[source,yaml] +---- +apiVersion: policy.open-cluster-management.io/v1 +kind: Policy +metadata: + name: policy-lvmcluster-delete + annotations: + policy.open-cluster-management.io/standards: NIST SP 800-53 + policy.open-cluster-management.io/categories: CM Configuration Management + policy.open-cluster-management.io/controls: CM-2 Baseline Configuration +spec: + remediationAction: enforce + disabled: false + policy-templates: + - objectDefinition: + apiVersion: policy.open-cluster-management.io/v1 + kind: ConfigurationPolicy + metadata: + name: policy-lvmcluster-removal + spec: + remediationAction: enforce <1> + severity: low + object-templates: + - complianceType: mustnothave + objectDefinition: + kind: LVMCluster + apiVersion: lvm.topolvm.io/v1alpha1 + metadata: + name: my-lvmcluster + namespace: openshift-storage <2> +--- +apiVersion: policy.open-cluster-management.io/v1 +kind: PlacementBinding +metadata: + name: binding-policy-lvmcluster-delete +placementRef: + apiGroup: apps.open-cluster-management.io + kind: PlacementRule + name: placement-policy-lvmcluster-delete +subjects: + - apiGroup: policy.open-cluster-management.io + kind: Policy + name: policy-lvmcluster-delete +--- +apiVersion: apps.open-cluster-management.io/v1 +kind: PlacementRule +metadata: + name: placement-policy-lvmcluster-delete +spec: + clusterConditions: + - status: "True" + type: ManagedClusterConditionAvailable + clusterSelector: <3> + matchExpressions: + - key: mykey + operator: In + values: + - myvalue +---- +<1> The `spec.remediationAction` in `policy-template` is overridden by the preceding parameter value for `spec.remediationAction`. +<2> This `namespace` field must have the `openshift-storage` value. +<3> Configure the requirements to select the clusters. {lvms} is uninstalled on the clusters that match the selection criteria. + +. 
Create the `Policy` CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name> -n <namespace>
+----
+
+. Create a `Policy` CR YAML file to check if the `LVMCluster` CR has been deleted:
++
+.Example `Policy` CR to check if the `LVMCluster` CR has been deleted
+[source,yaml]
+----
+apiVersion: policy.open-cluster-management.io/v1
+kind: Policy
+metadata:
+  name: policy-lvmcluster-inform
+  annotations:
+    policy.open-cluster-management.io/standards: NIST SP 800-53
+    policy.open-cluster-management.io/categories: CM Configuration Management
+    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
+spec:
+  remediationAction: inform
+  disabled: false
+  policy-templates:
+    - objectDefinition:
+        apiVersion: policy.open-cluster-management.io/v1
+        kind: ConfigurationPolicy
+        metadata:
+          name: policy-lvmcluster-removal-inform
+        spec:
+          remediationAction: inform <1>
+          severity: low
+          object-templates:
+            - complianceType: mustnothave
+              objectDefinition:
+                kind: LVMCluster
+                apiVersion: lvm.topolvm.io/v1alpha1
+                metadata:
+                  name: my-lvmcluster
+                  namespace: openshift-storage <2>
+---
+apiVersion: policy.open-cluster-management.io/v1
+kind: PlacementBinding
+metadata:
+  name: binding-policy-lvmcluster-check
+placementRef:
+  apiGroup: apps.open-cluster-management.io
+  kind: PlacementRule
+  name: placement-policy-lvmcluster-check
+subjects:
+  - apiGroup: policy.open-cluster-management.io
+    kind: Policy
+    name: policy-lvmcluster-inform
+---
+apiVersion: apps.open-cluster-management.io/v1
+kind: PlacementRule
+metadata:
+  name: placement-policy-lvmcluster-check
+spec:
+  clusterConditions:
+    - status: "True"
+      type: ManagedClusterConditionAvailable
+  clusterSelector:
+    matchExpressions:
+      - key: mykey
+        operator: In
+        values:
+          - myvalue
+----
+<1> The `policy-template` `spec.remediationAction` is overridden by the preceding parameter value for `spec.remediationAction`.
+<2> The `namespace` field must have the `openshift-storage` value.
+
+. 
Create the `Policy` CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name> -n <namespace>
+----
+
+.Verification
+
+* Check the status of the `Policy` CRs by running the following command:
++
+[source,terminal]
+----
+$ oc get policy -n <namespace>
+----
++
+.Example output
+[source,terminal]
+----
+NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
+policy-lvmcluster-delete   enforce              Compliant          15m
+policy-lvmcluster-inform   inform               Compliant          15m
+----
++
+[IMPORTANT]
+====
+The `Policy` CRs must be in the `Compliant` state.
+====
\ No newline at end of file
diff --git a/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc b/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc
index 745f52920f..507a590d7c 100644
--- a/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc
+++ b/modules/lvms-installing-logical-volume-manager-operator-using-rhacm.adoc
@@ -4,30 +4,34 @@
 :_mod-docs-content-type: PROCEDURE
 [id="lvms-installing-odf-logical-volume-manager-operator-using-rhacm_{context}"]
-= Installing {lvms} using {rh-rhacm}
+= Installing {lvms} by using {rh-rhacm}
 
-{lvms} is deployed on the clusters using {rh-rhacm-first}.
-You create a `Policy` object on {rh-rhacm} that deploys and configures the Operator when it is applied to managed clusters which match the selector specified in the `PlacementRule` resource.
-The policy is also applied to clusters that are imported later and satisfy the placement rule.
+To install {lvms} on the clusters by using {rh-rhacm-first}, you must create a `Policy` custom resource (CR). You can also configure the criteria to select the clusters on which you want to install {lvms}.
+
+[NOTE]
+====
+The `Policy` CR that you create to install {lvms} also applies to the clusters that are imported or created after you create the `Policy` CR.
+====
 
 .Prerequisites
 
-* Access to the {rh-rhacm} cluster using an account with `cluster-admin` and Operator installation permissions.
-* Dedicated disks on each cluster to be used by {lvms}.
-* The cluster needs to be managed by {rh-rhacm}, either imported or created.
+* You have access to the {rh-rhacm} cluster using an account with `cluster-admin` and Operator installation permissions.
+* You have dedicated disks that {lvms} can use on each cluster.
+* The cluster must be managed by {rh-rhacm}.
 
 .Procedure
 
 . Log in to the {rh-rhacm} CLI using your {product-title} credentials.
 
-. Create a namespace in which you will create policies.
+. Create a namespace.
 +
 [source,terminal]
 ----
-# oc create ns lvms-policy-ns
+$ oc create ns <namespace>
 ----
 
-. To create a policy, save the following YAML to a file with a name such as `policy-lvms-operator.yaml`:
+. Create a `Policy` CR YAML file:
 +
+.Example `Policy` CR to install and configure {lvms}
 [source,yaml]
 ----
 apiVersion: apps.open-cluster-management.io/v1
@@ -78,7 +82,7 @@ spec:
     spec:
       object-templates:
         - complianceType: musthave
-          objectDefinition:
+          objectDefinition: <2>
             apiVersion: v1
             kind: Namespace
             metadata:
@@ -89,7 +93,7 @@ spec:
                 pod-security.kubernetes.io/warn: privileged
               name: openshift-storage
         - complianceType: musthave
-          objectDefinition:
+          objectDefinition: <3>
             apiVersion: operators.coreos.com/v1
             kind: OperatorGroup
             metadata:
@@ -99,7 +103,7 @@ spec:
               targetNamespaces:
               - openshift-storage
         - complianceType: musthave
-          objectDefinition:
+          objectDefinition: <4>
             apiVersion: operators.coreos.com/v1alpha1
             kind: Subscription
             metadata:
@@ -112,73 +116,21 @@ spec:
             sourceNamespace: openshift-marketplace
       remediationAction: enforce
       severity: low
-      - objectDefinition:
-          apiVersion: policy.open-cluster-management.io/v1
-          kind: ConfigurationPolicy
-          metadata:
-            name: lvms
-          spec:
-            object-templates:
-              - complianceType: musthave
-                objectDefinition:
-                  apiVersion: lvm.topolvm.io/v1alpha1
-                  kind: LVMCluster
-                  metadata:
-                    name: my-lvmcluster
-                    namespace: openshift-storage
-                  spec:
-                    storage:
-                      deviceClasses:
-                        - name: vg1
-                          default: true
-                          deviceSelector: <2>
-                            paths:
-                            
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 - optionalPaths: - - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 - thinPoolConfig: - name: thin-pool-1 - sizePercent: 90 - overprovisionRatio: 10 - nodeSelector: <3> - nodeSelectorTerms: - - matchExpressions: - - key: app - operator: In - values: - - test1 - remediationAction: enforce - severity: low ---- -<1> Replace the key and value in `PlacementRule.spec.clusterSelector` to match the labels set on the clusters on which you want to install {lvms}. -<2> Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the `deviceSelector` section of the `LVMCluster` YAML. The `paths` section refers to devices the `LVMCluster` adds, which means those paths must exist. The `optionalPaths` section refers to devices the `LVMCluster` might add. You must specify at least one of `paths` or `optionalPaths` when specifying the `deviceSelector` section. If you specify `paths`, it is not mandatory to specify `optionalPaths`. If you specify `optionalPaths`, it is not mandatory to specify `paths` but at least one optional path must be present on the node. If you do not specify any paths, it will add all unused devices on the node. -<3> To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the `nodeSelector` section. {lvms} detects and uses the additional worker nodes when the new nodes show up. -+ --- -[IMPORTANT] -==== -This `nodeSelector` node filter matching is not the same as the pod label matching. -==== --- +<1> Set the `key` field and `values` field in `PlacementRule.spec.clusterSelector` to match the labels that are configured in the clusters on which you want to install {lvms}. +<2> Namespace configuration. +<3> The `OperatorGroup` CR configuration. +<4> The `Subscription` CR configuration. -. 
Create the policy in the namespace by running the following command:
+. Create the `Policy` CR by running the following command:
 +
 [source,terminal]
 ----
-# oc create -f policy-lvms-operator.yaml -n lvms-policy-ns <1>
+$ oc create -f <file_name> -n <namespace>
 ----
-<1> The `policy-lvms-operator.yaml` is the name of the file to which the policy is saved.
++
+After you create the `Policy` CR, the following custom resources are created on the clusters that match the selection criteria configured in the `PlacementRule` CR:
 
-+
-This creates a `Policy`, a `PlacementRule`, and a `PlacementBinding` object in the `lvms-policy-ns` namespace.
-The policy creates a `Namespace`, `OperatorGroup`, `Subscription`, and `LVMCluster` resource on the clusters that match the placement rule.
-This deploys the Operator on the clusters which match the selection criteria and configures it to set up the required resources to provision storage.
-The Operator uses all the disks specified in the `LVMCluster` CR.
-If no disks are specified, the Operator uses all the unused disks on the node.
-+
-[IMPORTANT]
-====
-After a device is added to the `LVMCluster`, it cannot be removed.
-====
+* `Namespace`
+* `OperatorGroup`
+* `Subscription`
\ No newline at end of file
diff --git a/modules/lvms-scaling-storage-of-clusters-using-rhacm.adoc b/modules/lvms-scaling-storage-of-clusters-using-rhacm.adoc
new file mode 100644
index 0000000000..2f163cc13c
--- /dev/null
+++ b/modules/lvms-scaling-storage-of-clusters-using-rhacm.adoc
@@ -0,0 +1,69 @@
+// Module included in the following assemblies:
+//
+// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="lvms-scaling-storage-of-clusters-using-rhacm_{context}"]
+= Scaling up the storage of clusters by using {rh-rhacm}
+
+You can scale up the storage capacity of worker nodes on the clusters by using {rh-rhacm}.
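+
+For reference, a populated `deviceClasses` entry in an `LVMCluster` CR might look like the following sketch. The volume group name, thin pool values, and device path are illustrative placeholders; use the values from your own `LVMCluster` CR.
+
+.Example `deviceClasses` configuration (illustrative values)
+[source,yaml]
+----
+deviceClasses:
+  - name: vg1 # Name of the LVM volume group
+    default: true # Use this device class as the default
+    deviceSelector:
+      paths:
+        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
+    thinPoolConfig:
+      name: thin-pool-1
+      sizePercent: 90 # Percentage of the volume group to allocate to the thin pool
+      overprovisionRatio: 10
+----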
+
+.Prerequisites
+
+* You have access to the {rh-rhacm} cluster using an account with `cluster-admin` privileges.
+* You have created an `LVMCluster` custom resource (CR) by using {rh-rhacm}.
+* You have additional unused devices on each cluster to be used by {lvms-first}.
+
+.Procedure
+
+. Log in to the {rh-rhacm} CLI using your {product-title} credentials.
+. Edit the `LVMCluster` CR that you created using {rh-rhacm} by running the following command:
++
+[source,terminal]
+----
+$ oc edit -f <file_name> -n <namespace> <1>
+----
+<1> Replace `<file_name>` with the name of the `LVMCluster` CR YAML file.
+
+. In the `LVMCluster` CR, add the path to the new device in the `deviceSelector` field.
++
+.Example `LVMCluster` CR
+[source,yaml]
+----
+apiVersion: policy.open-cluster-management.io/v1
+kind: ConfigurationPolicy
+metadata:
+  name: lvms
+spec:
+  object-templates:
+    - complianceType: musthave
+      objectDefinition:
+        apiVersion: lvm.topolvm.io/v1alpha1
+        kind: LVMCluster
+        metadata:
+          name: my-lvmcluster
+          namespace: openshift-storage
+        spec:
+          storage:
+            deviceClasses:
+# ...
+              deviceSelector: <1>
+                paths: <2>
+                - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
+                optionalPaths: <3>
+                - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
+# ...
+----
+<1> Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group.
+You can specify the device paths in the `paths` field, the `optionalPaths` field, or both. If you do not specify the device paths in either `paths` or `optionalPaths`, {lvms-first} adds the supported unused devices to the LVM volume group. {lvms} adds the devices to the LVM volume group only if the following conditions are met:
+* The device path exists.
+* The device is supported by {lvms}. For information about unsupported devices, see "Devices not supported by {lvms}" in the "Additional resources" section.
+<2> Specify the device paths.
If the device path specified in this field does not exist, or the device is not supported by {lvms}, the `LVMCluster` CR moves to the `Failed` state. +<3> Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by {lvms}, {lvms} ignores the device without causing an error. ++ +[IMPORTANT] +==== +After a device is added to the LVM volume group, it cannot be removed. +==== + +. Save the `LVMCluster` CR. \ No newline at end of file diff --git a/modules/lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm.adoc b/modules/lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm.adoc deleted file mode 100644 index 2a31ca95bb..0000000000 --- a/modules/lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm.adoc +++ /dev/null @@ -1,163 +0,0 @@ -// Module included in the following assemblies: -// -// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc - -:_mod-docs-content-type: PROCEDURE -[id="lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm_{context}"] -= Scaling up storage by adding capacity to your cluster using {rh-rhacm} - -You can scale the storage capacity of your configured worker nodes on a cluster using {rh-rhacm}. - -.Prerequisites - -* You have access to the {rh-rhacm} cluster using an account with `cluster-admin` privileges. -* You have additional unused devices on each cluster that {lvms} can use. - -.Procedure - -. Log in to the {rh-rhacm} CLI using your {product-title} credentials. -. Find the device that you want to add. The device to be added needs to match with the device name and path of the existing devices. -. To add capacity to the cluster, edit the `deviceSelector` section of the existing policy YAML, for example, `policy-lvms-operator.yaml`. 
- -+ -[NOTE] -==== -In case the `deviceSelector` field is not included during the `LVMCluster` creation, it is not possible to add the `deviceSelector` section to the CR. You need to remove the `LVMCluster` and then recreate it from the new CR. -==== - -+ -[source,yaml] ----- -apiVersion: apps.open-cluster-management.io/v1 -kind: PlacementRule -metadata: - name: placement-install-lvms -spec: - clusterConditions: - - status: "True" - type: ManagedClusterConditionAvailable - clusterSelector: - matchExpressions: - - key: mykey - operator: In - values: - - myvalue ---- -apiVersion: policy.open-cluster-management.io/v1 -kind: PlacementBinding -metadata: - name: binding-install-lvms -placementRef: - apiGroup: apps.open-cluster-management.io - kind: PlacementRule - name: placement-install-lvms -subjects: -- apiGroup: policy.open-cluster-management.io - kind: Policy - name: install-lvms ---- -apiVersion: policy.open-cluster-management.io/v1 -kind: Policy -metadata: - annotations: - policy.open-cluster-management.io/categories: CM Configuration Management - policy.open-cluster-management.io/controls: CM-2 Baseline Configuration - policy.open-cluster-management.io/standards: NIST SP 800-53 - name: install-lvms -spec: - disabled: false - remediationAction: enforce - policy-templates: - - objectDefinition: - apiVersion: policy.open-cluster-management.io/v1 - kind: ConfigurationPolicy - metadata: - name: install-lvms - spec: - object-templates: - - complianceType: musthave - objectDefinition: - apiVersion: v1 - kind: Namespace - metadata: - labels: - openshift.io/cluster-monitoring: "true" - pod-security.kubernetes.io/enforce: privileged - pod-security.kubernetes.io/audit: privileged - pod-security.kubernetes.io/warn: privileged - name: openshift-storage - - complianceType: musthave - objectDefinition: - apiVersion: operators.coreos.com/v1 - kind: OperatorGroup - metadata: - name: openshift-storage-operatorgroup - namespace: openshift-storage - spec: - targetNamespaces: - - 
openshift-storage - - complianceType: musthave - objectDefinition: - apiVersion: operators.coreos.com/v1alpha1 - kind: Subscription - metadata: - name: lvms - namespace: openshift-storage - spec: - installPlanApproval: Automatic - name: lvms-operator - source: redhat-operators - sourceNamespace: openshift-marketplace - remediationAction: enforce - severity: low - - objectDefinition: - apiVersion: policy.open-cluster-management.io/v1 - kind: ConfigurationPolicy - metadata: - name: lvms - spec: - object-templates: - - complianceType: musthave - objectDefinition: - apiVersion: lvm.topolvm.io/v1alpha1 - kind: LVMCluster - metadata: - name: my-lvmcluster - namespace: openshift-storage - spec: - storage: - deviceClasses: - - name: vg1 - default: true - deviceSelector: <1> - paths: - - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 - optionalPaths: - - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 - thinPoolConfig: - name: thin-pool-1 - sizePercent: 90 - overprovisionRatio: 10 - nodeSelector: - nodeSelectorTerms: - - matchExpressions: - - key: app - operator: In - values: - - test1 - remediationAction: enforce - severity: low ----- -<1> Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the `deviceSelector` section of the `LVMCluster` YAML. The `paths` section refers to devices the `LVMCluster` adds, which means those paths must exist. The `optionalPaths` section refers to devices the `LVMCluster` might add. You must specify at least one of `paths` or `optionalPaths` when specifying the `deviceSelector` section. If you specify `paths`, it is not mandatory to specify `optionalPaths`. If you specify `optionalPaths`, it is not mandatory to specify `paths` but at least one optional path must be present on the node. If you do not specify any paths, it will add all unused devices on the node. - -. 
Edit the policy by running the following command: -+ -[source,terminal] ----- -# oc edit -f policy-lvms-operator.yaml -ns lvms-policy-ns <1> ----- -<1> The `policy-lvms-operator.yaml` is the name of the existing policy. -+ -This uses the new disk specified in the `LVMCluster` CR to provision storage. diff --git a/modules/lvms-uninstalling-logical-volume-manager-operator-using-rhacm.adoc b/modules/lvms-uninstalling-logical-volume-manager-operator-using-rhacm.adoc index fc5bc5ca04..d4539a1de1 100644 --- a/modules/lvms-uninstalling-logical-volume-manager-operator-using-rhacm.adoc +++ b/modules/lvms-uninstalling-logical-volume-manager-operator-using-rhacm.adoc @@ -6,198 +6,29 @@ [id="lvms-uninstalling-lvms-rhacm_{context}"] = Uninstalling {lvms} installed using {rh-rhacm} -To uninstall {lvms} that you installed using {rh-rhacm}, you need to delete the {rh-rhacm} policy that you created for deploying and configuring the Operator. - -When you delete the {rh-rhacm} policy, the resources that the policy has created are not removed. -You need to create additional policies to remove the resources. - -As the created resources are not removed when you delete the policy, you need to perform the following steps: - -. Remove all the Persistent volume claims (PVCs) and volume snapshots provisioned by {lvms}. -. Remove the `LVMCluster` resources to clean up Logical Volume Manager resources created on the disks. -. Create an additional policy to uninstall the Operator. +To uninstall {lvms} that you installed using {rh-rhacm}, you must delete the {rh-rhacm} `Policy` custom resource (CR) that you created for installing and configuring {lvms}. .Prerequisites -* Ensure that the following are deleted before deleting the policy: -** All the applications on the managed clusters that are using the storage provisioned by {lvms}. -** PVCs and persistent volumes (PVs) provisioned using {lvms}. -** All volume snapshots provisioned by {lvms}. 
-* Ensure you have access to the {rh-rhacm} cluster using an account with a `cluster-admin` role.
+* You have access to the {rh-rhacm} cluster as a user with `cluster-admin` permissions.
+* You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by {lvms}. You have also deleted the applications that are using these resources.
+* You have deleted the `LVMCluster` CR that you created using {rh-rhacm}.
 
 .Procedure
 
-. In the OpenShift CLI (`oc`), delete the {rh-rhacm} policy that you created for deploying and configuring {lvms} on the hub cluster by using the following command:
+. Log in to the OpenShift CLI (`oc`).
+
+. Delete the {rh-rhacm} `Policy` CR that you created for installing and configuring {lvms} by running the following command:
 +
 [source,terminal]
 ----
-# oc delete -f policy-lvms-operator.yaml -n lvms-policy-ns <1>
+$ oc delete -f <file_name> -n <namespace> <1>
 ----
-<1> The `policy-lvms-operator.yaml` is the name of the file to which the policy was saved.
+<1> Replace `<file_name>` with the name of the `Policy` CR YAML file.
 
-. To create a policy for removing the `LVMCluster` resource, save the following YAML to a file with a name such as `lvms-remove-policy.yaml`.
-This enables the Operator to clean up all Logical Volume Manager resources that it created on the cluster.
-+ -[source,yaml] ----- -apiVersion: policy.open-cluster-management.io/v1 -kind: Policy -metadata: - name: policy-lvmcluster-delete - annotations: - policy.open-cluster-management.io/standards: NIST SP 800-53 - policy.open-cluster-management.io/categories: CM Configuration Management - policy.open-cluster-management.io/controls: CM-2 Baseline Configuration -spec: - remediationAction: enforce - disabled: false - policy-templates: - - objectDefinition: - apiVersion: policy.open-cluster-management.io/v1 - kind: ConfigurationPolicy - metadata: - name: policy-lvmcluster-removal - spec: - remediationAction: enforce <1> - severity: low - object-templates: - - complianceType: mustnothave - objectDefinition: - kind: LVMCluster - apiVersion: lvm.topolvm.io/v1alpha1 - metadata: - name: my-lvmcluster - namespace: openshift-storage <2> ---- -apiVersion: policy.open-cluster-management.io/v1 -kind: PlacementBinding -metadata: - name: binding-policy-lvmcluster-delete -placementRef: - apiGroup: apps.open-cluster-management.io - kind: PlacementRule - name: placement-policy-lvmcluster-delete -subjects: - - apiGroup: policy.open-cluster-management.io - kind: Policy - name: policy-lvmcluster-delete ---- -apiVersion: apps.open-cluster-management.io/v1 -kind: PlacementRule -metadata: - name: placement-policy-lvmcluster-delete -spec: - clusterConditions: - - status: "True" - type: ManagedClusterConditionAvailable - clusterSelector: - matchExpressions: - - key: mykey - operator: In - values: - - myvalue ----- -<1> The `policy-template` `spec.remediationAction` is overridden by the preceding parameter value for `spec.remediationAction`. -<2> This `namespace` field must have the `openshift-storage` value. - -. Set the value of the `PlacementRule.spec.clusterSelector` field to select the clusters from which to uninstall {lvms}. - -. Create the policy by running the following command: -+ -[source,terminal] ----- -# oc create -f lvms-remove-policy.yaml -n lvms-policy-ns ----- - -. 
To create a policy to check if the `LVMCluster` CR has been removed, save the following YAML to a file with a name such as `check-lvms-remove-policy.yaml`: -+ -[source,yaml] ----- -apiVersion: policy.open-cluster-management.io/v1 -kind: Policy -metadata: - name: policy-lvmcluster-inform - annotations: - policy.open-cluster-management.io/standards: NIST SP 800-53 - policy.open-cluster-management.io/categories: CM Configuration Management - policy.open-cluster-management.io/controls: CM-2 Baseline Configuration -spec: - remediationAction: inform - disabled: false - policy-templates: - - objectDefinition: - apiVersion: policy.open-cluster-management.io/v1 - kind: ConfigurationPolicy - metadata: - name: policy-lvmcluster-removal-inform - spec: - remediationAction: inform <1> - severity: low - object-templates: - - complianceType: mustnothave - objectDefinition: - kind: LVMCluster - apiVersion: lvm.topolvm.io/v1alpha1 - metadata: - name: my-lvmcluster - namespace: openshift-storage <2> ---- -apiVersion: policy.open-cluster-management.io/v1 -kind: PlacementBinding -metadata: - name: binding-policy-lvmcluster-check -placementRef: - apiGroup: apps.open-cluster-management.io - kind: PlacementRule - name: placement-policy-lvmcluster-check -subjects: - - apiGroup: policy.open-cluster-management.io - kind: Policy - name: policy-lvmcluster-inform ---- -apiVersion: apps.open-cluster-management.io/v1 -kind: PlacementRule -metadata: - name: placement-policy-lvmcluster-check -spec: - clusterConditions: - - status: "True" - type: ManagedClusterConditionAvailable - clusterSelector: - matchExpressions: - - key: mykey - operator: In - values: - - myvalue ----- -<1> The `policy-template` `spec.remediationAction` is overridden by the preceding parameter value for `spec.remediationAction`. -<2> The `namespace` field must have the `openshift-storage` value. - -. 
Create the policy by running the following command:
-+
-[source,terminal]
-----
-# oc create -f check-lvms-remove-policy.yaml -n lvms-policy-ns
-----
-
-. Check the policy status by running the following command:
-+
-[source,terminal]
-----
-# oc get policy -n lvms-policy-ns
-----
-
-+
-.Example output
-[source,terminal]
-----
-NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
-policy-lvmcluster-delete   enforce              Compliant          15m
-policy-lvmcluster-inform   inform               Compliant          15m
-----
-
-. After both the policies are compliant, save the following YAML to a file with a name such as `lvms-uninstall-policy.yaml` to create a policy to uninstall {lvms}.
+. Create a `Policy` CR YAML file with the configuration to uninstall {lvms}:
 +
+.Example `Policy` CR to uninstall {lvms}
 [source,yaml]
 ----
 apiVersion: apps.open-cluster-management.io/v1
@@ -306,9 +137,9 @@ spec:
           severity: high
 ----
 
-. Create the policy by running the following command:
+. Create the `Policy` CR by running the following command:
 +
 [source,terminal]
 ----
-# oc create -f lvms-uninstall-policy.yaml -ns lvms-policy-ns
+$ oc create -f <file_name> -n <namespace>
 ----
\ No newline at end of file
diff --git a/storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc b/storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
index 85cfe47af9..10d68ddf9c 100644
--- a/storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
+++ b/storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
@@ -78,14 +78,6 @@ include::modules/lvms-about-adding-devices-to-a-vg.adoc[leveloffset=+2]
 // Devices not supported by LVMS
 include::modules/lvms-unsupported-devices.adoc[leveloffset=+2]
 
-[role="_additional-resources"]
-.Additional resources
-
-* xref:../../../installing/install_config/installing-customizing.adoc#installation-special-config-raid_installing-customizing[Configuring a RAID-enabled data volume]
-* 
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#creating-a-software-raid-on-an-installed-system_managing-raid[Creating a software RAID on an installed system] -* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#replacing-a-failed-disk-in-raid_managing-raid[Replacing a failed disk in RAID] -* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#repairing-raid-disks_managing-raid[Repairing RAID disks] - // About creating an LVMCluster custom resource include::modules/lvms-about-creating-lvmcluster-cr.adoc[leveloffset=+1] @@ -96,6 +88,14 @@ include::modules/lvms-reusing-vg-from-prev-installation.adoc[leveloffset=+2] //Integrating software RAID arrays include::modules/lvms-integrating-software-raid-arrays.adoc[leveloffset=+2] +[role="_additional-resources"] +.Additional resources + +* xref:../../../installing/install_config/installing-customizing.adoc#installation-special-config-raid_installing-customizing[Configuring a RAID-enabled data volume] +* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#creating-a-software-raid-on-an-installed-system_managing-raid[Creating a software RAID on an installed system] +* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#replacing-a-failed-disk-in-raid_managing-raid[Replacing a failed disk in RAID] +* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices#repairing-raid-disks_managing-raid[Repairing RAID disks] + 
 include::modules/lvms-creating-lvms-cluster-using-cli.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
 
@@ -110,12 +110,22 @@ include::modules/lvms-creating-lvms-cluster-using-web-console.adoc[leveloffset=+
 
 * xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#about-lvmcluster_logical-volume-manager-storage[About the LVMCluster custom resource]
 
+include::modules/lvms-creating-lvmcluster-using-rhacm.adoc[leveloffset=+2]
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/{rh-rhacm-version}/html/install/installing#installing-while-connected-online[Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online]
+
+* xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#about-lvmcluster_logical-volume-manager-storage[About the LVMCluster custom resource]
+
 // Deleting the LVMCluster custom resource
 include::modules/lvms-about-deleting-lvmcluster-cr.adoc[leveloffset=+1]
 
 include::modules/lvms-deleting-lvmcluster-using-cli.adoc[leveloffset=+2]
 
 include::modules/lvms-deleting-lvmcluster-using-web-console.adoc[leveloffset=+2]
 
+include::modules/lvms-deleting-lvmcluster-using-rhacm.adoc[leveloffset=+2]
 
 //Provisioning
 include::modules/lvms-provisioning-storage-using-logical-volume-manager-operator.adoc[leveloffset=+1]
@@ -152,7 +162,7 @@ include::modules/lvms-scaling-storage-of-clusters-using-web-console.adoc[levelof
 
 * xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#about-lvmcluster_logical-volume-manager-storage[About the LVMCluster custom resource]
 
-include::modules/lvms-scaling-storage-of-single-node-openshift-cluster-using-rhacm.adoc[leveloffset=+2]
+include::modules/lvms-scaling-storage-of-clusters-using-rhacm.adoc[leveloffset=+2]
 
 [role="_additional-resources"]
 .Additional resources
 
@@ -161,6 +171,10 @@ include::modules/lvms-scaling-storage-of-single-node-openshift-cluster-using-rha
 
 * xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#about-lvmcluster_logical-volume-manager-storage[About the LVMCluster custom resource]
 
+* xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#lvms-unsupported-devices_logical-volume-manager-storage[Devices not supported by {lvms}]
+
+* xref:../../../storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc#lvms-integrating-software-raid-arrays_logical-volume-manager-storage[Integrating software RAID arrays with {lvms}]
+
 // Expanding PVCs
 include::modules/lvms-scaling-storage-expand-pvc.adoc[leveloffset=+1]