Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 21:46:22 +01:00
OCPBUGS-30031: Updated monitoring section
@@ -8,9 +8,10 @@
 When {lvms} is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.
 
-* Run the must-gather command from the client connected to {lvms} cluster by running the following command:
+.Procedure
+* Run the `must-gather` command from the client connected to the {lvms} cluster:
 +
 [source,terminal,subs="attributes+"]
 ----
-$ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v{product-version} --dest-dir=<directory-name>
+$ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v{product-version} --dest-dir=<directory_name>
 ----
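For illustration, a concrete invocation of the corrected command might look like the following sketch. The version tag and destination directory are example values standing in for `v{product-version}` and `<directory_name>`, not fixed requirements, and the command requires a logged-in cluster:

```shell
# Example values only: v4.15 stands in for v{product-version}, and
# ./lvms-mg stands in for <directory_name>.
oc adm must-gather \
  --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.15 \
  --dest-dir=./lvms-mg

# Package the collected diagnostics for attachment to a support case.
tar czf lvms-mg.tar.gz ./lvms-mg
```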
@@ -3,45 +3,59 @@
 // storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc
 
 :_mod-docs-content-type: PROCEDURE
-[id="lvms-monitoring-using-lvms_{context}"]
+[id="lvms-monitoring_{context}"]
 = Monitoring {lvms}
 
 When you use {rh-rhacm} to install {lvms}, you must configure {rh-rhacm} Observability to monitor all the {sno} clusters from one place.
 To enable cluster monitoring, you must add the following label in the namespace where you have installed {lvms}:
 
 [source,text]
 ----
 openshift.io/cluster-monitoring=true
 ----
 
 [IMPORTANT]
 ====
 For information about enabling cluster monitoring in {rh-rhacm}, see link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/{rh-rhacm-version}/html-single/observability/index[Observability] and link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/{rh-rhacm-version}/html-single/observability/index#adding-custom-metrics[Adding custom metrics].
 ====
 
 [id="lvms-monitoring-using-lvms-metrics_{context}"]
 == Metrics
 
-You can monitor {lvms} by viewing the metrics exported by the Operator on the {rh-rhacm} dashboards and the alerts that are triggered.
+You can monitor {lvms} by viewing the metrics.
 
 * Add the following `topolvm` metrics to the `allow` list:
 +
 [source,terminal]
 ----
 topolvm_thinpool_data_percent
 topolvm_thinpool_metadata_percent
 topolvm_thinpool_size_bytes
 ----
 
 The following table describes the `topolvm` metrics:
 
 .`topolvm` metrics
 [%autowidth,options="header"]
 |===
 |Metric | Description
 |`topolvm_thinpool_data_percent` | Indicates the percentage of data space used in the LVM thin pool.
 |`topolvm_thinpool_metadata_percent` | Indicates the percentage of metadata space used in the LVM thin pool.
 |`topolvm_thinpool_size_bytes` | Indicates the size of the LVM thin pool in bytes.
 |`topolvm_volumegroup_available_bytes` | Indicates the available space in the LVM volume group in bytes.
 |`topolvm_volumegroup_size_bytes` | Indicates the size of the LVM volume group in bytes.
 |`topolvm_thinpool_overprovisioned_available` | Indicates the available over-provisioned size of the LVM thin pool in bytes.
 |===
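As a sketch of how these metrics might be used once they are on the `allow` list, a dashboard panel or ad hoc query could combine them like this (PromQL query fragments; the 75% threshold mirrors the near-full alert value used by {lvms}):

```
# Thin pool data usage per node; the NearFull alerts fire above 75%.
topolvm_thinpool_data_percent > 75

# Remaining volume group capacity, converted from bytes to GiB.
topolvm_volumegroup_available_bytes / 2^30
```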
 
 [NOTE]
 ====
-Metrics are updated every 10 minutes or when there is a change in the thin pool, such as a new logical volume creation.
+Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.
 ====
 
 [id="lvms-monitoring-using-lvms-alerts_{context}"]
 == Alerts
 
-When the thin pool and volume group are filled up, further operations fail and might lead to data loss.
-{lvms} sends the following alerts about the usage of the thin pool and volume group when utilization crosses a certain value:
+When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.
+{lvms} sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:
 
-.Alerts for Logical Volume Manager cluster in {rh-rhacm}
-[[alerts_for_LVMCluster_in_{rh-rhacm}]]
-[%autowidth,frame="topbot",options="header"]
+.LVM Storage alerts
+[%autowidth,options="header"]
 |===
 |Alert| Description
-|`VolumeGroupUsageAtThresholdNearFull`|This alert is triggered when both the volume group and thin pool utilization cross 75% on nodes. Data deletion or volume group expansion is required.
-|`VolumeGroupUsageAtThresholdCritical`|This alert is triggered when both the volume group and thin pool utilization cross 85% on nodes. `VolumeGroup` is critically full. Data deletion or volume group expansion is required.
-|`ThinPoolDataUsageAtThresholdNearFull`|This alert is triggered when the thin pool data utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required.
-|`ThinPoolDataUsageAtThresholdCritical`|This alert is triggered when the thin pool data utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required.
-|`ThinPoolMetaDataUsageAtThresholdNearFull`|This alert is triggered when the thin pool metadata utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required.
-|`ThinPoolMetaDataUsageAtThresholdCritical`|This alert is triggered when the thin pool metadata utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required.
+|`VolumeGroupUsageAtThresholdNearFull`|This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required.
+|`VolumeGroupUsageAtThresholdCritical`|This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required.
+|`ThinPoolDataUsageAtThresholdNearFull`|This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
+|`ThinPoolDataUsageAtThresholdCritical`|This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.
+|`ThinPoolMetaDataUsageAtThresholdNearFull`|This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required.
+|`ThinPoolMetaDataUsageAtThresholdCritical`|This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required.
 |===
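The thresholds in the table can also be illustrated outside the cluster. The following minimal sketch (not part of {lvms}) classifies sample thin pool usage figures the way the alerts do, assuming `lvs`-style `name percent` output:

```shell
# Sketch: classify thin pools against the alert thresholds
# (above 75% is near-full, above 85% is critical).
# The sample data stands in for real `lvs --noheadings -o lv_name,data_percent` output.
lvs_output="thin-pool-1 78.42
thin-pool-2 91.10
thin-pool-3 40.00"

echo "$lvs_output" | awk '{
  if ($2 > 85)      printf "%s: critical (%.2f%%)\n", $1, $2
  else if ($2 > 75) printf "%s: near-full (%.2f%%)\n", $1, $2
  else              printf "%s: ok (%.2f%%)\n", $1, $2
}'
```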
@@ -74,16 +74,6 @@ include::modules/lvms-adding-a-storage-class.adoc[leveloffset=+1]
 //Provisioning
 include::modules/lvms-provisioning-storage-using-logical-volume-manager-operator.adoc[leveloffset=+1]
 
-//Monitoring
-include::modules/lvms-monitoring-logical-volume-manager-operator.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-
-* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/observability/index[Observability]
-
-* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/observability/index#adding-custom-metrics[Adding custom metrics]
-
 //Scaling
 include::modules/lvms-scaling-storage-of-single-node-open-concept.adoc[leveloffset=+1]
@@ -139,6 +129,9 @@ include::modules/lvms-volume-clones-in-single-node-openshift.adoc[leveloffset=+1
 include::modules/lvms-creating-volume-clones-in-single-node-openshift.adoc[leveloffset=+2]
 include::modules/lvms-deleting-cloned-volumes-in-single-node-openshift.adoc[leveloffset=+2]
 
+//Monitoring
+include::modules/lvms-monitoring-logical-volume-manager-operator.adoc[leveloffset=+1]
+
 //Must-gather
 include::modules/lvms-download-log-files-and-diagnostics.adoc[leveloffset=+1]