// Module included in the following assemblies:
//
// * observability/logging/troubleshooting/troubleshooting-logging-alerts.adoc

:_mod-docs-content-type: PROCEDURE
[id="es-node-disk-low-watermark-reached_{context}"]
= Elasticsearch node disk low watermark reached

Elasticsearch does not allocate shards to nodes that reach the low watermark.

include::snippets/es-pod-var-logging.adoc[]

.Procedure

. Identify the node on which Elasticsearch is deployed by running the following command:
+
[source,terminal]
----
$ oc -n openshift-logging get po -o wide
----
. Check if there are unassigned shards by running the following command:
+
[source,terminal]
----
$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
-- es_util --query=_cluster/health?pretty | grep unassigned_shards
----
. If there are unassigned shards, check the disk space on each node by running the following command:
+
[source,terminal]
----
$ for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; \
do echo $pod; oc -n openshift-logging exec -c elasticsearch $pod \
-- df -h /elasticsearch/persistent; done
----
. In the command output, check the `Use%` column to determine the used disk percentage on that node.
+
.Example output
[source,terminal]
----
elasticsearch-cdm-kcrsda6l-1-586cc95d4f-h8zq8
Filesystem Size Used Avail Use% Mounted on
/dev/nvme1n1 19G 522M 19G 3% /elasticsearch/persistent
elasticsearch-cdm-kcrsda6l-2-5b548fc7b-cwwk7
Filesystem Size Used Avail Use% Mounted on
/dev/nvme2n1 19G 522M 19G 3% /elasticsearch/persistent
elasticsearch-cdm-kcrsda6l-3-5dfc884d99-59tjw
Filesystem Size Used Avail Use% Mounted on
/dev/nvme3n1 19G 528M 19G 3% /elasticsearch/persistent
----
+
If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node.
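+
The 85% threshold corresponds to the default Elasticsearch low watermark setting, `cluster.routing.allocation.disk.watermark.low`. To confirm the thresholds that are active in your cluster, one possible sketch is to query the cluster settings API with the same `es_util` helper, assuming the defaults are exposed through `include_defaults=true`:
+
[source,terminal]
----
$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
  -- es_util --query="_cluster/settings?include_defaults=true&pretty" | grep -A 5 watermark
----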
. To check the current `redundancyPolicy`, run the following command:
+
[source,terminal]
----
$ oc -n openshift-logging get es elasticsearch \
-o jsonpath='{.spec.redundancyPolicy}'
----
+
If you are using a `ClusterLogging` resource on your cluster, run the following command:
+
[source,terminal]
----
$ oc -n openshift-logging get cl \
-o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'
----
+
If the cluster `redundancyPolicy` value is higher than the `SingleRedundancy` value, set it to the `SingleRedundancy` value and save this change.
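+
For example, assuming the resource name `elasticsearch` used in the preceding command, you might apply the change with an `oc patch` command such as the following sketch. If a `ClusterLogging` resource manages the log store, set `spec.logStore.elasticsearch.redundancyPolicy` in that resource instead, because the Operator reconciles the `Elasticsearch` resource from it:
+
[source,terminal]
----
$ oc -n openshift-logging patch es elasticsearch --type merge \
  -p '{"spec":{"redundancyPolicy":"SingleRedundancy"}}'
----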
. If the preceding steps do not fix the issue, delete the old indices.
.. Check the status of all indices on Elasticsearch by running the following command:
+
[source,terminal]
----
$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME -- indices
----
.. Identify an old index that can be deleted.
.. Delete the index by running the following command:
+
[source,terminal]
----
$ oc exec -n openshift-logging -c elasticsearch $ES_POD_NAME \
-- es_util --query=<elasticsearch_index_name> -X DELETE
----