mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00
openshift-docs/modules/cluster-logging-elasticsearch-ha.adoc
Prithviraj Patil 116ff6004c Incorrect structure of command in Edit the ClusterLogging custom resource (CR) step under multiple docs.
- The structure of the command is incorrect in the "Edit the ClusterLogging custom resource (CR)" step in multiple documents.
- Here is one of the documentation links:
https://docs.openshift.com/container-platform/4.16/observability/logging/log_storage/logging-config-es-store.html#cluster-logging-elasticsearch-ha_logging-config-es-store
- In Step 1 of the Procedure section, the following command is shown:
~~~
$ oc edit clusterlogging instance
~~~

- However, this is not the correct form.
- The correct way to write `clusterlogging` in the Red Hat standard documentation is `ClusterLogging`.
- We can verify this in other parts of our documentation.
- In addition, Step 1 reads "Edit the ClusterLogging custom resource (CR) in the openshift-logging project:".
- However, the project name is not included in the command itself.
- The project name must be specified when executing that command.

**Reason:**

1. Suppose the user is not currently working in the `openshift-logging` project; running the command as written will fail because the `clusterlogging` resource does not exist in the user's current project.
2. If credentials are shared and two people are using the same cluster at the same time, the second person could switch the current context to a different namespace.

- Hence, it is always safer to run the above command with the project name specified explicitly.
- We need to make these changes in our documentation.
- Here is the correct structure of the command:
--------------------
1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

~~~
$ oc -n openshift-logging edit ClusterLogging instance
~~~
--------------------
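
The points above can be illustrated with a short, hypothetical session (the project name `some-other-project` is only an example, not from the documentation):

~~~
# The unqualified command acts on whatever project is currently selected.
$ oc project some-other-project                         # e.g. a second user switches projects
$ oc edit clusterlogging instance                       # fails: resource not found in the current project
$ oc -n openshift-logging edit ClusterLogging instance  # unambiguous; works regardless of the current project
~~~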

- We need to make these changes in the following modules/sections:
2025-01-16 05:50:38 +00:00


// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-elasticsearch.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-elasticsearch-ha_{context}"]
= Configuring replication policy for the log store

You can define how Elasticsearch shards are replicated across data nodes in the cluster.

.Prerequisites

* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

. Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
+
[source,terminal]
----
$ oc -n openshift-logging edit ClusterLogging instance
----
+
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
....
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      redundancyPolicy: "SingleRedundancy" <1>
----
<1> Specify a redundancy policy for the shards. The change is applied upon saving the changes.
+
* *FullRedundancy*. Elasticsearch fully replicates the primary shards for each index
to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance.
* *MultipleRedundancy*. Elasticsearch fully replicates the primary shards for each index to half of the data nodes.
This provides a good tradeoff between safety and performance.
* *SingleRedundancy*. Elasticsearch makes one copy of the primary shards for each index.
Logs are always available and recoverable as long as at least two data nodes exist.
This provides better performance than MultipleRedundancy when using 5 or more nodes. You cannot
apply this policy to deployments with a single Elasticsearch node.
* *ZeroRedundancy*. Elasticsearch does not make copies of the primary shards.
Logs might be unavailable or lost in the event a node is down or fails.
Use this mode when you are more concerned with performance than safety, or have
implemented your own disk/PVC backup/restore strategy.

[NOTE]
====
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
====
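
As a quick check (a sketch, not part of the official procedure), you can read the field back after saving; the `jsonpath` expression follows the CR layout shown in the example above:

[source,terminal]
----
$ oc -n openshift-logging get ClusterLogging instance -o jsonpath='{.spec.logStore.elasticsearch.redundancyPolicy}'
----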