mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

edits per jcantrill

This commit is contained in:
Michael Burke
2019-04-03 12:04:11 -04:00
parent 2bc120aabf
commit d45d6c7cfe
4 changed files with 20 additions and 14 deletions

View File

@@ -30,9 +30,11 @@ include::modules/efk-logging-fluentd-limits.adoc[leveloffset=+1]
////
4.1
modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]
4.2
modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
////
include::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]

View File

@@ -7,13 +7,13 @@
////
An Elasticsearch index is a collection of primary shards and their corresponding replica
shards. This is how ES implements high availability internally, therefore there
shards. This is how Elasticsearch implements high availability internally, so there
is little need to use hardware-based mirroring RAID variants. RAID 0 can still
be used to increase overall disk performance.
//Following paragraph also in nodes/efk-logging-elasticsearch
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits.
Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add nodes to the
{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
memory setting, though this is not recommended for production deployments.
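For example, the per-node memory request and limit can be set through the Cluster Logging Custom Resource. The following is a minimal sketch, not a complete resource; it assumes the `elasticsearch` node specification accepts a standard Kubernetes `resources` block and uses the 16G default described above:

[source,yaml]
----
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      resources:        # assumed standard Kubernetes resources block
        limits:
          memory: 16Gi  # 16G memory limit, as described above
        requests:
          memory: 16Gi  # 16G memory request, as described above
----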
@@ -89,9 +89,8 @@ absolute storage consumption around 50% and below 70% at all times]. This
helps to avoid Elasticsearch becoming unresponsive during large merge
operations.
By default, at 85% ES stops allocating new data to the node, at 90% ES starts de-allocating
existing shards from that node to other nodes if possible. But if no nodes have
free capacity below 85% then ES will effectively reject creating new indices
By default, at 85% Elasticsearch stops allocating new data to the node, and at 90% Elasticsearch attempts to relocate
existing shards from that node to other nodes if possible. But if no nodes have free capacity below 85%, Elasticsearch effectively rejects creating new indices
and becomes RED.
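These thresholds correspond to the upstream Elasticsearch disk-based shard allocation watermarks. They are not fields of the Cluster Logging Custom Resource; the following `elasticsearch.yml` sketch shows only the upstream settings and their default values, to illustrate the 85% and 90% behavior described above:

[source,yaml]
----
# Upstream Elasticsearch disk allocation watermarks (defaults shown).
# For reference only; not part of the Cluster Logging Custom Resource.
cluster.routing.allocation.disk.watermark.low: "85%"   # stop allocating new shards to the node
cluster.routing.allocation.disk.watermark.high: "90%"  # try to relocate existing shards off the node
----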
[NOTE]

View File

@@ -125,7 +125,9 @@ spec:
type: "elasticsearch" <3>
elasticsearch:
nodeCount: 3
storage: {}
storage:
storageClassName: gp2
size: 200G
redundancyPolicy: "SingleRedundancy"
visualization:
type: "kibana" <4>

View File

@@ -5,14 +5,14 @@
[id="efk-logging-deploying-about-{context}"]
= About deploying and configuring cluster logging
{product-title} cluster logging is designed to be used with the default configuration that should support most {product-title} environments.
{product-title} cluster logging is designed to be used with the default configuration, which is tuned for small to medium-sized {product-title} clusters.
The installation instructions that follow include a template Cluster Logging Custom Resource, which you can use to configure your cluster logging
deployment.
The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which you can use to create a cluster logging instance
and configure your cluster logging deployment.
If you want to use the default cluster logging install, you can use the template directly.
If you want to use the default cluster logging install, you can use the sample CR directly.
If you want to customize your deployment, make changes to that template as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installtion. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
If you want to customize your deployment, make changes to the sample CR as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installation. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.
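For orientation, a minimal cluster logging instance might look like the following sketch. It is based on the sample fields shown in this topic and omits the curation and collection components for brevity:

[source,yaml]
----
apiVersion: "logging.openshift.io/v1alpha1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage: {}   # empty block; see the persistent storage example later in this topic
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
----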
[IMPORTANT]
====
@@ -105,7 +105,7 @@ You can configure a persistent storage class and size for the Elasticsearch clus
----
This example specifies that each data node in the cluster will be bound to a `PersistentVolumeClaim` that
requests "200G" of "gp2" storage. Additionally, each primary shard will be backed by a single replica.
requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.
[NOTE]
====
@@ -129,6 +129,7 @@ You can set the policy that defines how Elasticsearch shards are replicated acro
* `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
* `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
////
Log collectors::
You can select which log collector is deployed as a Daemonset to each node in the {product-title} cluster, either:
@@ -149,6 +150,7 @@ You can select which log collector is deployed as a Daemonset to each node in th
memory:
type: "fluentd"
----
////
Curator schedule::
You specify the schedule for Curator in the link:https://en.wikipedia.org/wiki/Cron[cron format].
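For example, the following sketch runs Curator at 3:30 AM every day. The `curation` stanza assumes the same Custom Resource structure used by the other components in this topic:

[source,yaml]
----
spec:
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"   # cron format: minute hour day-of-month month day-of-week
----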
@@ -172,7 +174,8 @@ The following is an example of a Cluster Logging Custom Resource modified using
apiVersion: "logging.openshift.io/v1alpha1"
kind: "ClusterLogging"
metadata:
name: "customresourcefluentd"
name: "instance"
namespace: "openshift-logging"
spec:
managementState: "Managed"
logStore: