// Module included in the following assemblies:
//
// * logging/cluster-logging-deploying.adoc
[id="cluster-logging-deploy-cli_{context}"]
= Installing cluster logging using the CLI

You can use the {product-title} CLI to install the Elasticsearch and Cluster Logging operators.

.Prerequisites
* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
requires its own storage volume.
+
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
memory setting, though this is not recommended for production deployments.
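+
For example, you can list the storage classes that are available in the cluster for Elasticsearch storage before you begin:
+
[source,terminal]
----
$ oc get storageclasses
----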

.Procedure
To install the Elasticsearch Operator and Cluster Logging Operator using the CLI:

. Create a Namespace for the Elasticsearch Operator.
.. Create a Namespace object YAML file (for example, `eo-namespace.yaml`) for the Elasticsearch Operator:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat <1>
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true" <2>
----
<1> You must specify the `openshift-operators-redhat` Namespace. To prevent
possible conflicts with metrics, you should configure the Prometheus Cluster
Monitoring stack to scrape metrics from the `openshift-operators-redhat`
Namespace and not the `openshift-operators` Namespace. The `openshift-operators`
Namespace might contain Community Operators, which are untrusted and could publish
a metric with the same name as an {product-title} metric, which would cause
conflicts.
<2> You must specify this label as shown to ensure that cluster monitoring
scrapes the `openshift-operators-redhat` Namespace.
.. Create the Namespace:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f eo-namespace.yaml
----
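+
Optionally, verify that the Namespace exists and carries the cluster monitoring label:
+
[source,terminal]
----
$ oc get namespace openshift-operators-redhat --show-labels
----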
. Create a Namespace for the Cluster Logging Operator:
.. Create a Namespace object YAML file (for example, `clo-namespace.yaml`) for the Cluster Logging Operator:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
----
.. Create the Namespace:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f clo-namespace.yaml
----
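+
Optionally, confirm that both Namespaces exist before you continue:
+
[source,terminal]
----
$ oc get namespaces openshift-operators-redhat openshift-logging
----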
. Install the Elasticsearch Operator by creating the following objects:
.. Create an OperatorGroup object YAML file (for example, `eo-og.yaml`) for the Elasticsearch Operator:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat <1>
spec: {}
----
<1> You must specify the `openshift-operators-redhat` Namespace.
.. Create the OperatorGroup object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f eo-og.yaml
----
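+
Optionally, verify that the OperatorGroup was created in the `openshift-operators-redhat` Namespace:
+
[source,terminal]
----
$ oc get operatorgroups -n openshift-operators-redhat
----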
.. Create a Subscription object YAML file (for example, `eo-sub.yaml`) to
subscribe a Namespace to the Elasticsearch Operator.
+
.Example Subscription
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: "elasticsearch-operator"
  namespace: "openshift-operators-redhat" <1>
spec:
  channel: "4.5" <2>
  installPlanApproval: "Automatic"
  source: "redhat-operators" <3>
  sourceNamespace: "openshift-marketplace"
  name: "elasticsearch-operator"
----
<1> You must specify the `openshift-operators-redhat` Namespace.
<2> Specify `4.5` as the channel.
<3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
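+
For example, on a restricted network cluster you can list the available CatalogSource objects to find the name to use in the `source` field. The exact names depend on how you mirrored the Operator catalogs:
+
[source,terminal]
----
$ oc get catalogsources -n openshift-marketplace
----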
.. Create the Subscription object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f eo-sub.yaml
----
+
The Elasticsearch Operator is installed to the `openshift-operators-redhat` Namespace and copied to each project in the cluster.
.. Verify the Operator installation:
+
[source,terminal]
----
$ oc get csv --all-namespaces
----
+
.Example output
[source,terminal]
----
NAMESPACE                            NAME                                            DISPLAY                  VERSION                  REPLACES   PHASE
default                              elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
kube-node-lease                      elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
kube-public                          elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
kube-system                          elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
openshift-apiserver-operator         elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
openshift-apiserver                  elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
openshift-authentication-operator    elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
openshift-authentication             elasticsearch-operator.4.5.0-202007012112.p0   Elasticsearch Operator   4.5.0-202007012112.p0               Succeeded
...
----
+
There should be an Elasticsearch Operator in each Namespace. The version number might differ from the one shown.
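+
You can also confirm that the Elasticsearch Operator pod is running in its own Namespace:
+
[source,terminal]
----
$ oc get pods -n openshift-operators-redhat
----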
. Install the Cluster Logging Operator by creating the following objects:
.. Create an OperatorGroup object YAML file (for example, `clo-og.yaml`) for the Cluster Logging Operator:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging <1>
spec:
  targetNamespaces:
  - openshift-logging <1>
----
<1> You must specify the `openshift-logging` Namespace.
.. Create the OperatorGroup object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f clo-og.yaml
----
.. Create a Subscription object YAML file (for example, `clo-sub.yaml`) to
subscribe a Namespace to the Cluster Logging Operator.
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging <1>
spec:
  channel: "4.5" <2>
  name: cluster-logging
  source: redhat-operators <3>
  sourceNamespace: openshift-marketplace
----
<1> You must specify the `openshift-logging` Namespace.
<2> Specify `4.5` as the channel.
<3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
.. Create the Subscription object:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f clo-sub.yaml
----
+
The Cluster Logging Operator is installed to the `openshift-logging` Namespace.
.. Verify the Operator installation.
+
There should be a Cluster Logging Operator in the `openshift-logging` Namespace. The version number might differ from the one shown.
+
[source,terminal]
----
$ oc get csv -n openshift-logging
----
+
.Example output
[source,terminal]
----
NAMESPACE           NAME                                    DISPLAY           VERSION                 REPLACES   PHASE
...
openshift-logging   clusterlogging.4.5.0-202007012112.p0    Cluster Logging   4.5.0-202007012112.p0              Succeeded
...
----
. Create a Cluster Logging instance:
.. Create an instance object YAML file (for example, `clo-instance.yaml`) for the Cluster Logging Operator:
+
[NOTE]
====
This default Cluster Logging configuration should support a wide array of environments. Review the topics on tuning and
configuring the Cluster Logging components for information on modifications you can make to your Cluster Logging cluster.
====
+
ifdef::openshift-dedicated[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
managementState: "Managed"
logStore:
type: "elasticsearch"
retentionPolicy:
application:
maxAge: 1d
infra:
maxAge: 7d
audit:
maxAge: 7d
elasticsearch:
nodeCount: 3
storage:
storageClassName: gp2
size: "200Gi"
redundancyPolicy: "SingleRedundancy"
nodeSelector:
node-role.kubernetes.io/worker: ""
resources:
request:
memory: 8G
visualization:
type: "kibana"
kibana:
replicas: 1
nodeSelector:
node-role.kubernetes.io/worker: ""
curation:
type: "curator"
curator:
schedule: "30 3 * * *"
nodeSelector:
node-role.kubernetes.io/worker: ""
collection:
logs:
type: "fluentd"
fluentd: {}
nodeSelector:
node-role.kubernetes.io/worker: ""
----
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance" <1>
namespace: "openshift-logging"
spec:
managementState: "Managed" <2>
logStore:
type: "elasticsearch" <3>
retentionPolicy: <4>
application:
maxAge: 1d
infra:
maxAge: 7d
audit:
maxAge: 7d
elasticsearch:
nodeCount: 3 <5>
storage:
storageClassName: "<storage-class-name>" <6>
size: 200G
redundancyPolicy: "SingleRedundancy"
visualization:
type: "kibana" <7>
kibana:
replicas: 1
curation:
type: "curator"
curator:
schedule: "30 3 * * *" <8>
collection:
logs:
type: "fluentd" <9>
fluentd: {}
----
<1> The name must be `instance`.
<2> The cluster logging management state. In some cases, if you change the cluster logging defaults, you must set this to `Unmanaged`.
However, an unmanaged deployment does not receive updates until cluster logging is placed back into a managed state. Placing a deployment back into a managed state might revert any modifications you made.
<3> Settings for configuring Elasticsearch. Using the Custom Resource (CR), you can configure shard replication policy and persistent storage.
<4> Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), days(d), hours(h/H), minutes(m), and seconds(s). For example, `7d` for seven days. Logs older than the `maxAge` are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source.
<5> Specify the number of Elasticsearch nodes. See the note that follows this list.
<6> Enter the name of an existing StorageClass for Elasticsearch storage. For best performance, specify a StorageClass that allocates block storage.
<7> Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana pods. For more information, see *Configuring Kibana*.
<8> Settings for configuring the Curator schedule. Curator is used to remove data that is in the Elasticsearch index format prior to {product-title} 4.5 and will be removed in a later release.
<9> Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see *Configuring Fluentd*.
+
[NOTE]
====
The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded.

For example, if `nodeCount=4`, the following nodes are created:

[source,terminal]
----
$ oc get deployment
----
.Example output
[source,terminal]
----
cluster-logging-operator       1/1     1            1           18h
elasticsearch-cd-x6kdekli-1    1/1     1            0           6m54s
elasticsearch-cdm-x6kdekli-1   1/1     1            1           18h
elasticsearch-cdm-x6kdekli-2   1/1     1            0           6m49s
elasticsearch-cdm-x6kdekli-3   1/1     1            0           6m44s
----
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
====
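+
Optionally, you can run a client-side dry run to catch YAML syntax errors in the file before you create the object. The `--dry-run=client` option requires a recent `oc` client:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml --dry-run=client
----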
.. Create the instance:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
[source,terminal]
----
$ oc create -f clo-instance.yaml
----
+
This creates the Cluster Logging components, the Elasticsearch Custom Resource and components, and the Kibana interface.
. Verify the installation by listing the Pods in the *openshift-logging* project.
+
You should see several Pods for Cluster Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+
[source,terminal]
----
$ oc get pods -n openshift-logging
----
+
.Example output
[source,terminal]
----
NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-66f77ffccb-ppzbg       1/1     Running   0          7m
elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp    2/2     Running   0          2m40s
elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc   2/2     Running   0          2m36s
elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2   2/2     Running   0          2m4s
fluentd-587vb                                   1/1     Running   0          2m26s
fluentd-7mpb9                                   1/1     Running   0          2m30s
fluentd-flm6j                                   1/1     Running   0          2m33s
fluentd-gn4rn                                   1/1     Running   0          2m26s
fluentd-nlgb6                                   1/1     Running   0          2m30s
fluentd-snpkt                                   1/1     Running   0          2m28s
kibana-d6d5668c5-rppqm                          2/2     Running   0          2m39s
----
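+
Optionally, you can also inspect the status section of the ClusterLogging Custom Resource, which reports the state of each component:
+
[source,terminal]
----
$ oc get clusterlogging instance -n openshift-logging -o yaml
----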