// Module included in the following assemblies:
//
// * logging/cluster-logging-deploying.adoc

[id="cluster-logging-deploy-cli_{context}"]
= Installing cluster logging using the CLI

You can use the {product-title} CLI to install the Elasticsearch and Cluster Logging operators.

.Prerequisites

* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
requires its own storage volume.
+
Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
memory setting, though this is not recommended for production deployments.
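+
To confirm before you install that a storage class exists and that your nodes have enough allocatable memory, you can run checks similar to the following. The commands are illustrative; inspect whichever columns matter for your environment:
+
----
$ oc get storageclass

$ oc get nodes -o custom-columns=NAME:.metadata.name,ALLOCATABLE_MEMORY:.status.allocatable.memory
----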

.Procedure

To install the Elasticsearch Operator and Cluster Logging Operator using the CLI:

. Create a Namespace for the Elasticsearch Operator:

.. Create a Namespace object YAML file (for example, `eo-namespace.yaml`) for the Elasticsearch Operator:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat <1>
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true" <2>
----
<1> You must specify the `openshift-operators-redhat` Namespace. To prevent
possible conflicts with metrics, you should configure the Prometheus Cluster
Monitoring stack to scrape metrics from the `openshift-operators-redhat`
Namespace and not the `openshift-operators` Namespace. The `openshift-operators`
Namespace might contain Community Operators, which are untrusted and could publish
a metric with the same name as an {product-title} metric, which would cause
conflicts.
<2> You must specify this label as shown to ensure that cluster monitoring
scrapes the `openshift-operators-redhat` Namespace.

.. Create the Namespace:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f eo-namespace.yaml
----

. Create a Namespace for the Cluster Logging Operator:

.. Create a Namespace object YAML file (for example, `clo-namespace.yaml`) for the Cluster Logging Operator:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
----

.. Create the Namespace:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f clo-namespace.yaml
----

. Install the Elasticsearch Operator by creating the following objects:

.. Create an OperatorGroup object YAML file (for example, `eo-og.yaml`) for the Elasticsearch Operator:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat <1>
spec: {}
----
<1> You must specify the `openshift-operators-redhat` Namespace.

.. Create the OperatorGroup object:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f eo-og.yaml
----

.. Create a Subscription object YAML file (for example, `eo-sub.yaml`) to
subscribe a Namespace to the Elasticsearch Operator.
+
.Example Subscription
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: "elasticsearch-operator"
  namespace: "openshift-operators-redhat" <1>
spec:
  channel: "4.4" <2>
  installPlanApproval: "Automatic"
  source: "redhat-operators" <3>
  sourceNamespace: "openshift-marketplace"
  name: "elasticsearch-operator"
----
<1> You must specify the `openshift-operators-redhat` Namespace.
<2> Specify `4.4` as the channel.
<3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
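+
On a restricted network, only the `source` field changes: it points at your own catalog instead of `redhat-operators`. The following is a sketch only; the CatalogSource name `my-operator-catalog` is a placeholder for the object you created:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: "elasticsearch-operator"
  namespace: "openshift-operators-redhat"
spec:
  channel: "4.4"
  installPlanApproval: "Automatic"
  source: "my-operator-catalog"
  sourceNamespace: "openshift-marketplace"
  name: "elasticsearch-operator"
----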

.. Create the Subscription object:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f eo-sub.yaml
----
+
The Elasticsearch Operator is installed to the `openshift-operators-redhat` Namespace and copied to each project in the cluster.

.. Verify the Operator installation:
+
----
$ oc get csv --all-namespaces

NAMESPACE                           NAME                                        DISPLAY                  VERSION              REPLACES   PHASE
default                             elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
kube-node-lease                     elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
kube-public                         elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
kube-system                         elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
openshift-apiserver-operator        elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
openshift-apiserver                 elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
openshift-authentication-operator   elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
openshift-authentication            elasticsearch-operator.4.4.0-202004222248   Elasticsearch Operator   4.4.0-202004222248              Succeeded
...
----
+
There should be an Elasticsearch Operator in each Namespace. The version number might be different than shown.
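+
To check a single Namespace rather than the whole cluster, you can scope the same query; for example:
+
----
$ oc get csv -n openshift-operators-redhat
----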

. Install the Cluster Logging Operator by creating the following objects:

.. Create an OperatorGroup object YAML file (for example, `clo-og.yaml`) for the Cluster Logging Operator:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging <1>
spec:
  targetNamespaces:
  - openshift-logging <1>
----
<1> You must specify the `openshift-logging` Namespace.

.. Create the OperatorGroup object:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f clo-og.yaml
----

.. Create a Subscription object YAML file (for example, `clo-sub.yaml`) to
subscribe a Namespace to the Cluster Logging Operator.
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging <1>
spec:
  channel: "4.4" <2>
  name: cluster-logging
  source: redhat-operators <3>
  sourceNamespace: openshift-marketplace
----
<1> You must specify the `openshift-logging` Namespace.
<2> Specify `4.4` as the channel.
<3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).

.. Create the Subscription object:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f clo-sub.yaml
----
+
The Cluster Logging Operator is installed to the `openshift-logging` Namespace.

.. Verify the Operator installation.
+
There should be a Cluster Logging Operator in the `openshift-logging` Namespace. The version number might be different than shown.
+
----
$ oc get csv -n openshift-logging

NAMESPACE           NAME                                DISPLAY           VERSION              REPLACES   PHASE
...
openshift-logging   clusterlogging.4.4.0-202004222248   Cluster Logging   4.4.0-202004222248              Succeeded
...
----

. Create a Cluster Logging instance:

.. Create an instance object YAML file (for example, `clo-instance.yaml`) for the Cluster Logging Operator:
+
[NOTE]
====
This default Cluster Logging configuration should support a wide array of environments. Review the topics on tuning and
configuring the Cluster Logging components for information on modifications you can make to your Cluster Logging cluster.
====
+
ifdef::openshift-dedicated[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "gp2"
        size: "200Gi"
      redundancyPolicy: "SingleRedundancy"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      resources:
        requests:
          memory: 8G
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  collection:
    logs:
      type: "fluentd"
      fluentd:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
----
endif::[]

ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance" <1>
  namespace: "openshift-logging"
spec:
  managementState: "Managed" <2>
  logStore:
    type: "elasticsearch" <3>
    elasticsearch:
      nodeCount: 3 <4>
      storage:
        storageClassName: gp2 <5>
        size: 200G
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana" <6>
    kibana:
      replicas: 1
  curation:
    type: "curator" <7>
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd" <8>
      fluentd: {}
----
<1> The name must be `instance`.
<2> The Cluster Logging management state. In most cases, if you change the Cluster Logging defaults, you must set this to `Unmanaged`.
However, an unmanaged deployment does not receive updates until Cluster Logging is placed back into the `Managed` state.
<3> Settings for configuring Elasticsearch. Using the Custom Resource (CR), you can configure shard replication policy and persistent storage.
<4> Specify the number of Elasticsearch nodes. See the note that follows this list.
<5> Specify that each Elasticsearch node in the cluster is bound to a Persistent Volume Claim.
<6> Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see *Configuring Kibana*.
<7> Settings for configuring Curator. Using the CR, you can set the Curator schedule. For more information, see *Configuring Curator*.
<8> Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see *Configuring Fluentd*.
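+
As a sketch of how you might later switch the management state described in callout `<2>`, you can apply a merge patch to the CR. This command is illustrative rather than part of the procedure:
+
----
$ oc patch clusterlogging instance -n openshift-logging \
    --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
----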

endif::[]
+
[NOTE]
====
The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three master-eligible Elasticsearch nodes with the master, client, and data roles. The additional Elasticsearch nodes are created as data-only nodes, using the client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more data nodes if the current nodes are overloaded.

For example, if `nodeCount=4`, the following nodes are created:

----
$ oc get deployment

cluster-logging-operator       1/1     1            1           18h
elasticsearch-cd-x6kdekli-1    1/1     1            0           6m54s
elasticsearch-cdm-x6kdekli-1   1/1     1            1           18h
elasticsearch-cdm-x6kdekli-2   1/1     1            0           6m49s
elasticsearch-cdm-x6kdekli-3   1/1     1            0           6m44s
----

The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
====

.. Create the instance:
+
----
$ oc create -f <file-name>.yaml
----
+
For example:
+
----
$ oc create -f clo-instance.yaml
----

. Verify the installation by listing the Pods in the *openshift-logging* project.
+
You should see several Pods for Cluster Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+
----
$ oc get pods -n openshift-logging

NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-66f77ffccb-ppzbg       1/1     Running   0          7m
elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp    2/2     Running   0          2m40s
elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc   2/2     Running   0          2m36s
elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2   2/2     Running   0          2m4s
fluentd-587vb                                   1/1     Running   0          2m26s
fluentd-7mpb9                                   1/1     Running   0          2m30s
fluentd-flm6j                                   1/1     Running   0          2m33s
fluentd-gn4rn                                   1/1     Running   0          2m26s
fluentd-nlgb6                                   1/1     Running   0          2m30s
fluentd-snpkt                                   1/1     Running   0          2m28s
kibana-d6d5668c5-rppqm                          2/2     Running   0          2m39s
----
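+
Optionally, once the Elasticsearch Pods are running, you can check Elasticsearch cluster health from inside one of them. This sketch assumes the `es_util` utility is available in the Elasticsearch container image; the Pod name is taken from the example output above, so substitute one from your own cluster:
+
----
$ oc exec -n openshift-logging -c elasticsearch \
    elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp -- es_util --query=_cluster/health?pretty
----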