// Module included in the following assemblies:
//
// * logging/dedicated-cluster-deploying.adoc

[id="dedicated-cluster-install-deploy"]
= Installing the Cluster Logging and Elasticsearch Operators

You can use the {product-title} console to install cluster logging by deploying instances of
the Cluster Logging and Elasticsearch Operators. The Cluster Logging Operator
creates and manages the components of the logging stack. The Elasticsearch Operator
creates and manages the Elasticsearch cluster used by cluster logging.

[NOTE]
====
The {product-title} cluster logging solution requires that you install both the
Cluster Logging Operator and the Elasticsearch Operator. When you deploy an instance
of the Cluster Logging Operator, it also deploys an instance of the Elasticsearch
Operator.
====

Your OpenShift Dedicated cluster includes 600 GiB of persistent storage that is
exclusively available for deploying Elasticsearch for cluster logging.
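
The example Cluster Logging custom resource later in this procedure requests this storage
through the `gp2` storage class. If you want to confirm which storage classes are available
in your cluster before you deploy, you can list them from the CLI, for example:

[source,terminal]
----
$ oc get storageclass
----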

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs
8G of memory for both memory requests and limits. Each Elasticsearch node can
operate with a lower memory setting, though this is not recommended for
production deployments.
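
Because each Elasticsearch node is scheduled onto a worker node, you can check that your
worker nodes have enough allocatable memory before you deploy. The node name in the second
command is a placeholder; substitute a name returned by the first command and review the
`Allocatable` section of the output:

[source,terminal]
----
$ oc get nodes -l node-role.kubernetes.io/worker

$ oc describe node <node_name>
----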

.Procedure

. Install the Elasticsearch Operator from the OperatorHub:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Choose *Elasticsearch* from the list of available Operators, and click *Install*.

.. On the *Install Operator* page, under *A specific namespace on the cluster*, select *openshift-logging*.
Then, click *Install*.

. Install the Cluster Logging Operator from the OperatorHub:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Choose *Cluster Logging* from the list of available Operators, and click *Install*.

.. On the *Install Operator* page, under *A specific namespace on the cluster*, select *openshift-logging*.
Then, click *Install*.
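+
Installing an Operator from the OperatorHub creates a Subscription object in the selected
namespace. As an optional check from the CLI, you can confirm that Subscriptions now exist
for both Operators, for example:
+
[source,terminal]
----
$ oc get subscriptions -n openshift-logging
----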

. Verify the operator installations:

.. Switch to the *Operators* -> *Installed Operators* page.

.. Ensure that the *Cluster Logging* and *Elasticsearch* Operators are listed in the
*openshift-logging* project with a *Status* of *InstallSucceeded*.
+
[NOTE]
====
During installation, an operator might display a *Failed* status. If the operator then installs with an *InstallSucceeded* message,
you can safely ignore the *Failed* message.
====
+
If either operator does not appear as installed, troubleshoot further:
+
* Switch to the *Operators* -> *Installed Operators* page and inspect
the *Status* column for any errors or failures.
* Switch to the *Workloads* -> *Pods* page and check the logs in each Pod in the
`openshift-logging` project that is reporting issues. You can perform a similar check from the CLI, as shown in the following example.
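+
The following commands list the Operator ClusterServiceVersions and the Pods in the
`openshift-logging` project, and then show the logs of a single Pod. Replace `<pod_name>`
with one of the Pod names returned by the second command:
+
[source,terminal]
----
$ oc get csv -n openshift-logging

$ oc get pods -n openshift-logging

$ oc logs <pod_name> -n openshift-logging
----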

. Create and deploy a cluster logging instance:

.. Switch to the *Operators* -> *Installed Operators* page.

.. Click the installed *Cluster Logging* Operator.

.. Under the *Details* tab, in the *Provided APIs* section, in the
*Cluster Logging* box, click *Create Instance*. Select the *YAML View*
radio button and paste the following YAML definition into the window
that displays.
+
.Cluster Logging Custom Resource (CR)
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2
        size: "200Gi"
      redundancyPolicy: "SingleRedundancy"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      resources:
        requests:
          memory: 8G
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  curation:
    type: "curator"
    curator:
      schedule: "15 * * * *"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
      nodeSelector:
        node-role.kubernetes.io/worker: ""
----

.. Click *Create* to deploy the logging instance, which creates the Cluster
Logging and Elasticsearch Custom Resources.
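+
Alternatively, if you prefer to work from the CLI, you can save the same YAML definition to a
file and create the resource with the `oc` client. The file name here is only an example:
+
[source,terminal]
----
$ oc create -f clusterlogging.yaml
----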

. Verify that the pods for the Cluster Logging instance deployed:

.. Switch to the *Workloads* -> *Pods* page.

.. Select the *openshift-logging* project.
+
You should see several pods for cluster logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+
* cluster-logging-operator-cb795f8dc-xkckc
* elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz
* elasticsearch-cdm-b3nqzchd-2-6657f4-wtprv
* elasticsearch-cdm-b3nqzchd-3-588c65-clg7g
* fluentd-2c7dg
* fluentd-9z7kk
* fluentd-br7r2
* fluentd-fn2sb
* fluentd-pb2f8
* fluentd-zqgqx
* kibana-7fb4fd4cc9-bvt4p
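+
You can also list these pods from the CLI, for example:
+
[source,terminal]
----
$ oc get pods -n openshift-logging
----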

. Access the Cluster Logging interface, *Kibana*, from the *Monitoring* ->
*Logging* page of the {product-title} web console.
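+
If you prefer the CLI, you can look up the externally accessible Kibana URL from the routes
in the `openshift-logging` project, for example:
+
[source,terminal]
----
$ oc get routes -n openshift-logging
----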