// Module included in the following assemblies:
//
// * logging/dedicated-cluster-deploying.adoc

:_mod-docs-content-type: PROCEDURE
[id="dedicated-cluster-install-deploy_{context}"]
= Installing OpenShift Logging and OpenShift Elasticsearch Operators

You can use the {product-title} console to install OpenShift Logging by deploying instances of
the OpenShift Logging and OpenShift Elasticsearch Operators. The Red Hat OpenShift Logging Operator
creates and manages the components of the logging stack. The OpenShift Elasticsearch Operator
creates and manages the Elasticsearch cluster used by OpenShift Logging.

[NOTE]
====
The OpenShift Logging solution requires that you install both the
Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator. When you deploy an instance
of the Red Hat OpenShift Logging Operator, it also deploys an instance of the OpenShift Elasticsearch
Operator.
====

Your OpenShift Dedicated cluster includes 600 GiB of persistent storage that is
exclusively available for deploying Elasticsearch for OpenShift Logging.

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs
16Gi of memory for both memory requests and limits. Each Elasticsearch node can
operate with a lower memory setting, though this is not recommended for
production deployments.

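The `ClusterLogging` custom resource that you create later in this procedure reflects this sizing: three Elasticsearch nodes, each with 200Gi of storage and 16Gi of memory for both requests and limits. The following excerpt of that resource (not a complete definition) shows how the sizing maps to configuration:

[source,yaml]
----
# Excerpt of the ClusterLogging CR shown in full later in this procedure.
# 3 nodes x 200Gi of storage = 600Gi, which matches the reserved 600 GiB.
elasticsearch:
  nodeCount: 3
  storage:
    storageClassName: gp2
    size: "200Gi"
  resources:
    limits:
      memory: "16Gi"   # memory limit for each Elasticsearch node
    requests:
      memory: "16Gi"   # memory request for each Elasticsearch node
----
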
.Procedure

. Install the OpenShift Elasticsearch Operator from the OperatorHub:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Choose *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*.

.. On the *Install Operator* page, under *A specific namespace on the cluster*, select *openshift-logging*.
Then, click *Install*.
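+
If you prefer working from the CLI, the following manifests are a minimal sketch of one possible alternative to the console steps above, using the standard OLM `OperatorGroup` and `Subscription` objects. The channel and package names are assumptions; confirm them in OperatorHub before applying:
+
[source,yaml]
----
# Hypothetical CLI alternative to this step; verify channel and package names in OperatorHub.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging             # corresponds to "A specific namespace on the cluster"
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-logging
spec:
  channel: stable                 # assumed channel name
  name: elasticsearch-operator    # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----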
. Install the Red Hat OpenShift Logging Operator from the OperatorHub:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.

.. On the *Install Operator* page, under *A specific namespace on the cluster*, select *openshift-logging*.
Then, click *Install*.
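+
As in the previous step, a CLI alternative is sketched below. The `OperatorGroup` from the earlier sketch already targets `openshift-logging`, so only a `Subscription` is shown; the channel and package names are assumptions to confirm in OperatorHub:
+
[source,yaml]
----
# Hypothetical CLI alternative to this step; verify channel and package names in OperatorHub.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable           # assumed channel name
  name: cluster-logging     # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----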

. Verify the operator installations:

.. Switch to the *Operators* -> *Installed Operators* page.

.. Ensure that the *Red Hat OpenShift Logging* and *OpenShift Elasticsearch* Operators are listed in the
*openshift-logging* project with a *Status* of *InstallSucceeded*.
+
[NOTE]
====
During installation, an operator might display a *Failed* status. If the operator then installs with an *InstallSucceeded* message,
you can safely ignore the *Failed* message.
====
+
If either operator does not appear as installed, troubleshoot further:
+
* Switch to the *Operators* -> *Installed Operators* page and inspect
the *Status* column for any errors or failures.
* Switch to the *Workloads* -> *Pods* page and check the logs of any pods in the
`openshift-logging` project that are reporting issues.

. Create and deploy an OpenShift Logging instance:

.. Switch to the *Operators* -> *Installed Operators* page.

.. Click the installed *Red Hat OpenShift Logging* Operator.

.. Under the *Details* tab, in the *Provided APIs* section, in the
*Cluster Logging* box, click *Create Instance*.

.. Select *YAML View* and paste the following YAML definition into the window
that displays.
+
.Cluster Logging custom resource (CR)
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2
        size: "200Gi"
      redundancyPolicy: "SingleRedundancy"
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      resources:
        limits:
          memory: "16Gi"
        requests:
          memory: "16Gi"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
      nodeSelector:
        node-role.kubernetes.io/worker: ""
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
      nodeSelector:
        node-role.kubernetes.io/worker: ""
----

.. Click *Create* to deploy the logging instance, which creates the Cluster
Logging and Elasticsearch custom resources.

. Verify that the pods for the OpenShift Logging instance are deployed:

.. Switch to the *Workloads* -> *Pods* page.

.. Select the *openshift-logging* project.
+
You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana, similar to the following list:
+
* cluster-logging-operator-cb795f8dc-xkckc
* elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz
* elasticsearch-cdm-b3nqzchd-2-6657f4-wtprv
* elasticsearch-cdm-b3nqzchd-3-588c65-clg7g
* fluentd-2c7dg
* fluentd-9z7kk
* fluentd-br7r2
* fluentd-fn2sb
* fluentd-pb2f8
* fluentd-zqgqx
* kibana-7fb4fd4cc9-bvt4p

. Access the OpenShift Logging interface, *Kibana*, from the *Observe* ->
*Logging* page of the {product-title} web console.