mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

dedicated logging corrections

This commit is contained in:
Andrew Taylor
2019-12-02 16:47:47 -05:00
parent 9b25f1f054
commit 670ef75cdc
5 changed files with 69 additions and 45 deletions


@@ -872,14 +872,12 @@ Topics:
Distros: openshift-dedicated
- Name: Upgrading cluster logging
File: cluster-logging-upgrading
- Name: Configuring cluster logging
File: dedicated-cluster-logging
Distros: openshift-dedicated
- Name: Deploying and Configuring the Event Router
File: cluster-logging-eventrouter
Distros: openshift-enterprise,openshift-origin
- Name: Viewing cluster logs
File: cluster-logging-viewing
Distros: openshift-enterprise,openshift-origin
- Name: Configuring your cluster logging deployment
Dir: config
Distros: openshift-enterprise,openshift-origin
@@ -910,6 +908,7 @@ Topics:
Distros: openshift-enterprise,openshift-origin
- Name: Moving the cluster logging resources with node selectors
File: cluster-logging-moving-nodes
Distros: openshift-enterprise,openshift-origin
- Name: Manually rolling out Elasticsearch
File: cluster-logging-manual-rollout
Distros: openshift-enterprise,openshift-origin


@@ -6,8 +6,29 @@ include::modules/common-attributes.adoc[]
toc::[]
ifdef::openshift-enterprise,openshift-origin[]
As an {product-title} cluster administrator, you can deploy cluster logging to
aggregate logs for a range of {product-title} services.
endif::[]

ifdef::openshift-dedicated[]
As an {product-title} administrator, you can deploy cluster logging to
aggregate logs for a range of {product-title} services.

The Elasticsearch, Fluentd, and Kibana (EFK) stack runs on worker nodes. As an
{product-title} administrator, you can monitor resource consumption in the
console and via Prometheus and Grafana. Due to the high workload required for
logging, your environment might require additional worker nodes.

Logs in {product-title} are retained for two days before rotation. Logging
storage is capped at 600 GiB. This is independent of a cluster's allocated base
storage.
endif::[]
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
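For illustration, the storage and curation behavior described above is driven by the ClusterLogging Custom Resource. The following is a sketch only: the `nodeCount`, storage class, size, and schedule values are placeholders, not the {product-title} defaults.

[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "gp2"  # placeholder storage class
        size: "200G"             # placeholder per-node size
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"     # placeholder curation schedule
----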


@@ -42,7 +42,7 @@ endif::openshift-enterprise,openshift-origin[]
ifndef::cnv-logging[]
ifdef::openshift-dedicated[]
-As an {product-title} cluster administrator, you can deploy cluster logging to
+As an {product-title} administrator, you can deploy cluster logging to
aggregate logs for your applications.
The cluster logging components are based upon Elasticsearch, Fluentd, and Kibana
@@ -54,7 +54,7 @@ link:https://www.elastic.co/guide/en/kibana/current/introduction.html[Kibana] is
the centralized, web UI where users and administrators can create rich
visualizations and dashboards with the aggregated data.
-{product-title} cluster administrators can deploy Cluster Logging and
+{product-title} administrators can deploy Cluster Logging and
Elasticsearch operators via OperatorHub and configure logging in the
`openshift-logging` namespace. Configuring logging will deploy Elasticsearch,
Fluentd, and Kibana in the `openshift-logging` namespace. The operators are


@@ -11,13 +11,19 @@ You can remove cluster logging from your cluster.
* Cluster logging and Elasticsearch must be installed.
.Procedure
To remove cluster logging:
. Use the following command to remove everything generated during the deployment.
+
----
$ oc delete clusterlogging instance -n openshift-logging
----
. Use the following command to remove the Persistent Volume Claims that remain
after the Operator instances are deleted:
+
----
$ oc delete pvc --all -n openshift-logging
----


@@ -6,7 +6,7 @@
= Installing the Cluster Logging and Elasticsearch Operators
-You can use the {product-title} console to install cluster logging, by deploying,
+You can use the {product-title} console to install cluster logging by deploying instances of
the Cluster Logging and Elasticsearch Operators. The Cluster Logging Operator
creates and manages the components of the logging stack. The Elasticsearch Operator
creates and manages the Elasticsearch cluster used by cluster logging.
@@ -14,24 +14,31 @@ creates and manages the Elasticsearch cluster used by cluster logging.
[NOTE]
====
The {product-title} cluster logging solution requires that you install both the
-Cluster Logging Operator and Elasticsearch Operator.
+Cluster Logging Operator and Elasticsearch Operator. When you deploy an instance
+of the Cluster Logging Operator, it also deploys an instance of the Elasticsearch
+Operator.
====
.Prerequisites
-Your OpenShift Dedicated cluster includes 600 GiB of persistent storage that is
-exclusively available for deploying Elasticsearch for cluster logging.
+Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
+requires its own storage volume.
-Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and limits.
-The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
-{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
-memory setting though this is not recommended for production deployments.
+Elasticsearch is a memory-intensive application. Each Elasticsearch node needs
+8G of memory for both memory requests and limits. Each Elasticsearch node can
+operate with a lower memory setting, though this is not recommended for
+production deployments.
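The memory guidance above maps to the `resources` stanza of the ClusterLogging Custom Resource used later in this procedure. A minimal sketch, assuming the `logStore.elasticsearch` field path; adjust the values for your environment.

[source,yaml]
----
spec:
  logStore:
    elasticsearch:
      resources:       # applies to each Elasticsearch node
        limits:
          memory: "8Gi"
        requests:
          memory: "8Gi"
----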
.Procedure
-. Install the Elasticsearch Operator via the OperatorHub. See: xref:../operators/olm-adding-operators-to-cluster.adoc[Adding Operators to a cluster].
+. Install the Elasticsearch Operator from the OperatorHub:
-. Install the Cluster Logging Operator using the {product-title} web console for best results:
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
+.. Choose *Elasticsearch* from the list of available Operators, and click *Install*.
+.. On the *Create Operator Subscription* page, under *A specific namespace on the cluster* select *openshift-logging*.
+Then, click *Subscribe*.
+. Install the Cluster Logging Operator from the OperatorHub:
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
@@ -44,10 +51,8 @@ Then, click *Subscribe*.
.. Switch to the *Operators* → *Installed Operators* page.
-.. Ensure that *Cluster Logging* is listed in the *openshift-logging* project with a *Status* of *InstallSucceeded*.
-.. Ensure that *Elasticsearch Operator* is listed in the *openshift-operator-redhat* project with a *Status* of *InstallSucceeded*.
-The Elasticsearch Operator is copied to all other projects.
+.. Ensure that *Cluster Logging* and *Elasticsearch* Operators are listed in the
+*openshift-logging* project with a *Status* of *InstallSucceeded*.
+
[NOTE]
====
@@ -59,29 +64,19 @@ If either operator does not appear as installed, to troubleshoot further:
+
* Switch to the *Operators* → *Installed Operators* page and inspect
the *Status* column for any errors or failures.
-* Switch to the *Workloads* → *Pods* page and check the logs in any Pods in the
-`openshift-logging` and `openshift-operators-redhat` projects that are reporting issues.
+* Switch to the *Workloads* → *Pods* page and check the logs in each Pod in the
+`openshift-logging` project that is reporting issues.
-. Create a cluster logging instance:
+. Create and deploy a cluster logging instance:
-.. Switch to the *Administration* -> *Custom Resource Definitions* page.
+.. Switch to the *Operators* → *Installed Operators* page.
-.. On the *Custom Resource Definitions* page, click *ClusterLogging*.
+.. Click the installed *Cluster Logging* Operator.
-.. On the *Custom Resource Definition Overview* page, select *View Instances* from the *Actions* menu.
-.. On the *Cluster Loggings* page, click *Create Cluster Logging*.
-+
-You might have to refresh the page to load the data.
-.. In the YAML, replace the code with the following:
+
[NOTE]
====
This default cluster logging configuration should support a wide array of environments. Review the topics on tuning and
configuring the cluster logging components for information on modifications you can make to your cluster logging cluster.
====
+.. Under the *Overview* tab, click *Create Instance*. Paste the following YAML
+definition into the window that displays.
+
.Cluster Logging Custom Resource (CR)
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
@@ -124,12 +119,12 @@ spec:
node-role.kubernetes.io/worker: ""
----
-.. Click *Create*. This creates the Cluster Logging Custom Resource and Elasticsearch Custom Resource, which you
-can edit to make changes to your cluster logging cluster.
+.. Click *Create* to deploy the logging instance, which creates the Cluster
+Logging and Elasticsearch Custom Resources.
-. Verify the install:
+. Verify that the Pods for the Cluster Logging instance deployed:
-.. Switch to the *Workloads* -> *Pods* page.
+.. Switch to the *Workloads* → *Pods* page.
.. Select the *openshift-logging* project.
+
@@ -146,3 +141,6 @@ You should see several pods for cluster logging, Elasticsearch, Fluentd, and Kib
* fluentd-pb2f8
* fluentd-zqgqx
* kibana-7fb4fd4cc9-bvt4p
+. Access the Cluster Logging interface, *Kibana*, from the *Monitoring* →
+*Logging* page of the {product-title} web console.