mirror of https://github.com/openshift/openshift-docs.git
synced 2026-02-06 06:46:26 +01:00
OBSDOCS-120: Update formatting and install instructions
@@ -2543,8 +2543,6 @@ Topics:
    File: cluster-logging-log-store
  - Name: Configuring the log visualizer
    File: cluster-logging-visualizer
  - Name: Configuring Logging storage
    File: cluster-logging-storage-considerations
  - Name: Configuring CPU and memory limits for Logging components
    File: cluster-logging-memory
  - Name: Using tolerations to control Logging pod placement
@@ -740,7 +740,7 @@ Topics:
# cannot use oc adm cordon; cannot patch resource "machinesets"; cannot patch resource "nodes"
# - Name: Working with nodes
#   File: nodes-nodes-working
# cannot create resource "kubeletconfigs", "schedulers", "machineconfigs", "kubeletconfigs"
# - Name: Managing nodes
#   File: nodes-nodes-managing
# cannot create resource "kubeletconfigs"
@@ -773,7 +773,7 @@ Topics:
#   File: nodes-nodes-problem-detector
  - Name: Machine Config Daemon metrics
    File: nodes-nodes-machine-config-daemon-metrics
# cannot patch resource "nodes"
# - Name: Creating infrastructure nodes
#   File: nodes-nodes-creating-infrastructure-nodes
  - Name: Working with containers
@@ -871,8 +871,6 @@ Topics:
    File: cluster-logging-log-store
  - Name: Configuring the log visualizer
    File: cluster-logging-visualizer
  - Name: Configuring Logging storage
    File: cluster-logging-storage-considerations
  - Name: Configuring CPU and memory limits for Logging components
    File: cluster-logging-memory
  - Name: Using tolerations to control Logging pod placement
@@ -910,10 +910,10 @@ Topics:
# cannot use oc adm cordon; cannot patch resource "machinesets"; cannot patch resource "nodes"
# - Name: Working with nodes
#   File: nodes-nodes-working
# cannot create resource "kubeletconfigs", "schedulers", "machineconfigs", "kubeletconfigs"
# - Name: Managing nodes
#   File: nodes-nodes-managing
# cannot create resource "kubeletconfigs"
# - Name: Managing graceful node shutdown
#   File: nodes-nodes-graceful-shutdown
# cannot create resource "kubeletconfigs"
@@ -943,7 +943,7 @@ Topics:
#   File: nodes-nodes-problem-detector
  - Name: Machine Config Daemon metrics
    File: nodes-nodes-machine-config-daemon-metrics
# cannot patch resource "nodes"
# - Name: Creating infrastructure nodes
#   File: nodes-nodes-creating-infrastructure-nodes
  - Name: Working with containers
@@ -1044,8 +1044,6 @@ Topics:
    File: cluster-logging-log-store
  - Name: Configuring the log visualizer
    File: cluster-logging-visualizer
  - Name: Configuring Logging storage
    File: cluster-logging-storage-considerations
  - Name: Configuring CPU and memory limits for Logging components
    File: cluster-logging-memory
  - Name: Using tolerations to control Logging pod placement
@@ -1,45 +1,42 @@
:_mod-docs-content-type: ASSEMBLY
:context: cluster-logging-deploying
[id="cluster-logging-deploying"]
= Installing the {logging-title}
= Installing Logging
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]

toc::[]

You can install the {logging-title} by deploying the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. The {logging} Operator creates and manages the components of the logging stack.

The process for deploying the {logging} to {product-title}
ifdef::openshift-rosa[]
(ROSA)
endif::[]
involves:

* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[{logging-uc} storage considerations].

* Installing the logging subsystem for {product-title} using xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[the web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[the CLI].
You can install the {logging-title} by deploying the Red Hat OpenShift Logging Operator. The {logging} Operator creates and manages the components of the logging stack.

[IMPORTANT]
====
For new installations, Vector and LokiStack are recommended. Documentation for logging is in the process of being updated to reflect these underlying component changes.
====

include::snippets/logging-elastic-dep-snip.adoc[]
ifdef::openshift-origin[]
[id="prerequisites_cluster-logging-deploying"]
== Prerequisites
* Ensure that you have downloaded the {cluster-manager-url-pull} as shown in _Obtaining the installation program_ in the installation documentation for your platform.
+
If you have the pull secret, add the `redhat-operators` catalog to the OperatorHub custom resource (CR) as shown in _Configuring {product-title} to use Red Hat Operators_.
endif::[]

include::snippets/logging-fluentd-dep-snip.adoc[]

include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]

[id="cluster-logging-deploying-es-operator"]
== Installing the Elasticsearch Operator

include::snippets/logging-elastic-dep-snip.adoc[]
include::modules/logging-es-storage-considerations.adoc[leveloffset=+2]
include::modules/logging-install-es-operator.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

ifdef::openshift-enterprise,openshift-origin[]
* xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-operators-from-operatorhub_olm-adding-operators-to-a-cluster[Installing Operators from the OperatorHub]
* xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-log-store[Removing unused components if you do not use the default Elasticsearch log store]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/operators/admin/olm-adding-operators-to-cluster.html[Installing Operators from OperatorHub]
* link:https://docs.openshift.com/container-platform/latest/logging/config/cluster-logging-log-store.html#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-log-store[Removing unused components if you do not use the default Elasticsearch log store]
endif::[]

== Postinstallation tasks

@@ -73,9 +70,3 @@ ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/networking/openshift_sdn/about-openshift-sdn.html[About the OpenShift SDN default CNI network provider]
* link:https://docs.openshift.com/container-platform/latest/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html[About the OVN-Kubernetes default Container Network Interface (CNI) network provider]
endif::[]

// include::modules/cluster-logging-deploy-memory.adoc[leveloffset=+1]

// include::modules/cluster-logging-deploy-certificates.adoc[leveloffset=+1]

// include::modules/cluster-logging-deploy-label.adoc[leveloffset=+1]

@@ -15,13 +15,6 @@ You can make modifications to your log store, including:
* shard replication across data nodes in the cluster, from full replication to no replication
* external access to Elasticsearch data

//Following paragraph also in modules/cluster-logging-deploy-storage-considerations.adoc

Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16G of memory for both memory requests and limits, unless you specify otherwise in the `ClusterLogging` custom resource. The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended or higher memory, up to a maximum of 64G for each Elasticsearch node.

Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.

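As an illustration of overriding the default described above, the per-node memory request and limit live under `spec.logStore.elasticsearch.resources` in the `ClusterLogging` custom resource. This is a sketch only; the 32Gi value is a hypothetical choice between the 16G default and the 64G maximum, not a recommendation:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      resources:       # applied per Elasticsearch node; keep requests and limits equal
        requests:
          memory: 32Gi # example value, not a recommendation
        limits:
          memory: 32Gi
```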
include::modules/cluster-logging-elasticsearch-audit.adoc[leveloffset=+1]

[role="_additional-resources"]

@@ -1,25 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: cluster-logging-storage
[id="cluster-logging-storage"]
= Configuring {logging} storage
include::_attributes/common-attributes.adoc[]

toc::[]

Elasticsearch is a memory-intensive application. The default {logging} installation deploys 16G of memory for both memory requests and memory limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/cluster-logging-deploy-storage-considerations.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="cluster-logging-storage-considerations-addtl-resources"]
== Additional resources

* xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-storage_cluster-logging-log-store[Configuring persistent storage for the log store]

@@ -6,9 +6,6 @@ include::_attributes/common-attributes.adoc[]

toc::[]

//Installing the Red Hat OpenShift Logging Operator via webconsole
include::modules/logging-deploy-RHOL-console.adoc[leveloffset=+1]

//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]

@@ -6,9 +6,6 @@ include::_attributes/common-attributes.adoc[]

toc::[]

//Installing the Red Hat OpenShift Logging Operator via webconsole
include::modules/logging-deploy-RHOL-console.adoc[leveloffset=+1]

//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]

@@ -6,9 +6,6 @@ include::_attributes/common-attributes.adoc[]

toc::[]

//Installing the Red Hat OpenShift Logging Operator via webconsole
include::modules/logging-deploy-RHOL-console.adoc[leveloffset=+1]

//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]

@@ -1,29 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-deploy.adoc

[id="cluster-logging-deploy-certificates_{context}"]
= Deploying custom certificates

You can specify custom certificates using the following variables instead of relying on those generated during the deployment process. These certificates are used to encrypt and secure communication between a user's browser and Kibana. The security-related files are generated if they are not supplied.

[cols="3,7",options="header"]
|===
|File Name
|Description

|`openshift_logging_kibana_cert`
|A browser-facing certificate for the Kibana server.

|`openshift_logging_kibana_key`
|A key to be used with the browser-facing Kibana certificate.

|`openshift_logging_kibana_ca`
|The absolute path on the control node to the CA file to use for the browser-facing Kibana certificates.

|===
@@ -6,110 +6,41 @@
[id="cluster-logging-deploy-console_{context}"]
= Installing the {logging-title} using the web console

ifndef::openshift-rosa,openshift-dedicated[]
You can use the {product-title} web console to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
You can install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators by using the {product-title} {cluster-manager-url}.
endif::[]

[NOTE]
====
If you do not want to use the default Elasticsearch log store, you can remove the internal Elasticsearch `logStore` and Kibana `visualization` components from the `ClusterLogging` custom resource (CR). Removing these components is optional but saves resources. For more information, see the additional resources of this section.
If you do not want to use the default Elasticsearch log store, you can remove the internal Elasticsearch `logStore` and Kibana `visualization` components from the `ClusterLogging` custom resource (CR). Removing these components is optional but saves resources.
====

.Prerequisites

* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
+
[NOTE]
====
If you use a local volume for persistent storage, do not use a raw block volume, which is described with `volumeMode: block` in the `LocalVolume` object. Elasticsearch cannot use raw block volumes.
====
+
Elasticsearch is a memory-intensive application. By default, {product-title} installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three {product-title} nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes.

ifdef::openshift-origin[]
* Ensure that you have downloaded the {cluster-manager-url-pull} as shown in _Obtaining the installation program_ in the installation documentation for your platform.
+
If you have the pull secret, add the `redhat-operators` catalog to the OperatorHub custom resource (CR) as shown in _Configuring {product-title} to use Red Hat Operators_.
endif::[]
include::snippets/logging-compatibility-snip.adoc[]

.Procedure

ifndef::openshift-rosa,openshift-dedicated[]
To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator by using the {product-title} web console:
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator by using the {product-title} {cluster-manager-url}:
endif::[]

. Install the OpenShift Elasticsearch Operator:

ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *OperatorHub*.
endif::[]

.. Choose *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*.

.. Ensure that *All namespaces on the cluster* is selected under *Installation Mode*.

.. Ensure that *openshift-operators-redhat* is selected under *Installed Namespace*.
+
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators` namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as
ifdef::openshift-rosa[]
a ROSA
endif::[]
ifdef::openshift-dedicated[]
an {product-title}
endif::[]
metric, which would cause conflicts.

.. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.

.. Select *stable-5.x* as the *Update Channel*.

.. Select an *Approval Strategy*.
+
* The *Automatic* strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* strategy requires a user with appropriate credentials to approve the Operator update.

.. Click *Install*.

.. Verify that the OpenShift Elasticsearch Operator is installed by switching to the *Operators* → *Installed Operators* page.

.. Ensure that *OpenShift Elasticsearch Operator* is listed in all projects with a *Status* of *Succeeded*.

. Install the Red Hat OpenShift Logging Operator:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.

.. Ensure that *A specific namespace on the cluster* is selected under *Installation Mode*.

.. Ensure that *Operator recommended namespace* is *openshift-logging* under *Installed Namespace*.

.. Select *Enable operator recommended cluster monitoring on this namespace*.
. In the {product-title} web console, click *Operators* -> *OperatorHub*.
. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.
. Ensure that *A specific namespace on the cluster* is selected under *Installation mode*.
. Ensure that *Operator recommended namespace* is *openshift-logging* under *Installed Namespace*.
. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-logging` namespace.

.. Select *stable-5.x* as the *Update Channel*.

.. Select an *Approval Strategy*.
. Select *stable-5.x* as the *Update channel*.
+
* The *Automatic* strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* strategy requires a user with appropriate credentials to approve the Operator update.
--
include::snippets/logging-stable-updates-snip.adoc[]
--

.. Click *Install*.
. Select an *Update approval*.
** The *Automatic* strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
** The *Manual* strategy requires a user with appropriate credentials to approve the Operator update.

. Select *Enable* or *Disable* for the Console plugin.
. Click *Install*.

.Verification

. Verify that the *Red Hat OpenShift Logging Operator* is installed by switching to the *Operators* → *Installed Operators* page.

.. Ensure that *Red Hat OpenShift Logging* is listed in the *openshift-logging* project with a *Status* of *Succeeded*.

.. Verify that the Red Hat OpenShift Logging Operator is installed by switching to the *Operators* → *Installed Operators* page.

@@ -120,6 +51,26 @@ If the Operator does not appear as installed, to troubleshoot further:
* Switch to the *Operators* → *Installed Operators* page and inspect the *Status* column for any errors or failures.
* Switch to the *Workloads* → *Pods* page and check the logs in any pods in the `openshift-logging` project that are reporting issues.

. Create a *ClusterLogging* instance.
+
[NOTE]
====
The form view of the web console does not include all available options. The *YAML view* is recommended for completing your setup.
====
+
.. In the *collection* section, select a Collector Implementation.
+
--
include::snippets/logging-fluentd-dep-snip.adoc[]
--
.. In the *logStore* section, select a type.
+
--
include::snippets/logging-elastic-dep-snip.adoc[]
--

.. Click *Create*.
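In the YAML view, a minimal `ClusterLogging` instance might look like the following sketch. The storage class and size are placeholders that depend on your cluster, not prescribed values:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2   # placeholder storage class
        size: 200G              # placeholder volume size
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```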

. Create an OpenShift Logging instance:

.. Switch to the *Administration* -> *Custom Resource Definitions* page.
@@ -244,4 +195,4 @@ You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana
* kibana-7fb4fd4cc9-bvt4p

.Troubleshooting
* If Alertmanager logs alerts such as `Prometheus could not scrape fluentd for more than 10m`, make sure that `openshift.io/cluster-monitoring` is set to `"true"` for the OpenShift Elasticsearch Operator and OpenShift Logging Operator. See the Red Hat KnowledgeBase for more information: link:https://access.redhat.com/solutions/5692801[Prometheus could not scrape fluentd for more than 10m alert in Alertmanager]

@@ -1,30 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-deploy.adoc

[id="cluster-logging-deploy-label_{context}"]
= Labeling nodes

At 100 nodes or more, pre-pull the logging images from the registry. After deploying the logging pods, such as Elasticsearch and Kibana, node labeling should be done in steps of 20 nodes at a time. For example:

Using a simple loop:

[source,terminal]
----
$ while read node; do oc label nodes $node elasticsearch-fluentd=true; done < 20_fluentd.lst
----

The following also works:

[source,terminal]
----
$ oc label nodes 10.10.0.{100..119} elasticsearch-fluentd=true
----

Labeling nodes in groups paces the daemon sets used by the {logging}, helping to avoid contention on shared resources such as the image registry.

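The 20-node batch files used by the loop above can be prepared mechanically. This is a sketch, assuming you have a file with one node name per line; the file and node names here are made up:

```shell
# Generate a throwaway node list, then split it into 20-line batch files
# (batch_aa, batch_ab, ...), each suitable as input to the labeling loop.
for i in $(seq 1 45); do echo "node-$i"; done > all_nodes.lst
split -l 20 all_nodes.lst batch_
wc -l < batch_aa   # first batch holds 20 nodes
wc -l < batch_ac   # last batch holds the remainder
```

Labeling one batch file at a time keeps the daemon set rollout paced, as described above.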

[NOTE]
====
Check for the occurrence of any "CrashLoopBackOff | ImagePullFailed | Error" issues.
`oc logs <pod>`, `oc describe pod <pod>` and `oc get event` are helpful diagnostic commands.
====
@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-deploy.adoc

[id="cluster-logging-deploy-memory_{context}"]
= Configuring memory for Elasticsearch instances

By default, the amount of RAM allocated to each ES instance is 16 GB. You can change this value as needed.

Keep in mind that *half* of this value is passed to each Elasticsearch pod's Java process as the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#_give_half_your_memory_to_lucene[heap size].
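As simple arithmetic, the split works out as follows; this is an illustration of the half-heap rule above, not an official tuning recommendation:

```shell
# With the default 16 GB per Elasticsearch instance, roughly half goes to the
# JVM heap and the remainder is left to Lucene and the file-system cache.
instance_gb=16
heap_gb=$((instance_gb / 2))
echo "heap=${heap_gb}GB lucene=$((instance_gb - heap_gb))GB"
```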
.Procedure

@@ -1,75 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-deploy.adoc

[id="cluster-logging-deploy-storage-considerations_{context}"]
= Storage considerations for the {logging-title}

////
An Elasticsearch index is a collection of primary shards and their corresponding replica shards. This is how Elasticsearch implements high availability internally, so there is little requirement to use hardware based mirroring RAID variants. RAID 0 can still be used to increase overall disk performance.
////

A persistent volume is required for each Elasticsearch deployment configuration. On {product-title} this is achieved using persistent volume claims.

[NOTE]
====
If you use a local volume for persistent storage, do not use a raw block volume, which is described with `volumeMode: block` in the `LocalVolume` object. Elasticsearch cannot use raw block volumes.
====

The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name.

////
Below are capacity planning guidelines for {product-title} aggregate logging.

*Example scenario*

Assumptions:

. Which application: Apache
. Bytes per line: 256
. Lines per second load on application: 1
. Raw text data -> JSON

Baseline (256 characters per minute -> 15KB/min)

[cols="3,4",options="header"]
|===
|Logging pods
|Storage Throughput

|3 es
1 kibana
1 fluentd
|6 pods total: 90000 x 86400 = 7,7 GB/day

|3 es
1 kibana
11 fluentd
|16 pods total: 225000 x 86400 = 24,0 GB/day

|3 es
1 kibana
20 fluentd
|25 pods total: 225000 x 86400 = 32,4 GB/day
|===

Calculating the total logging throughput and disk space required for your {product-title} cluster requires knowledge of your applications. For example, if one of your applications on average logs 10 lines-per-second, each 256 bytes-per-line, calculate per-application throughput and disk space as follows:

----
(bytes-per-line) * (lines-per-second) = 2560 bytes per app per second
(2560) * (number-of-pods-per-node,100) = 256,000 bytes per second per node
256k * (number-of-nodes) = total logging throughput per cluster per second
----
////

Fluentd ships any logs from *systemd journal* and **/var/log/containers/*.log** to Elasticsearch.

Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity.

By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED.

[NOTE]
====
These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts.
====
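To make the watermarks concrete, here is a back-of-the-envelope calculation for a hypothetical 500 GiB Elasticsearch data volume; the volume size is an assumption for illustration:

```shell
# Default disk watermarks as described above: at 85% used, new shard
# allocation to the node stops; at 90% used, shards relocate away.
disk_gib=500
low_gib=$((disk_gib * 85 / 100))    # used space when the low watermark trips
high_gib=$((disk_gib * 90 / 100))   # used space when the high watermark trips
echo "low=${low_gib}GiB high=${high_gib}GiB"
```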
@@ -13,12 +13,12 @@ endif::[]

.Prerequisites

* {logging-title-uc} Operator 5.5 and later
* You have installed the Cluster Logging Operator.
* Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)

.Procedure

. Install the `Loki Operator` Operator:
. Install the Loki Operator:

ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
@@ -29,7 +29,7 @@ endif::[]

.. Choose *Loki Operator* from the list of available Operators, and click *Install*.

.. Under *Installation Mode*, select *All namespaces on the cluster*.
.. Under *Installation mode*, select *All namespaces on the cluster*.

.. Under *Installed Namespace*, select *openshift-operators-redhat*.
+
@@ -45,7 +45,7 @@ endif::[]
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.

.. Select an *Approval Strategy*.
.. Select an *Update approval*.
+
* The *Automatic* strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
@@ -131,14 +131,14 @@ spec:
$ oc apply -f cr-lokistack.yaml
----

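The `cr-lokistack.yaml` file applied above is not shown in this hunk. A minimal sketch of such a `LokiStack` resource might look like the following; the name, object-storage secret, and storage class are assumptions that depend on your environment:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki          # assumed name
  namespace: openshift-logging
spec:
  size: 1x.small              # sizing tier; choose one appropriate for your cluster
  storage:
    secret:
      name: logging-loki-s3   # assumed secret with object-storage credentials
      type: s3
  storageClassName: gp3-csi   # assumed storage class
  tenants:
    mode: openshift-logging
```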
. Enable the RedHat OpenShift Logging Console Plugin:
. Enable the Red Hat OpenShift Logging Console Plugin:
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *Installed Operators*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *Installed Operators*.
endif::[]
.. Select the *RedHat OpenShift Logging* Operator.
.. Select the *Red Hat OpenShift Logging* Operator.
.. Under *Console plugin*, click *Disabled*.
.. Select *Enable* and then *Save*. This change restarts the `openshift-console` pods.
.. After the pods restart, you receive a notification that a web console update is available, prompting you to refresh.

@@ -1,72 +0,0 @@
|
||||
// Module included in the following assemblies:
//
// logging/v5_5/logging-5-5-administration.adoc
// logging/v5_6/logging-5-6-administration.adoc
// logging/v5_7/logging-5-7-administration.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-deploy-RHOL-console_{context}"]
= Deploying Red Hat OpenShift Logging Operator using the web console

You can use the {product-title} web console to deploy the Red Hat OpenShift Logging Operator.

.Prerequisites

include::snippets/logging-compatibility-snip.adoc[]

.Procedure

To deploy the Red Hat OpenShift Logging Operator using the {product-title} web console:

. Install the Red Hat OpenShift Logging Operator:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Type *Logging* in the *Filter by keyword* field.

.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.

.. Select *stable* or *stable-5.y* as the *Update channel*.
+
--
include::snippets/logging-stable-updates-snip.adoc[]
--
.. Ensure that *A specific namespace on the cluster* is selected under *Installation mode*.

.. Ensure that *Operator recommended namespace* is *openshift-logging* under *Installed Namespace*.

.. Select *Enable Operator recommended cluster monitoring on this namespace*.

.. Select an option for *Update approval*.
+
* The *Automatic* option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* option requires a user with appropriate credentials to approve the Operator update.

.. Select *Enable* or *Disable* for the Console plugin.

.. Click *Install*.

. Verify that the *Red Hat OpenShift Logging Operator* is installed by switching to the *Operators* -> *Installed Operators* page.

.. Ensure that *Red Hat OpenShift Logging* is listed in the *openshift-logging* project with a *Status* of *Succeeded*.

. Create a *ClusterLogging* instance.
+
[NOTE]
====
The form view of the web console does not include all available options. The *YAML view* is recommended for completing your setup.
====
+
.. In the *collection* section, select a collector implementation.
+
--
include::snippets/logging-fluentd-dep-snip.adoc[]
--
.. In the *logStore* section, select a type.
+
--
include::snippets/logging-elastic-dep-snip.adoc[]
--

.. Click *Create*.
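The *YAML view* expects a `ClusterLogging` custom resource. A minimal sketch follows; the node count, storage class name, and sizes are illustrative assumptions, not recommended values for your cluster:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: gp2 # illustrative storage class
        size: 200G
      resources:
        requests:
          memory: 16Gi
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    type: fluentd
----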

To install the Loki Operator using the {product-title} web console:

.. Choose *Loki Operator* from the list of available Operators, and click *Install*.

. Select *stable* or *stable-5.y* as the *Update channel*.
+
--
include::snippets/logging-stable-updates-snip.adoc[]
--
. Ensure that *All namespaces on the cluster* is selected under *Installation mode*.

. Ensure that *openshift-operators-redhat* is selected under *Installed Namespace*.
// modules/logging-es-storage-considerations.adoc
// Module included in the following assemblies:
//
// * logging/cluster-logging-deploying.adoc

:_mod-docs-content-type: CONCEPT
[id="logging-es-storage-considerations_{context}"]
= Storage considerations for Elasticsearch

A persistent volume is required for each Elasticsearch deployment configuration. On {product-title} this is achieved using persistent volume claims (PVCs).

[NOTE]
====
If you use a local volume for persistent storage, do not use a raw block volume, which is described with `volumeMode: block` in the `LocalVolume` object. Elasticsearch cannot use raw block volumes.
====

The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name.

Fluentd ships any logs from the *systemd journal* and **/var/log/containers/*.log** to Elasticsearch.

Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity.
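The sizing guidance above can be sketched as a quick calculation. All numbers here are illustrative assumptions, not values from this documentation:

```python
# Rough Elasticsearch storage sizing sketch: allocate roughly double the
# expected log data as free capacity, per the merge-headroom guidance above.
daily_log_gb = 50        # assumed application log volume per day
retention_days = 7       # assumed index retention period
raw_data_gb = daily_log_gb * retention_days
free_capacity_gb = 2 * raw_data_gb   # "approximately double" for merge headroom
print(free_capacity_gb)  # 700
```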

By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes, if possible. However, if no node has free capacity below 85%, Elasticsearch effectively rejects creating new indices and the cluster status becomes red.

[NOTE]
====
These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts.
====
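For reference, the defaults described above correspond to the upstream Elasticsearch disk allocation settings shown in the following sketch. These are Elasticsearch-level settings, not values you set in the `ClusterLogging` custom resource:

[source,yaml]
----
# Upstream Elasticsearch disk watermark defaults (illustrative)
cluster.routing.allocation.disk.watermark.low: "85%"
cluster.routing.allocation.disk.watermark.high: "90%"
----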
// modules/logging-install-es-operator.adoc
// Module included in the following assemblies:
//
// * logging/cluster-logging-deploying.adoc

:_mod-docs-content-type: PROCEDURE
[id="logging-install-es-operator_{context}"]
= Installing the OpenShift Elasticsearch Operator by using the web console

The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging.

.Prerequisites

* Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the `ClusterLogging` custom resource.
+
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node.
+
Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments.

* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
+
[NOTE]
====
If you use a local volume for persistent storage, do not use a raw block volume, which is described with `volumeMode: block` in the `LocalVolume` object. Elasticsearch cannot use raw block volumes.
====

.Procedure

. In the {product-title} web console, click *Operators* -> *OperatorHub*.
. Click *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*.
. Ensure that *All namespaces on the cluster* is selected under *Installation mode*.
. Ensure that *openshift-operators-redhat* is selected under *Installed Namespace*.
+
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators` namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as a {product-title} metric, which would cause conflicts.

. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the `Namespace` object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
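With that option selected, the resulting `Namespace` object might look like the following sketch (shown for illustration only):

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  labels:
    openshift.io/cluster-monitoring: "true" # enables cluster monitoring scraping
----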

. Select *stable-5.x* as the *Update channel*.
. Select an *Update approval* strategy:
+
* The *Automatic* strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* strategy requires a user with appropriate credentials to approve the Operator update.

. Click *Install*.

.Verification

. Verify that the OpenShift Elasticsearch Operator is installed by switching to the *Operators* -> *Installed Operators* page.
. Ensure that *OpenShift Elasticsearch Operator* is listed in all projects with a *Status* of *Succeeded*.