mirror of https://github.com/openshift/openshift-docs.git
Commit 08bcd45dc8 (parent 76426155b9), committed by openshift-cherrypick-robot

RHDEVDOCS-4448 - w/SME & Peer review feedback.
@@ -2366,6 +2366,38 @@ Distros: openshift-enterprise,openshift-origin
Topics:
- Name: Release notes
  File: cluster-logging-release-notes
- Name: Logging 5.6 (EUS)
  Dir: v5_6
  Distros: openshift-enterprise,openshift-origin
  Topics:
  - Name: Logging 5.6 Release Notes
    File: logging-5-6-release-notes
  - Name: Getting started with logging
    File: logging-5-6-getting-started
  - Name: Understanding Logging
    File: logging-5-6-architecture
  - Name: Configuring Logging
    File: logging-5-6-configuration
  - Name: Administering Logging
    File: logging-5-6-administration
  - Name: Logging Reference
    File: logging-5-6-reference
- Name: Logging 5.5
  Dir: v5_5
  Distros: openshift-enterprise,openshift-origin
  Topics:
  - Name: Logging 5.5 Release Notes
    File: logging-5-5-release-notes
  - Name: Getting started with logging
    File: logging-5-5-getting-started
  - Name: Understanding Logging
    File: logging-5-5-architecture
  - Name: Administering Logging
    File: logging-5-5-administration
  - Name: Configuring Logging
    File: logging-5-5-configuration
  - Name: Logging Reference
    File: logging-5-5-reference
- Name: About Logging
  File: cluster-logging
- Name: Installing Logging
@@ -2436,34 +2468,6 @@ Topics:
- Name: Exported fields
  File: cluster-logging-exported-fields
  Distros: openshift-enterprise,openshift-origin
#- Name: Logging 5.5
#  Dir: v5_5
#  Distros: openshift-enterprise,openshift-origin
#  Topics:
#  - Name: Logging 5.5 Release Notes
#    File: logging-5-5-release-notes
#  - Name: Understanding Logging
#    File: logging-5-5-architecture
#  - Name: Administering Logging
#    File: logging-5-5-administration
#  - Name: Configuring Logging
#    File: logging-5-5-configuration
#  - Name: Logging Reference
#    File: logging-5-5-reference
#- Name: Logging 5.6
#  Dir: v5_6
#  Distros: openshift-enterprise,openshift-origin
#  Topics:
#  - Name: Logging 5.6 Release Notes
#    File: logging-5-6-release-notes
#  - Name: Understanding Logging
#    File: logging-5-6-architecture
#  - Name: Configuring Logging
#    File: logging-5-6-configuration
#  - Name: Administering Logging
#    File: logging-5-6-administration
#  - Name: Logging Reference
#    File: logging-5-6-reference
---
Name: Monitoring
Dir: monitoring
@@ -5,3 +5,18 @@ include::_attributes/common-attributes.adoc[]
:context: logging-5.5-administration

toc::[]

//Installing the Red Hat OpenShift Logging Operator via web console
include::modules/logging-deploy-RHOL-console.adoc[leveloffset=+1]

//Installing the Loki Operator via web console
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]

//Generic installing Operators from OperatorHub using the CLI
include::modules/olm-installing-from-operatorhub-using-cli.adoc[leveloffset=+1]

//Generic deleting Operators from a cluster using the web console
include::modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc[leveloffset=+1]

//Generic deleting Operators from a cluster using the CLI
include::modules/olm-deleting-operators-from-a-cluster-using-cli.adoc[leveloffset=+1]
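The CLI-based install documented by the olm-installing-from-operatorhub-using-cli module boils down to creating a `Subscription` object. As a hedged sketch, with channel, namespace, and package name following the defaults described elsewhere in this guide (they may differ in your cluster):

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging          # Operator recommended namespace
spec:
  channel: stable                       # or stable-5.y
  name: cluster-logging                 # package name of the Operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----

Applying this file with `oc apply -f subscription.yaml` has the same effect as the web console install flow referenced above.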
@@ -5,3 +5,6 @@ include::_attributes/common-attributes.adoc[]
:context: logging-5.5-architecture

toc::[]

:context: logging-5-5-architecture
include::modules/logging-architecture-overview.adoc[lines=9..31]
@@ -5,3 +5,7 @@ include::_attributes/common-attributes.adoc[]
:context: logging-5.5-configuration

toc::[]

include::snippets/logging-crs-by-operator-snip.adoc[]

include::snippets/logging-supported-config-snip.adoc[]
logging/v5_5/logging-5-5-getting-started.adoc (new file, 40 lines)
@@ -0,0 +1,40 @@
:_content-type: ASSEMBLY

[id="logging-5-5-getting-started"]
= Getting started with logging 5.5

This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, *Vector* and *LokiStack* are recommended.

--
include::snippets/logging-under-construction-snip.adoc[]
--

--
include::snippets/logging-compatibility-snip.adoc[]
--

.Prerequisites
* LogStore preference: *Elasticsearch* or *LokiStack*
* Collector implementation preference: *Fluentd* or *Vector*. See the xref:../logging-5-5-reference.adoc[Logging reference] to review the differences between the Fluentd and Vector implementations.
* Credentials for your log forwarding outputs

.Procedure
--
include::snippets/logging-elastic-dep-snip.adoc[]
--
. Install the Operator for the logstore you'd like to use.
** For *Elasticsearch*, install the *OpenShift Elasticsearch Operator*.
** For *LokiStack*, install the *Loki Operator*.
*** Create a `LokiStack` custom resource (CR) instance.

. Install the *Red Hat OpenShift Logging Operator*.

. Create a `ClusterLogging` custom resource (CR) instance.
.. Select your collector implementation.
+
--
include::snippets/logging-fluentd-dep-snip.adoc[]
--
. Create a `ClusterLogForwarder` custom resource (CR) instance. See the xref:../logging-5-5-reference.adoc[Logging reference] to review the available output options for the collector implementation you've selected.

. Create a secret for the selected output pipeline.
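Taken together, the steps above produce a `ClusterLogging` CR along the lines of this minimal sketch, shown with the *Vector* and *LokiStack* combination recommended for new installations; swap the `type` values if you selected *Fluentd* or *Elasticsearch*:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack          # or "elasticsearch"
    lokistack:
      name: logging-loki     # the LokiStack CR created in step 1
  collection:
    type: vector             # or "fluentd"
----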
@@ -5,3 +5,18 @@ include::_attributes/common-attributes.adoc[]
:context: logging-5.6-administration

toc::[]

//Installing the Red Hat OpenShift Logging Operator via web console
include::modules/logging-deploy-RHOL-console.adoc[leveloffset=+1]

//Installing the Loki Operator via web console
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]

//Generic installing Operators from OperatorHub using the CLI
include::modules/olm-installing-from-operatorhub-using-cli.adoc[leveloffset=+1]

//Generic deleting Operators from a cluster using the web console
include::modules/olm-deleting-operators-from-a-cluster-using-web-console.adoc[leveloffset=+1]

//Generic deleting Operators from a cluster using the CLI
include::modules/olm-deleting-operators-from-a-cluster-using-cli.adoc[leveloffset=+1]
@@ -5,3 +5,6 @@ include::_attributes/common-attributes.adoc[]
:context: logging-5.6-architecture

toc::[]

:context: logging-5-6-architecture
include::modules/logging-architecture-overview.adoc[lines=9..31]
@@ -5,3 +5,7 @@ include::_attributes/common-attributes.adoc[]
:context: logging-5.6-configuration

toc::[]

include::snippets/logging-crs-by-operator-snip.adoc[]

include::snippets/logging-supported-config-snip.adoc[]
logging/v5_6/logging-5-6-getting-started.adoc (new file, 40 lines)
@@ -0,0 +1,40 @@
:_content-type: ASSEMBLY

[id="logging-getting-started-5-6"]
= Getting started with logging 5.6

This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, *Vector* and *LokiStack* are recommended.

--
include::snippets/logging-under-construction-snip.adoc[]
--

--
include::snippets/logging-compatibility-snip.adoc[]
--

.Prerequisites
* LogStore preference: *Elasticsearch* or *LokiStack*
* Collector implementation preference: *Fluentd* or *Vector*. See the xref:../logging-5-6-reference.adoc[Logging reference] to review the differences between the Fluentd and Vector implementations.
* Credentials for your log forwarding outputs

.Procedure
--
include::snippets/logging-elastic-dep-snip.adoc[]
--
. Install the Operator for the logstore you'd like to use.
** For *Elasticsearch*, install the *OpenShift Elasticsearch Operator*.
** For *LokiStack*, install the *Loki Operator*.
*** Create a `LokiStack` custom resource (CR) instance.

. Install the *Red Hat OpenShift Logging Operator*.

. Create a `ClusterLogging` custom resource (CR) instance.
.. Select your collector implementation.
+
--
include::snippets/logging-fluentd-dep-snip.adoc[]
--
. Create a `ClusterLogForwarder` custom resource (CR) instance. See the xref:../logging-5-6-reference.adoc[Logging reference] to review the available output options for the collector implementation you've selected.

. Create a secret for the selected output pipeline.
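As a companion sketch for the `ClusterLogForwarder` step above, a minimal CR with one external output and one pipeline could look like the following; the output name, type, URL, and secret name are illustrative placeholders, and the secret itself is created in the final step:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: my-external-store            # illustrative output name
    type: elasticsearch                # one of the supported output types
    url: https://example.com:9200      # placeholder endpoint
    secret:
      name: my-output-secret           # credentials for this output
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application
    outputRefs:
    - my-external-store
----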
@@ -7,3 +7,13 @@ include::_attributes/common-attributes.adoc[]
toc::[]

include::snippets/logging-compatibility-snip.adoc[]

include::snippets/logging-stable-updates-snip.adoc[]

include::modules/logging-rn-5.6.3.adoc[leveloffset=+1]

include::modules/logging-rn-5.6.2.adoc[leveloffset=+1]

include::modules/logging-rn-5.6.1.adoc[leveloffset=+1]

include::modules/logging-rn-5.6.0.adoc[leveloffset=+1]
modules/logging-architecture-overview.adoc (new file, 24 lines)
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
//

:_content-type: CONCEPT
[id="logging-architecture-overview_{context}"]
= Logging architecture

The {logging} consists of these logical components:

* `Collector` - Reads container log data from each node and forwards log data to configured outputs.

* `Store` - Stores log data for analysis; the default output for the forwarder.

* `Visualization` - Graphical interface for searching, querying, and viewing stored logs.

These components are managed by Operators and custom resource (CR) YAML files.

include::snippets/logging-log-types-snip.adoc[]

The logging collector is a daemonset that deploys pods to each {product-title} node. System and infrastructure logs are generated as journald log messages by the operating system, the container runtime, and {product-title}.

Container logs are generated by containers running in pods on the cluster. Each container generates a separate log stream. The collector collects the logs from these sources and forwards them internally or externally as configured in the `ClusterLogForwarder` custom resource.
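A quick way to see these components on a running cluster, assuming the default `openshift-logging` namespace, is with standard `oc` queries:

[source,terminal]
----
oc get daemonset -n openshift-logging        # the collector daemonset
oc get pods -n openshift-logging -o wide     # one collector pod per node
----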
modules/logging-deploy-RHOL-console.adoc (new file, 71 lines)
@@ -0,0 +1,71 @@
// Module included in the following assemblies:
//
// *

:_content-type: PROCEDURE
[id="logging-deploy-RHOL-console_{context}"]
= Deploying Red Hat OpenShift Logging Operator using the web console

You can use the {product-title} web console to deploy the Red Hat OpenShift Logging Operator.

.Prerequisites

include::snippets/logging-compatibility-snip.adoc[]

.Procedure
To deploy the Red Hat OpenShift Logging Operator using the {product-title} web console:

. Install the Red Hat OpenShift Logging Operator:

.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Type *Logging* in the *Filter by keyword* field.

.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.

.. Select *stable* or *stable-5.y* as the *Update Channel*.
+
--
include::snippets/logging-stable-updates-snip.adoc[]
--
.. Ensure that *A specific namespace on the cluster* is selected under *Installation Mode*.

.. Ensure that *Operator recommended namespace* is *openshift-logging* under *Installed Namespace*.

.. Select *Enable Operator recommended cluster monitoring on this Namespace*.

.. Select an option for *Update approval*.
+
* The *Automatic* option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* option requires a user with appropriate credentials to approve the Operator update.

.. Select *Enable* or *Disable* for the Console plugin.

.. Click *Install*.

. Verify that the *Red Hat OpenShift Logging Operator* is installed by switching to the *Operators* -> *Installed Operators* page.

.. Ensure that *Red Hat OpenShift Logging* is listed in the *openshift-logging* project with a *Status* of *Succeeded*.

. Create a *ClusterLogging* instance.
+
[NOTE]
====
The form view of the web console does not include all available options. The *YAML view* is recommended for completing your setup.
====
+
.. In the *collection* section, select a collector implementation.
+
--
include::snippets/logging-fluentd-dep-snip.adoc[]
--
.. In the *logStore* section, select a type.
+
--
include::snippets/logging-elastic-dep-snip.adoc[]
--

.. Click *Create*.
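Because the form view omits options, it helps to know roughly what the YAML view expects. The following is a sketch of a `ClusterLogging` instance using the Elasticsearch log store; the node count, storage size, and redundancy values are illustrative assumptions, not recommendations:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                           # illustrative
      storage:
        storageClassName: <storage_class_name>
        size: 200G                           # illustrative
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    type: fluentd                            # or "vector"
----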
modules/logging-deploy-loki-console.adoc (new file, 127 lines)
@@ -0,0 +1,127 @@
// Module included in the following assemblies:
//
// *

:_content-type: PROCEDURE
[id="logging-deploy-loki-console_{context}"]
= Deploying the Loki Operator using the web console

You can use the {product-title} web console to install the Loki Operator.

.Prerequisites

* A supported log store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)

.Procedure
To install the Loki Operator using the {product-title} web console:

. In the {product-title} web console, click *Operators* -> *OperatorHub*.

. Type *Loki* in the *Filter by keyword* field.

. Choose *Loki Operator* from the list of available Operators, and click *Install*.

. Select *stable* or *stable-5.y* as the *Update Channel*.
+
--
include::snippets/logging-stable-updates-snip.adoc[]
--
. Ensure that *All namespaces on the cluster* is selected under *Installation Mode*.

. Ensure that *openshift-operators-redhat* is selected under *Installed Namespace*.

. Select *Enable Operator recommended cluster monitoring on this Namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.

. Select an option for *Update approval*.
+
* The *Automatic* option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* option requires a user with appropriate credentials to approve the Operator update.

. Click *Install*.

. Verify that the *Loki Operator* is installed by switching to the *Operators* -> *Installed Operators* page.

.. Ensure that *Loki Operator* is listed with a *Status* of *Succeeded* in all the projects.

. Create a `Secret` YAML file that uses the `access_key_id` and `access_key_secret` fields to specify your credentials and the `bucketnames`, `endpoint`, and `region` fields to define the object storage location. AWS is used in the following example:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
stringData:
  access_key_id: AKIAIOSFODNN7EXAMPLE
  access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
----

. Select *Create instance* under LokiStack on the *Details* tab. Then select *YAML view*. Paste in the following template, substituting values where appropriate.
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki <1>
  namespace: openshift-logging
spec:
  size: 1x.small <2>
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: logging-loki-s3 <3>
      type: s3 <4>
  storageClassName: <storage_class_name> <5>
  tenants:
    mode: openshift-logging
----
<1> Name should be `logging-loki`.
<2> Select your Loki deployment size.
<3> Define the secret used for your log storage.
<4> Define the corresponding storage type.
<5> Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using `oc get storageclasses`.

.. Apply the configuration:
+
[source,terminal]
----
oc apply -f logging-loki.yaml
----

. Create or edit a `ClusterLogging` CR:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector
----

.. Apply the configuration:
+
[source,terminal]
----
oc apply -f cr-lokistack.yaml
----
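The `Secret` created earlier in this procedure can be applied from the CLI in the same way; the file name here is illustrative:

[source,terminal]
----
oc apply -f logging-loki-s3.yaml
----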
@@ -7,8 +7,9 @@ With Logging version 5.6 and higher, you can configure retention policies based

. To enable stream-based retention, create or edit the `LokiStack` custom resource (CR):
+
--
include::snippets/logging-create-apply-cr-snip.adoc[lines=9..12]
--
. You can refer to the examples below to configure your LokiStack CR.
+
.Example global stream-based retention
@@ -17,7 +18,7 @@
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
@@ -34,13 +35,22 @@ spec:
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
    - effectiveDate: "2020-10-11"
      version: v11
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: standard
  tenants:
    mode: openshift-logging
@@ -55,7 +65,7 @@ spec:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
@@ -83,8 +93,8 @@ spec:
    - effectiveDate: "2020-10-11"
      version: v11
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: standard
  tenants:
    mode: openshift-logging
@@ -92,9 +102,11 @@ spec:
<1> Sets retention policy by tenant. Valid tenant types are `application`, `audit`, and `infrastructure`.
<2> Contains the link:https://grafana.com/docs/loki/latest/logql/query_examples/#query-examples[LogQL query] used to define the log stream.

. Apply your configuration:
+
--
include::snippets/logging-create-apply-cr-snip.adoc[lines=14..17]
--

[NOTE]
====
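The hunks above truncate the `limits` block that actually carries the retention settings. For orientation, here is a sketch of the global variant, assuming the LokiStack `limits.global.retention` schema; the day counts and the LogQL selector are illustrative:

[source,yaml]
----
spec:
  limits:
    global:
      retention:            # global retention applied to all tenants
        days: 20
        streams:
        - days: 4           # stream-level override, matched by selector
          priority: 1
          selector: '{kubernetes_namespace_name=~"test.+"}'  # illustrative LogQL match
----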
modules/logging-rn-5.5.8.adoc (new file, 22 lines)
@@ -0,0 +1,22 @@
:_content-type: REFERENCE
[id="logging-release-notes-5-5-8_{context}"]
= Logging 5.5.8

This release includes link:https://access.redhat.com/errata/RHSA-2023:0930[OpenShift Logging Bug Fix Release 5.5.8].

[id="logging-5-5-8-bug-fixes"]
== Bug fixes
* Before this update, the `priority` field was missing from `systemd` logs due to an error in how the collector set `level` fields. With this update, these fields are set correctly, resolving the issue. (link:https://issues.redhat.com/browse/LOG-3630[LOG-3630])

[id="logging-5-5-8-CVEs"]
== CVEs
* link:https://access.redhat.com/security/cve/CVE-2020-10735[CVE-2020-10735]
* link:https://access.redhat.com/security/cve/CVE-2021-28861[CVE-2021-28861]
* link:https://access.redhat.com/security/cve/CVE-2022-2873[CVE-2022-2873]
* link:https://access.redhat.com/security/cve/CVE-2022-4415[CVE-2022-4415]
* link:https://access.redhat.com/security/cve/CVE-2022-24999[CVE-2022-24999]
* link:https://access.redhat.com/security/cve/CVE-2022-40897[CVE-2022-40897]
* link:https://access.redhat.com/security/cve/CVE-2022-41222[CVE-2022-41222]
* link:https://access.redhat.com/security/cve/CVE-2022-41717[CVE-2022-41717]
* link:https://access.redhat.com/security/cve/CVE-2022-43945[CVE-2022-43945]
* link:https://access.redhat.com/security/cve/CVE-2022-45061[CVE-2022-45061]
* link:https://access.redhat.com/security/cve/CVE-2022-48303[CVE-2022-48303]
modules/logging-rn-5.6.0.adoc (new file, 86 lines)
@@ -0,0 +1,86 @@
//included in cluster-logging-release-notes.adoc
:_content-type: ASSEMBLY
[id="logging-release-notes-5-6-0_{context}"]
= Logging 5.6.0

This release includes link:https://access.redhat.com/errata/RHSA-2023:0264[OpenShift Logging Release 5.6].

[id="logging-5-6-dep-notice_{context}"]
== Deprecation notice
In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.

[id="logging-5-6-enhancements_{context}"]
== Enhancements
* With this update, Logging is compliant with {product-title} cluster-wide cryptographic policies. (link:https://issues.redhat.com/browse/LOG-895[LOG-895])

* With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. (link:https://issues.redhat.com/browse/LOG-2695[LOG-2695])

* With this update, Splunk is an available output option for log forwarding. (link:https://issues.redhat.com/browse/LOG-2913[LOG-2913])

* With this update, Vector replaces Fluentd as the default collector. (link:https://issues.redhat.com/browse/LOG-2222[LOG-2222])

* With this update, the *Developer* role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running {product-title} 4.11 and higher. (link:https://issues.redhat.com/browse/LOG-3388[LOG-3388])

* With this update, logs from any source contain a field `openshift.cluster_id`, the unique identifier of the cluster in which the Operator is deployed. You can view the `clusterID` value with the command below. (link:https://issues.redhat.com/browse/LOG-2715[LOG-2715])

include::snippets/logging-get-clusterid-snip.adoc[lines=9..12]
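The included snippet is not expanded in this diff; the command it wraps is presumably along these lines, since the cluster ID is stored in the `ClusterVersion` object:

[source,terminal]
----
# assumed equivalent of the snippet's command
oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
----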

[id="logging-5-6-known-issues_{context}"]
== Known Issues
* Elasticsearch can reject logs if multiple label keys have the same prefix and some keys include the `.` character, because `.` in label keys is replaced with `_`. As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. (link:https://issues.redhat.com/browse/LOG-3463[LOG-3463])

[id="logging-5-6-bug-fixes_{context}"]
== Bug fixes
* Before this update, if you deleted the Kibana Custom Resource, the {product-title} web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (link:https://issues.redhat.com/browse/LOG-2993[LOG-2993])

* Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (link:https://issues.redhat.com/browse/LOG-3072[LOG-3072])

* Before this update, the Operator removed any custom outputs defined in the `ClusterLogForwarder` custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the `ClusterLogForwarder` custom resource. (link:https://issues.redhat.com/browse/LOG-3090[LOG-3090])

* Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. (link:https://issues.redhat.com/browse/LOG-3331[LOG-3331])

* Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a `ReplicationFactor` of `1`. With this update, the Operator sets the actual value for the size used. (link:https://issues.redhat.com/browse/LOG-3296[LOG-3296])

* Before this update, Vector parsed the message field when JSON parsing was enabled without also defining `structuredTypeKey` or `structuredTypeName` values. With this update, a value is required for either `structuredTypeKey` or `structuredTypeName` when writing structured logs to Elasticsearch. (link:https://issues.redhat.com/browse/LOG-3195[LOG-3195])

* Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (link:https://issues.redhat.com/browse/LOG-3161[LOG-3161])

* Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (link:https://issues.redhat.com/browse/LOG-3157[LOG-3157])

* Before this update, Kibana had a fixed `24h` OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the `accessTokenInactivityTimeout` field was set to a value lower than `24h`. With this update, Kibana's OAuth cookie expiration time synchronizes to the `accessTokenInactivityTimeout`, with a default value of `24h`. (link:https://issues.redhat.com/browse/LOG-3129[LOG-3129])

* Before this update, the Operator's general pattern for reconciling resources was to try to create an object before attempting to get or update it, which led to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. (link:https://issues.redhat.com/browse/LOG-2919[LOG-2919])

* Before this update, the `.level` and `.structure.level` fields in Fluentd could contain different values. With this update, the values are the same for each field. (link:https://issues.redhat.com/browse/LOG-2819[LOG-2819])

* Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. (link:https://issues.redhat.com/browse/LOG-2789[LOG-2789])

* Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. (link:https://issues.redhat.com/browse/LOG-2315[LOG-2315])

* Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. (link:https://issues.redhat.com/browse/LOG-1806[LOG-1806])

* Before this update, the `must-gather` script did not complete because `oc` needs a folder with write permission to build its cache. With this update, `oc` has write permissions to a folder, and the `must-gather` script completes successfully. (link:https://issues.redhat.com/browse/LOG-3446[LOG-3446])

* Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. (link:https://issues.redhat.com/browse/LOG-3235[LOG-3235])

* Before this update, Vector was missing the field `sequence`, which was added to Fluentd as a way to deal with a lack of actual nanosecond precision. With this update, the field `openshift.sequence` has been added to the event logs. (link:https://issues.redhat.com/browse/LOG-3106[LOG-3106])

[id="logging-5-6-cves_{context}"]
== CVEs
* https://access.redhat.com/security/cve/CVE-2020-36518[CVE-2020-36518]
* https://access.redhat.com/security/cve/CVE-2021-46848[CVE-2021-46848]
* https://access.redhat.com/security/cve/CVE-2022-2879[CVE-2022-2879]
* https://access.redhat.com/security/cve/CVE-2022-2880[CVE-2022-2880]
* https://access.redhat.com/security/cve/CVE-2022-27664[CVE-2022-27664]
* https://access.redhat.com/security/cve/CVE-2022-32190[CVE-2022-32190]
* https://access.redhat.com/security/cve/CVE-2022-35737[CVE-2022-35737]
* https://access.redhat.com/security/cve/CVE-2022-37601[CVE-2022-37601]
* https://access.redhat.com/security/cve/CVE-2022-41715[CVE-2022-41715]
* https://access.redhat.com/security/cve/CVE-2022-42003[CVE-2022-42003]
* https://access.redhat.com/security/cve/CVE-2022-42004[CVE-2022-42004]
* https://access.redhat.com/security/cve/CVE-2022-42010[CVE-2022-42010]
* https://access.redhat.com/security/cve/CVE-2022-42011[CVE-2022-42011]
* https://access.redhat.com/security/cve/CVE-2022-42012[CVE-2022-42012]
* https://access.redhat.com/security/cve/CVE-2022-42898[CVE-2022-42898]
* https://access.redhat.com/security/cve/CVE-2022-43680[CVE-2022-43680]
modules/logging-rn-5.6.1.adoc (new file, 35 lines)
@@ -0,0 +1,35 @@
//module included in cluster-logging-release-notes.adoc
:_content-type: REFERENCE
[id="logging-release-notes-5-6-1_{context}"]
= Logging 5.6.1

This release includes link:https://access.redhat.com/errata/RHSA-2023:0634[OpenShift Logging Bug Fix Release 5.6.1].

[id="logging-5-6-1-bug-fixes"]
== Bug fixes
* Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. (link:https://issues.redhat.com/browse/LOG-3494[LOG-3494])

* Before this update, the Loki Operator would not retry setting the status of the `LokiStack` CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. (link:https://issues.redhat.com/browse/LOG-3496[LOG-3496])

* Before this update, the Loki Operator webhook server caused TLS errors when the `kube-apiserver-operator` Operator checked the webhook validity. With this update, the Loki Operator webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. (link:https://issues.redhat.com/browse/LOG-3510[LOG-3510])

* Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (link:https://issues.redhat.com/browse/LOG-3441[LOG-3441]), (link:https://issues.redhat.com/browse/LOG-3397[LOG-3397])

* Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (link:https://issues.redhat.com/browse/LOG-3463[LOG-3463])

* Before this update, the Red Hat OpenShift Logging Operator was not available for {product-title} 4.10 clusters because of an incompatibility between the {product-title} console and the logging-view-plugin. With this update, the plugin is properly integrated with the {product-title} 4.10 admin console. (link:https://issues.redhat.com/browse/LOG-3447[LOG-3447])

* Before this update, the reconciliation of the `ClusterLogForwarder` custom resource would incorrectly report a degraded status for pipelines that reference the default logstore. With this update, the pipeline validates properly. (link:https://issues.redhat.com/browse/LOG-3477[LOG-3477])

[id="logging-5-6-1-CVEs"]
== CVEs
* link:https://access.redhat.com/security/cve/CVE-2021-46848[CVE-2021-46848]
* link:https://access.redhat.com/security/cve/CVE-2022-3821[CVE-2022-3821]
* link:https://access.redhat.com/security/cve/CVE-2022-35737[CVE-2022-35737]
* link:https://access.redhat.com/security/cve/CVE-2022-42010[CVE-2022-42010]
* link:https://access.redhat.com/security/cve/CVE-2022-42011[CVE-2022-42011]
* link:https://access.redhat.com/security/cve/CVE-2022-42012[CVE-2022-42012]
* link:https://access.redhat.com/security/cve/CVE-2022-42898[CVE-2022-42898]
* link:https://access.redhat.com/security/cve/CVE-2022-43680[CVE-2022-43680]
* link:https://access.redhat.com/security/cve/CVE-2021-35065[CVE-2021-35065]
* link:https://access.redhat.com/security/cve/CVE-2022-46175[CVE-2022-46175]
modules/logging-rn-5.6.2.adoc (new file, 29 lines)
@@ -0,0 +1,29 @@
//module included in cluster-logging-release-notes.adoc
:_content-type: REFERENCE
[id="logging-release-notes-5-6-2_{context}"]
= Logging 5.6.2

This release includes link:https://access.redhat.com/errata/RHBA-2023:0793[OpenShift Logging Bug Fix Release 5.6.2].

[id="logging-5-6-2-bug-fixes"]
== Bug fixes
* Before this update, the collector did not set `level` fields correctly based on priority for systemd logs. With this update, `level` fields are set correctly. (link:https://issues.redhat.com/browse/LOG-3429[LOG-3429])

* Before this update, the Operator incorrectly generated incompatibility warnings on {product-title} 4.12 or later. With this update, the Operator max {product-title} version value has been corrected, resolving the issue. (link:https://issues.redhat.com/browse/LOG-3584[LOG-3584])

* Before this update, creating a `ClusterLogForwarder` custom resource (CR) with an output value of `default` did not generate any errors. With this update, an error warning that this value is invalid is generated appropriately. (link:https://issues.redhat.com/browse/LOG-3437[LOG-3437])

* Before this update, when the `ClusterLogForwarder` custom resource (CR) had multiple pipelines configured with one output set as `default`, the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. (link:https://issues.redhat.com/browse/LOG-3559[LOG-3559])

* Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. (link:https://issues.redhat.com/browse/LOG-3608[LOG-3608])

* Before this update, patch releases removed previous versions of the Operators from the catalog, making it impossible to install the old versions. This update changes bundle configurations so that previous releases of the same minor version stay in the catalog. (link:https://issues.redhat.com/browse/LOG-3635[LOG-3635])

[id="logging-5-6-2-CVEs"]
== CVEs
* link:https://access.redhat.com/security/cve/CVE-2022-23521[CVE-2022-23521]
* link:https://access.redhat.com/security/cve/CVE-2022-40303[CVE-2022-40303]
* link:https://access.redhat.com/security/cve/CVE-2022-40304[CVE-2022-40304]
* link:https://access.redhat.com/security/cve/CVE-2022-41903[CVE-2022-41903]
* link:https://access.redhat.com/security/cve/CVE-2022-47629[CVE-2022-47629]
* link:https://access.redhat.com/security/cve/CVE-2023-21835[CVE-2023-21835]
* link:https://access.redhat.com/security/cve/CVE-2023-21843[CVE-2023-21843]
modules/logging-rn-5.6.3.adoc (new file, 22 lines)
@@ -0,0 +1,22 @@
:_content-type: REFERENCE
[id="logging-release-notes-5-6-3_{context}"]
= Logging 5.6.3

This release includes link:https://access.redhat.com/errata/RHSA-2023:0932[OpenShift Logging Bug Fix Release 5.6.3].

[id="logging-5-6-3-bug-fixes"]
== Bug fixes
* Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. (link:https://issues.redhat.com/browse/LOG-3717[LOG-3717])

* Before this update, the Fluentd collector did not capture OAuth login events stored in `/var/log/auth-server/audit.log`. With this update, Fluentd captures these OAuth login events, resolving the issue. (link:https://issues.redhat.com/browse/LOG-3729[LOG-3729])

[id="logging-5-6-3-CVEs"]
== CVEs
* link:https://access.redhat.com/security/cve/CVE-2020-10735[CVE-2020-10735]
* link:https://access.redhat.com/security/cve/CVE-2021-28861[CVE-2021-28861]
* link:https://access.redhat.com/security/cve/CVE-2022-2873[CVE-2022-2873]
* link:https://access.redhat.com/security/cve/CVE-2022-4415[CVE-2022-4415]
* link:https://access.redhat.com/security/cve/CVE-2022-40897[CVE-2022-40897]
* link:https://access.redhat.com/security/cve/CVE-2022-41222[CVE-2022-41222]
* link:https://access.redhat.com/security/cve/CVE-2022-43945[CVE-2022-43945]
* link:https://access.redhat.com/security/cve/CVE-2022-45061[CVE-2022-45061]
* link:https://access.redhat.com/security/cve/CVE-2022-48303[CVE-2022-48303]
snippets/logging-crs-by-operator-snip.adoc (new file, 30 lines)
@@ -0,0 +1,30 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_content-type: SNIPPET

You can configure your {logging} deployment with custom resource (CR) YAML files implemented by each Operator.

*Red Hat OpenShift Logging Operator*:

* `ClusterLogging` (CL) - Deploys the collector and forwarder, which are currently both implemented by a daemonset running on each node.

* `ClusterLogForwarder` (CLF) - Generates collector configuration to forward logs per user configuration.

*Loki Operator*:

* `LokiStack` - Controls the Loki cluster as the log store, and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy.

*OpenShift Elasticsearch Operator*:

[NOTE]
====
These CRs are generated and managed by the `ClusterLogging` Operator; manual changes cannot be made without being overwritten by the Operator.
====

* `ElasticSearch` - Configures and deploys an Elasticsearch instance as the default log store.

* `Kibana` - Configures and deploys a Kibana instance to search, query, and view logs.
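For quick orientation on a live cluster, the CRs this snippet describes can be listed with standard `oc` queries; the resource names assume the CRDs installed by these Operators:

[source,terminal]
----
oc get clusterlogging,clusterlogforwarder -n openshift-logging
oc get lokistack -n openshift-logging
----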
@@ -8,5 +8,5 @@

[NOTE]
====
As of logging version 5.4.3, the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
====
@@ -8,5 +8,5 @@

[NOTE]
====
As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.
====
@@ -10,10 +10,6 @@ The {logging-title} collects container logs and node logs. These are categorized

* `application` - Container logs generated by non-infrastructure containers.

* `infrastructure` - Container logs from namespaces `kube-\*` and `openshift-\*`, and node logs from `journald`.

* `audit` - Logs from `auditd`, `kube-apiserver`, `openshift-apiserver`, and `ovn` if enabled.
snippets/logging-under-construction-snip.adoc (new file, 12 lines)
@@ -0,0 +1,12 @@
// Text snippet included in the following assemblies:
//
//
// Text snippet included in the following modules:
//
//
:_content-type: SNIPPET

[NOTE]
====
As of logging version 5.5, you can choose between the *Fluentd* and *Vector* collector implementations, and between *Elasticsearch* and *LokiStack* as the log store. Documentation for logging is in the process of being updated to reflect these underlying component changes.
====