//module included in logging-5-8-release-notes.adoc

:_mod-docs-content-type: REFERENCE
[id="logging-release-notes-5-8-0_{context}"]
= Logging 5.8.0

This release includes link:https://access.redhat.com/errata/RHBA-2023:6139[OpenShift Logging Bug Fix Release 5.8.0] and link:https://access.redhat.com/errata/RHBA-2023:6134[OpenShift Logging Bug Fix Release 5.8.0 Kibana].

[id="logging-release-notes-5-8-0-deprecation-notice"]
== Deprecation notice

In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of {product-title}. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the {clo} and LokiStack provided by the {loki-op} are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this is the stack that will be enhanced going forward.

[id="logging-release-notes-5-8-0-enhancements"]
== Enhancements

[id="logging-release-notes-5-8-0-log-collection"]
=== Log Collection

* With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the `LogFileMetricExporter` CR, you may see a *No datapoints found* message in the {product-title} web console dashboard for *Produced Logs*. (link:https://issues.redhat.com/browse/LOG-3819[LOG-3819])
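+
The following is a minimal sketch of such a CR, assuming the `logging.openshift.io/v1alpha1` API version and the `openshift-logging` namespace; adjust the resource requests and limits for your environment:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec:
  resources: # example values; tune for your cluster
    requests:
      cpu: 200m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi
----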

* With this update, you can deploy multiple, isolated, and RBAC-protected `ClusterLogForwarder` custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. (link:https://issues.redhat.com/browse/LOG-1343[LOG-1343])
+
[IMPORTANT]
====
To support multi-cluster log forwarding in namespaces other than the `openshift-logging` namespace, you must update the {clo} to watch all namespaces. This functionality is supported by default in new {clo} version 5.8 installations.
====
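+
A minimal sketch of a forwarder in its own namespace, assuming a hypothetical `my-project` namespace, `my-forwarder-sa` service account, and Loki endpoint; the `serviceAccountName` field is required for forwarders outside the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder        # hypothetical name
  namespace: my-project     # any namespace watched by the Operator
spec:
  serviceAccountName: my-forwarder-sa # must be allowed to collect the chosen input types
  outputs:
  - name: remote-loki
    type: loki
    url: https://loki.example.com:3100 # hypothetical endpoint
  pipelines:
  - name: app-logs
    inputRefs:
    - application
    outputRefs:
    - remote-loki
----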

* With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly performing containers from overloading the {logging-uc}, and the output limits put a ceiling on the rate of logs shipped to a given data store. (link:https://issues.redhat.com/browse/LOG-884[LOG-884])
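+
A sketch of both kinds of limit in a `ClusterLogForwarder` spec, assuming `containerLimit` and output `limit` blocks with a `maxRecordsPerSecond` field and hypothetical input and output names; verify the exact field names against the 5.8 `ClusterLogForwarder` API:
+
[source,yaml]
----
spec:
  inputs:
  - name: noisy-apps               # hypothetical input name
    application:
      containerLimit:
        maxRecordsPerSecond: 100   # per-container ceiling; excess records are dropped
  outputs:
  - name: central-store            # hypothetical output name
    type: kafka
    url: tls://kafka.example.com:9093/logs # hypothetical endpoint
    limit:
      maxRecordsPerSecond: 10000   # ceiling on logs shipped to this output
----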

* With this update, you can configure the log collector to listen for HTTP connections and receive logs as an HTTP server, also known as a webhook. (link:https://issues.redhat.com/browse/LOG-4562[LOG-4562])
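+
A sketch of an HTTP receiver input, assuming a `receiver.http` block with `port` and `format: kubeAPIAudit` fields and a hypothetical output reference; verify the exact schema against the 5.8 `ClusterLogForwarder` API:
+
[source,yaml]
----
spec:
  inputs:
  - name: http-receiver          # hypothetical input name
    receiver:
      http:
        port: 8443               # port the collector listens on
        format: kubeAPIAudit     # expected payload format
  pipelines:
  - name: http-pipeline
    inputRefs:
    - http-receiver
    outputRefs:
    - default                    # hypothetical output reference
----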

* With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. (link:https://issues.redhat.com/browse/LOG-3982[LOG-3982])
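+
A sketch of such a policy as a `ClusterLogForwarder` filter, assuming the `kubeAPIAudit` filter type with rules that follow the Kubernetes audit policy format; the rules shown are illustrative only:
+
[source,yaml]
----
spec:
  filters:
  - name: audit-policy           # hypothetical filter name
    type: kubeAPIAudit
    kubeAPIAudit:
      omitStages:
      - RequestReceived          # do not forward events from this stage
      rules:
      - level: RequestResponse   # keep full detail for pod events
        resources:
        - group: ""
          resources: ["pods"]
      - level: None              # drop everything else
  pipelines:
  - name: audit-pipeline
    inputRefs:
    - audit
    filterRefs:
    - audit-policy
    outputRefs:
    - default                    # hypothetical output reference
----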

[id="logging-release-notes-5-8-0-log-storage"]
=== Log Storage

* With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a per-namespace basis. (link:https://issues.redhat.com/browse/LOG-3841[LOG-3841])
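+
A sketch of a namespace-scoped grant, assuming the `cluster-logging-application-view` cluster role provided by the {clo} and hypothetical user and namespace names:
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-read-application-logs # hypothetical name
  namespace: my-project             # namespace whose logs are being granted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-logging-application-view # grants read access to application logs
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: developer-1                 # hypothetical user
----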

* With this update, the {loki-op} introduces `PodDisruptionBudget` configuration on LokiStack deployments to ensure normal operations during {product-title} cluster restarts by keeping ingestion and the query path available. (link:https://issues.redhat.com/browse/LOG-3839[LOG-3839])

* With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. (link:https://issues.redhat.com/browse/LOG-3840[LOG-3840])

* With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. (link:https://issues.redhat.com/browse/LOG-3266[LOG-3266])
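+
A sketch of the relevant `LokiStack` settings, assuming a `spec.replication` block that spreads replicas over the standard `topology.kubernetes.io/zone` topology key:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  replication:
    factor: 2                  # number of replicas per log stream
    zones:
    - topologyKey: topology.kubernetes.io/zone
      maxSkew: 1               # maximum difference in pod count between zones
----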

* With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for {product-title} clusters that host a few workloads and have smaller ingestion volumes (up to 100GB/day). (link:https://issues.redhat.com/browse/LOG-4329[LOG-4329])
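+
A sketch of selecting this size in the `LokiStack` CR, with a hypothetical object storage secret and storage class:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.extra-small       # new small-scale deployment size
  storage:
    secret:
      name: logging-loki-s3  # hypothetical secret with object storage credentials
      type: s3
  storageClassName: gp3-csi  # hypothetical storage class
----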

* With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. (link:https://issues.redhat.com/browse/LOG-4327[LOG-4327])

[id="logging-release-notes-5-8-0-log-console"]
=== Log Console

* With this update, you can enable the Logging Console Plugin when Elasticsearch is the default log store. (link:https://issues.redhat.com/browse/LOG-3856[LOG-3856])

* With this update, {product-title} application owners can receive notifications for application log-based alerts on the {product-title} web console *Developer* perspective for {product-title} version 4.14 and later. (link:https://issues.redhat.com/browse/LOG-3548[LOG-3548])

[id="logging-release-notes-5-8-0-known-issues"]
== Known Issues

* Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the {clo}. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the link:https://datatracker.ietf.org/doc/html/rfc5746[RFC 5746] extension.
+
As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Because Splunk is a third-party system, this should be configured from the Splunk end.

* Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an `RST_STREAM` frame to cancel it. This creates extra work for the server to set up and tear down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-4609[LOG-4609])

* Currently, when using Fluentd as the collector, the collector pod cannot start on an IPv6-enabled {product-title} cluster. The pod logs produce the `fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known` error. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-4706[LOG-4706])

* Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-4709[LOG-4709])

* Currently, `must-gather` cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the `cluster-logging-rhel9-operator`. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-4403[LOG-4403])

* Currently, when deploying the {logging} version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in `CrashLoopBackOff` status when using Fluentd as the collector. There is currently no workaround for this issue. (link:https://issues.redhat.com/browse/LOG-3933[LOG-3933])

[id="logging-release-notes-5-8-0-CVEs"]
== CVEs

* link:https://access.redhat.com/security/cve/CVE-2023-40217[CVE-2023-40217]