mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

RHDEVDOCS-3298 Update 5.2 bugs that are missing release notes text.

This commit is contained in:
Rolfe Dlugy-Hegwer
2021-09-23 16:26:08 -04:00
committed by openshift-cherrypick-robot
parent 1ae454c4c8
commit 5ddbfcd654


@@ -1,4 +1,4 @@
[id="cluster-logging-release-notes"]
= Release notes for Red Hat OpenShift Logging 5.2
include::modules/common-attributes.adoc[]
:context: cluster-logging-release-notes-v5x
@@ -32,59 +32,69 @@ The following advisories are available for OpenShift Logging 5.2.x:
* With this update, you can forward log data to Grafana Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collector-log-forward-loki_cluster-logging-external[Forwarding logs to Grafana Loki]. (link:https://issues.redhat.com/browse/LOG-684[LOG-684])
* With this update, if you use the Fluentd forward protocol to forward log data over a TLS-encrypted connection, you can now use a password-encrypted private key file and specify the passphrase in the Cluster Log Forwarder configuration. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collector-log-forward-fluentd_cluster-logging-external[Forwarding logs using the Fluentd forward protocol]. (link:https://issues.redhat.com/browse/LOG-1525[LOG-1525])
* This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-es_cluster-logging-external[Forwarding logs to an external Elasticsearch instance]. (link:https://issues.redhat.com/browse/LOG-1022[LOG-1022])
* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collecting-ovn-audit-logs_cluster-logging-external[Collecting OVN network policy audit logs]. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
* By default, the data model introduced in {product-title} 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.
+
The current release adds namespace metrics to the *Logging* dashboard in the {product-title} console. With these metrics, you can see which namespaces produce logs and how many logs each namespace produces for a given timestamp.
+
To see these metrics, open the *Administrator* perspective in the {product-title} web console, and navigate to *Monitoring* -> *Dashboards* -> *Logging/Elasticsearch*. (link:https://issues.redhat.com/browse/LOG-1680[LOG-1680])
* The current release, OpenShift Logging 5.2, enables two new metrics: For a given timestamp or duration, you can see the total logs produced or logged by individual containers, and the total logs collected by the collector. These metrics are labeled by namespace, pod, and container name so that you can see how many logs each namespace and pod collects and produces. (link:https://issues.redhat.com/browse/LOG-1213[LOG-1213])
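
For illustration, the new forwarding options above might be combined in a `ClusterLogForwarder` custom resource similar to the following sketch. The output names (`loki-app`, `external-es`), URLs, and secret name are hypothetical placeholders, not values from this release:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: loki-app                          # hypothetical output name
    type: loki                              # Loki output type introduced in 5.2
    url: https://loki.example.com:3100      # placeholder endpoint
  - name: external-es                       # hypothetical output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: es-credentials                  # secret holding username and password keys
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    outputRefs:
    - loki-app
    - external-es
----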
[id="openshift-logging-5-2-0-bug-fixes"]
=== Bug fixes
* link:https://issues.redhat.com/browse/LOG-1130[LOG-1130] "BZ#1927249 - fieldmanager.go:186 - SHOULD NOT HAPPEN - failed to update managedFields...duplicate entries for key 'name="POLICY_MAPPING'"
* link:https://issues.redhat.com/browse/LOG-1268[LOG-1268] "elasticsearch-im-{app,infra,audit} successfully run but fail to run their tasks."
* link:https://issues.redhat.com/browse/LOG-1271[LOG-1271] "Logs Produced Line Chart does not display podname and namespace labels"
* link:https://issues.redhat.com/browse/LOG-1273[LOG-1273] "The index management job status is always 'Completed' even when there has an error in the job log."
* link:https://issues.redhat.com/browse/LOG-1385[LOG-1385] "APIRemovedInNextReleaseInUse alert for priorityclasses"
* link:https://issues.redhat.com/browse/LOG-1420[LOG-1420] "Operators missing disconnected annotation"
* link:https://issues.redhat.com/browse/LOG-1440[LOG-1440] "BZ#1966561 - OLM bug workaround for workload partitioning (PR#1042)"
* link:https://issues.redhat.com/browse/LOG-1499[LOG-1499] "release-5.2 - Error 'com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0x92' in Elasticsearch/Fluentd logs"
* link:https://issues.redhat.com/browse/LOG-1567[LOG-1567] "Use correct variable for nextIndex"
* link:https://issues.redhat.com/browse/LOG-1446[LOG-1446] "kibana-proxy CrashLoopBackoff with error Invalid configuration cookie_secret must be 16, 24, or 32 bytes to create an AES cipher"
* link:https://issues.redhat.com/browse/LOG-1625[LOG-1625] "ds/fluentd is not created due to: 'system:serviceaccount:openshift-logging:cluster-logging-operator' cannot create resource 'securitycontextconstraints' in API group 'security.openshift.io' at the cluster scope"
* link:https://issues.redhat.com/browse/LOG-1071[LOG-1071] "fluentd configuration posting all messages to its own log"
* link:https://issues.redhat.com/browse/LOG-1276[LOG-1276] "Update Elasticsearch/kibana to use opendistro security plugin 2.10.5.1"
* link:https://issues.redhat.com/browse/LOG-1353[LOG-1353] "No datapoints found on top 10 containers dashboard"
* link:https://issues.redhat.com/browse/LOG-1411[LOG-1411] "Underestimate queued_chunks_limit_size value with chunkLimitSize and totalLimitSize tuning parameters"
* link:https://issues.redhat.com/browse/LOG-1558[LOG-1558] "Update OD security dependency to resolve kibana index migration issue"
* link:https://issues.redhat.com/browse/LOG-1562[LOG-1562] "CVE-2021-32740 logging-fluentd-container: rubygem-addressable: ReDoS in templates - openshift-logging-5"
* link:https://issues.redhat.com/browse/LOG-1570[LOG-1570] "Bug 1981579: Fix built-in application behavior to collect all of logs"
* link:https://issues.redhat.com/browse/LOG-1589[LOG-1589] "There are lots of dockercfg secrets and service-account-token secrets for ES and Kibana in openshift-logging namespace after deploying EFK pods."
* link:https://issues.redhat.com/browse/LOG-1590[LOG-1590] "Vendored viaq/logerr dependency is missing a license file"
* link:https://issues.redhat.com/browse/LOG-1623[LOG-1623] "Metric `log_collected_bytes_total` is not exposed"
* link:https://issues.redhat.com/browse/LOG-1624[LOG-1624] "Index management cronjobs are using wrong image in CSV/elasticsearch-operator.5.2.0-1"
* link:https://issues.redhat.com/browse/LOG-1634[LOG-1634] "Logging 5.2 - The CSV version is not changed in new bundles."
* link:https://issues.redhat.com/browse/LOG-1647[LOG-1647] "Fluentd pods raise error `Prometheus::Client::LabelSetValidator::InvalidLabelSetError` when forward logs to external logStore"
* link:https://issues.redhat.com/browse/LOG-1657[LOG-1657] "Index management jobs failing with error while attemping to determine the active write alias no permissions"
* link:https://issues.redhat.com/browse/LOG-1681[LOG-1681] "Fluentd pod metric not able to scrape , Fluentd target down"
* link:https://issues.redhat.com/browse/LOG-1683[LOG-1683] "Loki output not present in CLO CRD"
* link:https://issues.redhat.com/browse/LOG-1702[LOG-1702] "Entry out of order when forward logs to loki"
* link:https://issues.redhat.com/browse/LOG-1714[LOG-1714] "Memory/CPU spike issues seen with Logging 5.2 on Power"
* link:https://issues.redhat.com/browse/LOG-1722[LOG-1722] "The value of card `Total Namespace Count` in Logging/Elasticsearch dashboard is not correct."
* link:https://issues.redhat.com/browse/LOG-1723[LOG-1723] "In fluentd config, flush_interval can't be set with flush_mode=immediate"
* Before this update, when the OpenShift Elasticsearch Operator created index management cronjobs, it added the `POLICY_MAPPING` environment variable twice, which caused the apiserver to report the duplication. This update fixes the issue so that the `POLICY_MAPPING` environment variable is set only once per cronjob, and there is no duplication for the apiserver to report. (link:https://issues.redhat.com/browse/LOG-1130[LOG-1130])
* Before this update, suspending an Elasticsearch cluster to zero nodes did not suspend the index-management cronjobs, which put these cronjobs into maximum backoff. Then, after unsuspending the Elasticsearch cluster, these cronjobs stayed halted due to maximum backoff reached. This update resolves the issue by suspending the cronjobs and the cluster. (link:https://issues.redhat.com/browse/LOG-1268[LOG-1268])
* Before this update, in the *Logging* dashboard in the {product-title} console, the list of top 10 log-producing containers was missing the "chart namespace" label and provided the incorrect metric name, `fluentd_input_status_total_bytes_logged`. With this update, the chart shows the namespace label and the correct metric name, `log_logged_bytes_total`. (link:https://issues.redhat.com/browse/LOG-1271[LOG-1271])
* Before this update, if an index management cronjob terminated with an error, it did not report the error exit code: instead, its job status was "complete." This update resolves the issue by reporting the error exit codes of index management cronjobs that terminate with errors. (link:https://issues.redhat.com/browse/LOG-1273[LOG-1273])
* The `priorityclasses.v1beta1.scheduling.k8s.io` API was removed in Kubernetes 1.22 and replaced by `priorityclasses.v1.scheduling.k8s.io`. Before this update, `APIRemovedInNextReleaseInUse` alerts were generated for `priorityclasses` because `v1beta1` was still present. This update resolves the issue by replacing `v1beta1` with `v1`. The alert is no longer generated. (link:https://issues.redhat.com/browse/LOG-1385[LOG-1385])
* Previously, the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator did not have the annotation that was required for them to appear in the {product-title} web console list of operators that can run in a disconnected environment. This update adds the `operators.openshift.io/infrastructure-features: '["Disconnected"]'` annotation to these two operators so that they appear in the list of operators that run in disconnected environments. (link:https://issues.redhat.com/browse/LOG-1420[LOG-1420])
* Before this update, Red Hat OpenShift Logging Operator pods were scheduled on CPU cores that were reserved for customer workloads on performance-optimized single-node clusters. With this update, cluster logging operator pods are scheduled on the correct CPU cores. (link:https://issues.redhat.com/browse/LOG-1440[LOG-1440])
* Before this update, some log entries had unrecognized UTF-8 bytes, which caused Elasticsearch to reject the messages and block the entire buffered payload. With this update, rejected payloads drop the invalid log entries and resubmit the remaining entries to resolve the issue. (link:https://issues.redhat.com/browse/LOG-1499[LOG-1499])
* Before this update, the `kibana-proxy` Pod sometimes entered the `CrashLoopBackoff` state and logged the following message: `Invalid configuration: cookie_secret must be 16, 24, or 32 bytes to create an AES cipher when pass_access_token == true or cookie_refresh != 0, but is 29 bytes.` The actual number of bytes could vary. With this update, the generation of the Kibana session secret has been corrected, and the `kibana-proxy` Pod no longer enters a `CrashLoopBackoff` state due to this error. (link:https://issues.redhat.com/browse/LOG-1446[LOG-1446])
* Before this update, the AWS CloudWatch Fluentd plugin logged its AWS API calls to the Fluentd log at all log levels, consuming additional {product-title} node resources. With this update, the AWS CloudWatch Fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. (link:https://issues.redhat.com/browse/LOG-1071[LOG-1071])
* Before this update, the Elasticsearch OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1276[LOG-1276])
* Before this update, in the *Logging* dashboard in the {product-title} console, the list of top 10 log-producing containers lacked data points. This update resolves the issue, and the dashboard displays all data points. (link:https://issues.redhat.com/browse/LOG-1353[LOG-1353])
* Before this update, if you were tuning the performance of the Fluentd log forwarder by adjusting the `chunkLimitSize` and `totalLimitSize` values, the `Setting queued_chunks_limit_size for each buffer to` message reported values that were too low. The current update fixes this issue so that this message reports the correct values. (link:https://issues.redhat.com/browse/LOG-1411[LOG-1411])
* Before this update, the Kibana OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. (link:https://issues.redhat.com/browse/LOG-1558[LOG-1558])
* Before this update, using a namespace input filter prevented logs in that namespace from appearing in other inputs. With this update, logs are sent to all inputs that can accept them. (link:https://issues.redhat.com/browse/LOG-1570[LOG-1570])
* Before this update, a missing license file for the `viaq/logerr` dependency caused license scanners to abort without success. With this update, the `viaq/logerr` dependency is licensed under Apache 2.0 and the license scanners run successfully. (link:https://issues.redhat.com/browse/LOG-1590[LOG-1590])
* Before this update, an incorrect brew tag for `curator5` within the `elasticsearch-operator-bundle` build pipeline caused the pull of an image pinned to a dummy SHA1. With this update, the build pipeline uses the `logging-curator5-rhel8` reference for `curator5`, which enables index management cronjobs to pull the correct image from `registry.redhat.io`. (link:https://issues.redhat.com/browse/LOG-1624[LOG-1624])
* Before this update, an issue with the `ServiceAccount` permissions caused errors such as `no permissions for [indices:admin/aliases/get]`. With this update, a permission fix resolves the issue. (link:https://issues.redhat.com/browse/LOG-1657[LOG-1657])
* Before this update, the Custom Resource Definition (CRD) for the Red Hat OpenShift Logging Operator was missing the Loki output type, which caused the admission controller to reject the `ClusterLogForwarder` custom resource object. With this update, the CRD includes Loki as an output type so that administrators can configure `ClusterLogForwarder` to send logs to a Loki server. (link:https://issues.redhat.com/browse/LOG-1683[LOG-1683])
* Before this update, OpenShift Elasticsearch Operator reconciliation of the `ServiceAccounts` overwrote third-party-owned fields that contained secrets. This issue caused memory and CPU spikes due to frequent recreation of secrets. This update resolves the issue. Now, the OpenShift Elasticsearch Operator does not overwrite third-party-owned fields. (link:https://issues.redhat.com/browse/LOG-1714[LOG-1714])
* Before this update, in the `ClusterLogging` custom resource (CR) definition, if you specified a `flush_interval` value but did not set `flush_mode` to `interval`, the Red Hat OpenShift Logging Operator generated a Fluentd configuration. However, the Fluentd collector generated an error at runtime. With this update, the Red Hat OpenShift Logging Operator validates the `ClusterLogging` CR definition and only generates the Fluentd configuration if both fields are specified. (link:https://issues.redhat.com/browse/LOG-1723[LOG-1723])
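
The `flush_mode` and `flush_interval` validation described in the last fix can be sketched with a hypothetical `ClusterLogging` tuning stanza. The field names follow the Fluentd buffer tuning parameters mentioned in these notes, and the values shown are illustrative only:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:
        flushMode: interval      # flushInterval only takes effect with this mode
        flushInterval: 5s        # illustrative value
        chunkLimitSize: 8m       # tuning parameters discussed in LOG-1411
        totalLimitSize: 512m
----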
[id="openshift-logging-5-2-0-known-issues"]
=== Known issues
* If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. (link:https://issues.redhat.com/browse/LOG-1652[LOG-1652])
+
As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering:
+