Mirror of https://github.com/openshift/openshift-docs.git

OBSDOCS-1361
Commit f82eeb8bc1 (parent 432ad1dbde), committed by Kathryn Alexander
@@ -2972,6 +2972,23 @@ Topics:
    File: log6x-visual
#  - Name: API reference 6.0
#    File: log6x-api-reference
  - Name: Logging 6.1
    Dir: logging-6.1
    Topics:
    - Name: Release notes
      File: log6x-release-notes-6.1
    - Name: About logging 6.1
      File: log6x-about-6.1
    - Name: Configuring log forwarding
      File: log6x-clf-6.1
    - Name: Configuring LokiStack storage
      File: log6x-loki-6.1
    - Name: Configuring LokiStack for OTLP
      File: log6x-configuring-lokistack-otlp-6.1
    - Name: OpenTelemetry data model
      File: log6x-opentelemetry-data-model-6.1
    - Name: Visualization for logging
      File: log6x-visual-6.1
    - Name: Support
      File: cluster-logging-support
    - Name: Troubleshooting logging
modules/log6x-6-1-0-rn.adoc (new file, 42 lines)
@@ -0,0 +1,42 @@
// Module included in the following assemblies:
// log6x-release-notes-6.1

:_mod-docs-content-type: REFERENCE
[id="logging-release-notes-6-1-0_{context}"]
= Logging 6.1.0 Release Notes

This release includes link:https://access.redhat.com/errata/RHBA-2024:9038[{OCP-short} {logging} Release 6.1.0].

[id="openshift-logging-release-notes-6-1-0-enhancements"]
== New Features and Enhancements

=== Log Collection

* This enhancement adds the source `iostream` to the attributes sent from collected container logs. The value is set to either `stdout` or `stderr`, based on how the collector received it. (link:https://issues.redhat.com/browse/LOG-5292[LOG-5292])

* With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster's specific needs and specifications. (link:https://issues.redhat.com/browse/LOG-6072[LOG-6072])

* With this update, users can now set the syslog output delivery mode of the `ClusterLogForwarder` CR to either `AtLeastOnce` or `AtMostOnce`. (link:https://issues.redhat.com/browse/LOG-6355[LOG-6355])

=== Log Storage

* With this update, the new `1x.pico` LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). (link:https://issues.redhat.com/browse/LOG-5939[LOG-5939])

[id="logging-release-notes-6-1-0-technology-preview-features"]
== Technology Preview

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

* With this update, OpenTelemetry logs can now be forwarded using the `OTel` (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the `observability.openshift.io/tech-preview-otlp-output: "enabled"` annotation to your `ClusterLogForwarder` configuration. For additional configuration information, see link:https://github.com/openshift/cluster-logging-operator/blob/master/docs/features/logforwarding/outputs/opentelemetry-lokistack-forwarding.adoc[OTLP Forwarding].

* With this update, a `dataModel` field has been added to the `lokiStack` output specification. Set `dataModel` to `Otel` to configure log forwarding using the OpenTelemetry data format. The default is `Viaq`. For information about data mapping, see the link:https://opentelemetry.io/docs/specs/otlp/[OTLP Specification].

[id="logging-release-notes-6-1-0-bug-fixes_{context}"]
== Bug Fixes
None.

[id="logging-release-notes-6-1-0-CVEs_{context}"]
== CVEs

* link:https://access.redhat.com/security/cve/CVE-2024-6119[CVE-2024-6119]
* link:https://access.redhat.com/security/cve/CVE-2024-6232[CVE-2024-6232]
modules/log6x-configuring-lokistack-otlp-data-ingestion.adoc (new file, 42 lines)
@@ -0,0 +1,42 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-configuring-lokistack-otlp.adoc

:_mod-docs-content-type: PROCEDURE
[id="log6x-configuring-lokistack-otlp-data-ingestion_{context}"]
= Configuring LokiStack for OTLP data ingestion

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

To configure a `LokiStack` custom resource (CR) for OTLP ingestion, follow these steps:

.Prerequisites

* Ensure that your Loki setup supports structured metadata, which was introduced in schema version 13 and enables OTLP log ingestion.

.Procedure

. Set the schema version:
+
** When creating a new `LokiStack` CR, set `version: v13` in the storage schema configuration.
+
[NOTE]
====
For existing configurations, add a new schema entry with `version: v13` and an `effectiveDate` in the future. For more information on updating schema versions, see link:https://grafana.com/docs/loki/latest/configure/storage/#upgrading-schemas[Upgrading Schemas] (Grafana documentation).
====

. Configure the storage schema as follows:
+
.Example storage schema configuration
[source,yaml]
----
# ...
spec:
  storage:
    schemas:
    - version: v13
      effectiveDate: 2024-10-25
----
+
After the `effectiveDate` has passed, the v13 schema takes effect, enabling your `LokiStack` to store structured metadata.
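
For an existing `LokiStack` that is still on an older schema, the migration described in the note might look like the following sketch. The `v12` entry and both dates are illustrative assumptions; keep your existing entries unchanged and only append the new one:

[source,yaml]
----
# ...
spec:
  storage:
    schemas:
    - version: v12          # existing schema entry, keeps previously written data readable
      effectiveDate: "2024-03-01"
    - version: v13          # new entry; must use a future date at the time you apply it
      effectiveDate: "2024-10-25"
----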
modules/log6x-configuring-otlp-output.adoc (new file, 56 lines)
@@ -0,0 +1,56 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-clf.adoc

:_mod-docs-content-type: PROCEDURE
[id="log6x-configuring-otlp-output_{context}"]
= Configuring OTLP output

Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Observability framework] to send data over HTTP with JSON encoding.

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

.Procedure

* Create or edit a `ClusterLogForwarder` custom resource (CR) to enable forwarding using OTLP by adding the following annotation:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" # <1>
  name: clf-otlp
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: otlp
    type: otlp
    otlp:
      tuning:
        compression: gzip
        deliveryMode: AtLeastOnce
        maxRetryDuration: 20
        maxWrite: 10M
        minRetryDuration: 5
      url: <otlp_url> # <2>
  pipelines:
  - inputRefs:
    - application
    - infrastructure
    - audit
    name: otlp-logs
    outputRefs:
    - otlp
----
<1> Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature.
<2> This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent.

[NOTE]
====
The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP specification, using the link:https://opentelemetry.io/docs/specs/semconv/[OpenTelemetry Semantic Conventions] defined by the OpenTelemetry Observability framework.
====
modules/log6x-loki-sizing.adoc (new file, 93 lines)
@@ -0,0 +1,93 @@
// Module is included in the following assemblies:
// * observability/logging/logging-6.1/log6x-loki-6.1.adoc

:_mod-docs-content-type: CONCEPT
[id="log6x-loki-sizing_{context}"]
= Loki deployment sizing

Sizing for Loki follows the format of `1x.<size>`, where the value `1x` is the number of instances and `<size>` specifies performance capabilities.

The `1x.pico` configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction.

Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs.

[IMPORTANT]
====
It is not possible to change the number `1x` for the deployment size.
====

.Loki sizing
[cols="1h,5*",options="header"]
|===
|
|1x.demo
|1x.pico [6.1+ only]
|1x.extra-small
|1x.small
|1x.medium

|Data transfer
|Demo use only
|50GB/day
|100GB/day
|500GB/day
|2TB/day

|Queries per second (QPS)
|Demo use only
|1-25 QPS at 200ms
|1-25 QPS at 200ms
|25-50 QPS at 200ms
|25-75 QPS at 200ms

|Replication factor
|None
|2
|2
|2
|2

|Total CPU requests
|None
|7 vCPUs
|14 vCPUs
|34 vCPUs
|54 vCPUs

|Total CPU requests if using the ruler
|None
|8 vCPUs
|16 vCPUs
|42 vCPUs
|70 vCPUs

|Total memory requests
|None
|17Gi
|31Gi
|67Gi
|139Gi

|Total memory requests if using the ruler
|None
|18Gi
|35Gi
|83Gi
|171Gi

|Total disk requests
|40Gi
|590Gi
|430Gi
|430Gi
|590Gi

|Total disk requests if using the ruler
|80Gi
|910Gi
|750Gi
|750Gi
|910Gi
|===
modules/log6x-quickstart-opentelemetry.adoc (new file, 158 lines)
@@ -0,0 +1,158 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-about.adoc

:_mod-docs-content-type: PROCEDURE
[id="quick-start-opentelemetry_{context}"]
= Quick start with OpenTelemetry

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:

.Prerequisites
* Cluster administrator permissions

.Procedure

. Install the {clo}, {loki-op}, and {coo-first} from OperatorHub.

. Create a `LokiStack` custom resource (CR) in the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
----
+
[NOTE]
====
Ensure that the `logging-loki-s3` secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
====

. Create a service account for the collector:
+
[source,terminal]
----
$ oc create sa collector -n openshift-logging
----

. Allow the collector's service account to write data to the `LokiStack` CR:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
----
+
[NOTE]
====
The `ClusterRole` resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
====

. Allow the collector's service account to collect logs:
+
[source,terminal]
----
$ oc project openshift-logging
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
----
+
[NOTE]
====
The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your `ClusterLogForwarder` configuration to include them. Assign roles based on the specific log types required for your environment.
====

. Create a `UIPlugin` CR to enable the *Log* section in the *Observe* tab:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
----

. Create a `ClusterLogForwarder` CR to configure log forwarding:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" # <1>
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: loki-otlp
    type: lokiStack # <2>
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel # <3>
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - loki-otlp
----
<1> Use the annotation to enable the `Otel` data model, which is a Technology Preview feature.
<2> Define the output type as `lokiStack`.
<3> Specify the OpenTelemetry data model.
+
[NOTE]
====
You cannot use `lokiStack.labelKeys` when `dataModel` is `Otel`. To achieve similar functionality when `dataModel` is `Otel`, refer to "Configuring LokiStack for OTLP data ingestion".
====

.Verification
* Verify that OTLP is functioning correctly by going to *Observe* -> *OpenShift Logging* -> *LokiStack* -> *Writes* in the OpenShift web console, and checking *Distributor - Structured Metadata*.
modules/log6x-quickstart-viaq.adoc (new file, 149 lines)
@@ -0,0 +1,149 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-about.adoc

:_mod-docs-content-type: PROCEDURE
[id="quick-start-viaq_{context}"]
= Quick start with ViaQ

To use the default ViaQ data model, follow these steps:

.Prerequisites
* Cluster administrator permissions

.Procedure

. Install the {clo}, {loki-op}, and {coo-first} from OperatorHub.

. Create a `LokiStack` custom resource (CR) in the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
----
+
[NOTE]
====
Ensure that the `logging-loki-s3` secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
====

. Create a service account for the collector:
+
[source,terminal]
----
$ oc create sa collector -n openshift-logging
----

. Allow the collector's service account to write data to the `LokiStack` CR:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector
----
+
[NOTE]
====
The `ClusterRole` resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
====

. Allow the collector's service account to collect logs:
+
[source,terminal]
----
$ oc project openshift-logging
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
----
+
[NOTE]
====
The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your `ClusterLogForwarder` configuration to include them. Assign roles based on the specific log types required for your environment.
====

. Create a `UIPlugin` CR to enable the *Log* section in the *Observe* tab:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
----

. Create a `ClusterLogForwarder` CR to configure log forwarding:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
----
+
[NOTE]
====
The `dataModel` field is optional and left unset (`dataModel: ""`) by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying `dataModel: ViaQ` ensures the configuration remains compatible if the default changes, as in the sketch that follows.
====
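
A minimal sketch of pinning the data model on the output from the previous example (only the relevant fragment is shown):

[source,yaml]
----
# ...
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      dataModel: ViaQ # explicitly pin ViaQ so a future default change does not alter behavior
# ...
----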

.Verification
* Verify that logs are visible in the *Log* section of the *Observe* tab in the OpenShift web console.
@@ -164,4 +164,4 @@ spec:
    - default-lokistack
----

. Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console.
@@ -1,7 +1,7 @@
 :_mod-docs-content-type: ASSEMBLY
-include::_attributes/common-attributes.adoc[leveloffset=+1]
 [id="log6x-loki"]
 = Storing logs with LokiStack
+include::_attributes/common-attributes.adoc[]
 :context: logging-6x

 toc::[]
@@ -18,6 +18,7 @@ You can configure a `LokiStack` CR to store application, audit, and infrastructu
 === Core Setup and Configuration
 *Role-based access controls, basic monitoring, and pod placement to deploy Loki.*

+include::modules/log6x-loki-sizing.adoc[leveloffset=+1]
 include::modules/log6x-loki-rbac-rules-perms.adoc[leveloffset=+1]
 include::modules/log6x-enabling-loki-alerts.adoc[leveloffset=+1]
 include::modules/log6x-loki-memberlist-ip.adoc[leveloffset=+1]
@@ -9,14 +9,14 @@ include::_attributes/common-attributes.adoc[]
 Do not include this file in the topic map. This is a guide meant for contributors, and is not intended to be published.
 ====

-Logging consists of the Red Hat Openshift Logging Operator (aka the Cluster Logging Operator), and an accompanying log store Operator. Either the Loki Operator (current/future), or Elasticsearch (deprecated). Either vector (current/future) or fluentd (deprecated) handles log collection and aggregation. Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.
+Logging consists of the Red Hat OpenShift Logging Operator (also known as the Cluster Logging Operator) and an accompanying log store Operator: either the Loki Operator (current/future) or Elasticsearch (deprecated). Either vector (current/future) or fluentd (deprecated) handles log collection and aggregation. Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator's logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.

 == Operator CRs:
 * `Red Hat OpenShift Logging Operator`
 ** (Deprecated) `ClusterLogging` (CL) - Deploys the collector and forwarder, which are currently both implemented by a daemonset running on each node.
 ** `ClusterLogForwarder` (CLF) - Generates collector configuration to forward logs per user configuration.
 * `Loki Operator`:
-** `LokiStack` - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy.
+** `LokiStack` - Controls the Loki cluster as log store and the web proxy with {product-title} authentication integration to enforce multi-tenancy.
 ** `AlertingRule` - Alerting rules allow you to define alert conditions based on LogQL expressions.
 ** `RecordingRule` - Recording rules allow you to precompute frequently needed or computationally expensive expressions and save their result as a new set of time series.
 ** `RulerConfig` - The ruler API endpoints require a backend object storage to be configured to store the recording rules and alerts.
observability/logging/logging-6.1/_attributes (new symbolic link)
@@ -0,0 +1 @@
../../../_attributes/
observability/logging/logging-6.1/images (new symbolic link)
@@ -0,0 +1 @@
../../../images/
observability/logging/logging-6.1/log6x-about-6.1.adoc (new file, 58 lines)
@@ -0,0 +1,58 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="log6x-about-6-1"]
= Logging 6.1
:context: logging-6x-6.1

toc::[]

The `ClusterLogForwarder` custom resource (CR) is the central configuration point for log collection and forwarding.

[id="inputs-and-outputs_6-1_{context}"]
== Inputs and outputs

Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: `application`, `receiver`, `infrastructure`, and `audit`, which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.

Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.

[id="receiver-input-type_6-1_{context}"]
== Receiver input type
The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: `http` and `syslog`.

The `ReceiverSpec` defines the configuration for a receiver input, as shown in the sketch that follows.
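
The following is a minimal sketch of an `http` receiver input; the input name and port are assumptions for illustration:

[source,yaml]
----
# ...
spec:
  inputs:
  - name: http-receiver      # hypothetical input name
    type: receiver
    receiver:
      type: http
      port: 8443             # port the collector listens on
      http:
        format: kubeAPIAudit # format of the incoming payload
----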

[id="pipelines-and-filters_6-1_{context}"]
== Pipelines and filters

Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages.

[id="operator-behavior_6-1_{context}"]
== Operator behavior

The Cluster Logging Operator manages the deployment and configuration of the collector based on the `managementState` field of the `ClusterLogForwarder` resource:

* When set to `Managed` (default), the operator actively manages the logging resources to match the configuration defined in the spec.
* When set to `Unmanaged`, the operator does not take any action, allowing you to manually manage the logging components.

[id="validation_6-1_{context}"]
== Validation
Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The `ClusterLogForwarder` resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios.

[id="quick-start_6-1_{context}"]
== Quick start

OpenShift Logging supports two data models:

* ViaQ (General Availability)
* OpenTelemetry (Technology Preview)

You can select either of these data models based on your requirements by configuring the `lokiStack.dataModel` field in the `ClusterLogForwarder`. ViaQ is the default data model when forwarding logs to LokiStack.

[NOTE]
====
In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.
====

include::modules/log6x-quickstart-viaq.adoc[leveloffset=+2]

include::modules/log6x-quickstart-opentelemetry.adoc[leveloffset=+2]
observability/logging/logging-6.1/log6x-clf-6.1.adoc (new file, 118 lines)
@@ -0,0 +1,118 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="log6x-clf-6-1"]
= Configuring log forwarding
:context: logging-6x-6.1

toc::[]

The `ClusterLogForwarder` (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs.

.Key Functions of the ClusterLogForwarder
* Selects log messages using inputs
* Forwards logs to external destinations using outputs
* Filters, transforms, and drops log messages using filters
* Defines log forwarding pipelines connecting inputs, filters, and outputs

include::modules/log6x-collection-setup.adoc[leveloffset=+1]

[id="modifying-log-level_6-1_{context}"]
== Modifying log level in collector

To modify the log level in the collector, set the `observability.openshift.io/log-level` annotation to one of `trace`, `debug`, `info`, `warn`, `error`, or `off`.

.Example log level annotation
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  annotations:
    observability.openshift.io/log-level: debug
# ...
----
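
Instead of editing the CR manually, the annotation can also be applied with `oc annotate`; this sketch assumes the CR is named `collector` in the `openshift-logging` namespace:

[source,terminal]
----
$ oc annotate clusterlogforwarder/collector -n openshift-logging \
    observability.openshift.io/log-level=debug --overwrite
----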

[id="managing-the-operator_6-1_{context}"]
== Managing the Operator

The `ClusterLogForwarder` resource has a `managementState` field that controls whether the operator actively manages its resources or leaves them unmanaged:

Managed:: (default) The operator will drive the logging resources to match the desired state in the CLF spec.

Unmanaged:: The operator will not take any action related to the logging components.

This allows administrators to temporarily pause log forwarding by setting `managementState` to `Unmanaged`, as in the sketch that follows.
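
A minimal sketch of pausing the operator; the CR name `collector` is an assumption:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
spec:
  managementState: Unmanaged # set back to Managed to resume reconciliation
# ...
----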

[id="clf-structure_6-1_{context}"]
== Structure of the ClusterLogForwarder

The CLF has a `spec` section that contains the following key components:

Inputs:: Select log messages to be forwarded. Built-in input types `application`, `infrastructure`, and `audit` forward logs from different parts of the cluster. You can also define custom inputs.

Outputs:: Define destinations to forward logs to. Each output has a unique name and type-specific configuration.

Pipelines:: Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output, and filter names.

Filters:: Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline.

[id="clf-inputs_6-1_{context}"]
=== Inputs

Inputs are configured in an array under `spec.inputs`. There are three built-in input types:

application:: Selects logs from all application containers, excluding those in infrastructure namespaces such as `default`, `openshift`, or any namespace with the `kube-` or `openshift-` prefix.

infrastructure:: Selects logs from infrastructure components running in the `default` and `openshift` namespaces, and node logs.

audit:: Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, OVN audit logs, and node audit logs from auditd.

Users can define custom inputs of type `application` that select logs from specific namespaces or by using pod labels, as in the sketch that follows.
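
The following is a sketch of such a custom input; the input, namespace, label, and output names are hypothetical:

[source,yaml]
----
# ...
spec:
  inputs:
  - name: my-app-input            # hypothetical custom input
    type: application
    application:
      includes:
      - namespace: my-namespace   # collect only from this namespace
      selector:
        matchLabels:
          app: my-app             # and only from pods carrying this label
  pipelines:
  - name: my-app-pipeline
    inputRefs:
    - my-app-input
    outputRefs:
    - my-output                   # assumes an output named my-output is defined
----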

[id="clf-outputs_6-1_{context}"]
=== Outputs

Outputs are configured in an array under `spec.outputs`. Each output must have a unique name and a type. Supported types are:

azureMonitor:: Forwards logs to Azure Monitor.
cloudwatch:: Forwards logs to AWS CloudWatch.
elasticsearch:: Forwards logs to an external Elasticsearch instance.
googleCloudLogging:: Forwards logs to Google Cloud Logging.
http:: Forwards logs to a generic HTTP endpoint.
kafka:: Forwards logs to a Kafka broker.
loki:: Forwards logs to a Loki logging backend.
lokistack:: Forwards logs to the logging supported combination of Loki and the web proxy with {product-title} authentication integration. LokiStack's proxy uses {product-title} authentication to enforce multi-tenancy.
otlp:: Forwards logs using the OpenTelemetry Protocol.
splunk:: Forwards logs to Splunk.
syslog:: Forwards logs to an external syslog server.

Each output type has its own configuration fields.

include::modules/log6x-configuring-otlp-output.adoc[leveloffset=+1]

[id="clf-pipelines_6-1_{context}"]
=== Pipelines

Pipelines are configured in an array under `spec.pipelines`. Each pipeline must have a unique name and consists of:

inputRefs:: Names of inputs whose logs should be forwarded to this pipeline.
outputRefs:: Names of outputs to send logs to.
filterRefs:: (optional) Names of filters to apply.

The order of `filterRefs` matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters.

[id="clf-filters_6-1_{context}"]
=== Filters

Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them. The sketch that follows shows a filter wired into a pipeline.
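
The following is a sketch of a `drop` filter applied through a pipeline's `filterRefs`; the filter, field, and output names are illustrative assumptions:

[source,yaml]
----
# ...
spec:
  filters:
  - name: drop-debug-logs         # hypothetical filter name
    type: drop
    drop:
    - test:
      - field: .level             # drop any record whose level field matches "debug"
        matches: "debug"
  pipelines:
  - name: app-logs
    inputRefs:
    - application
    filterRefs:
    - drop-debug-logs             # applied before records reach the output
    outputRefs:
    - my-output                   # assumes an output named my-output is defined
----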

Administrators can configure the following types of filters:

include::modules/log6x-multiline-except.adoc[leveloffset=+2]
include::modules/log6x-content-filter-drop-records.adoc[leveloffset=+2]
include::modules/log6x-audit-log-filtering.adoc[leveloffset=+2]
include::modules/log6x-input-spec-filter-labels-expressions.adoc[leveloffset=+2]
include::modules/log6x-content-filter-prune-records.adoc[leveloffset=+2]
include::modules/log6x-input-spec-filter-audit-infrastructure.adoc[leveloffset=+1]
include::modules/log6x-input-spec-filter-namespace-container.adoc[leveloffset=+1]
@@ -0,0 +1,148 @@
:_mod-docs-content-type: ASSEMBLY
[id="log6x-configuring-lokistack-otlp-6-1"]
= OTLP data ingestion in Loki
include::_attributes/common-attributes.adoc[]
:context: log6x-configuring-lokistack-otlp-6-1

toc::[]

Logging 6.1 enables an API endpoint using the OpenTelemetry Protocol (OTLP). Because OTLP is a standardized format not specifically designed for Loki, it requires additional configuration on Loki's side to map OpenTelemetry's data format to Loki's data model. OTLP lacks concepts such as _stream labels_ or _structured metadata_. Instead, OTLP provides metadata about log entries as *attributes*, grouped into three categories:

* Resource
* Scope
* Log

This allows metadata to be set for multiple entries simultaneously or individually as needed.

include::modules/log6x-configuring-lokistack-otlp-data-ingestion.adoc[leveloffset=+1]

[id="attribute-mapping_{context}"]
== Attribute mapping

When the {loki-op} is set to `openshift-logging` mode, it automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with Loki's stream labels and structured metadata.

For typical setups, these default mappings should be sufficient. However, you might need to customize attribute mapping in the following cases:

* Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki.
* Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the {logging} process.

[IMPORTANT]
====
Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki.
====

[id="custom-attribute-mapping-for-openshift_{context}"]
=== Custom attribute mapping for OpenShift

When using the {loki-op} in `openshift-logging` mode, attribute mappings follow OpenShift defaults, but custom mappings can be configured to adjust these. Custom mappings allow further configuration to meet specific needs.

In `openshift-logging` mode, custom attribute mappings can be configured globally for all tenants or for individual tenants as needed. When custom mappings are defined, they are appended to the OpenShift defaults. If default recommended labels are not required, they can be disabled in the tenant configuration.

[NOTE]
====
A major difference between the {loki-op} and Loki itself lies in inheritance handling. Loki only copies `default_resource_attributes_as_index_labels` to tenants by default, while the {loki-op} applies the entire global configuration to each tenant in `openshift-logging` mode.
====

Within `LokiStack`, attribute mapping configuration is managed through the `limits` setting:

[source,yaml]
----
# ...
spec:
  limits:
    global:
      otlp: {} # <1>
    tenants:
      application:
        otlp: {} # <2>
----
<1> Global OTLP attribute configuration.
<2> OTLP attribute configuration for the `application` tenant within `openshift-logging` mode.

[NOTE]
====
Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement.
====

Stream labels derive only from resource-level attributes, which the `LokiStack` resource structure reflects:

[source,yaml]
----
# ...
spec:
  limits:
    global:
      otlp:
        streamLabels:
          resourceAttributes:
          - name: "k8s.namespace.name"
          - name: "k8s.pod.name"
          - name: "k8s.container.name"
----

Structured metadata, in contrast, can be generated from resource, scope, or log-level attributes:

[source,yaml]
----
# ...
spec:
  limits:
    global:
      otlp:
        streamLabels:
# ...
        structuredMetadata:
          resourceAttributes:
          - name: "process.command_line"
          - name: "k8s\\.pod\\.labels\\..+"
            regex: true
          scopeAttributes:
          - name: "service.name"
          logAttributes:
          - name: "http.route"
----

[TIP]
====
Use regular expressions by setting `regex: true` for attribute names when mapping similar attributes in Loki.
====

[IMPORTANT]
====
Avoid using regular expressions for stream labels, as this can increase data volume.
====

[id="customizing-openshift-defaults_{context}"]
=== Customizing OpenShift defaults

In `openshift-logging` mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled *recommended*, might be disabled if performance is impacted.

When using the `openshift-logging` mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations, as in the sketch that follows.
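
For example, a sketch of retaining one extra attribute from a custom collector as structured metadata, merged on top of the OpenShift defaults; the attribute name is hypothetical:

[source,yaml]
----
# ...
spec:
  tenants:
    mode: openshift-logging
  limits:
    global:
      otlp:
        structuredMetadata:
          resourceAttributes:
          - name: "mycompany.subsystem" # hypothetical custom attribute, appended to the defaults
----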

[id="removing-recommended-attributes_{context}"]
=== Removing recommended attributes

To reduce default attributes in `openshift-logging` mode, disable recommended attributes:

[source,yaml]
----
# ...
spec:
  tenants:
    mode: openshift-logging
    openshift:
      otlp:
        disableRecommendedAttributes: true # <1>
----
<1> Set `disableRecommendedAttributes: true` to remove recommended attributes, which limits default attributes to the *required attributes*.

[NOTE]
====
This option is beneficial if the default attributes cause performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries.
====

[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* link:https://grafana.com/docs/loki/latest/get-started/labels/[Loki labels]
* link:https://grafana.com/docs/loki/latest/get-started/labels/structured-metadata/[Structured metadata]
* link:https://opentelemetry.io/docs/specs/otel/common/#attribute[OpenTelemetry attribute]
observability/logging/logging-6.1/log6x-loki-6.1.adoc (new file, 46 lines)
@@ -0,0 +1,46 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="log6x-loki-6-1"]
= Storing logs with LokiStack
:context: log6x-loki-6.1

toc::[]

You can configure a `LokiStack` CR to store application, audit, and infrastructure-related logs.

include::snippets/log6x-loki-statement-snip.adoc[leveloffset=+1]

include::modules/log6x-loki-sizing.adoc[leveloffset=+1]

[id="prerequisites-6-1_{context}"]
== Prerequisites

* You have installed the {loki-op} by using the CLI or web console.
* You have a `serviceAccount` in the same namespace in which you create the `ClusterLogForwarder`.
* The `serviceAccount` is assigned the `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles.

[id="setup-6-1_{context}"]
== Core Setup and Configuration
*Role-based access controls, basic monitoring, and pod placement to deploy Loki.*

include::modules/log6x-loki-rbac-rules-perms.adoc[leveloffset=+1]
include::modules/log6x-enabling-loki-alerts.adoc[leveloffset=+1]
include::modules/log6x-loki-memberlist-ip.adoc[leveloffset=+1]
include::modules/log6x-loki-retention.adoc[leveloffset=+1]
include::modules/log6x-loki-pod-placement.adoc[leveloffset=+1]

[id="performance-6-1_{context}"]
== Enhanced Reliability and Performance
*Configurations to ensure Loki's reliability and efficiency in production.*

include::modules/log6x-identity-federation.adoc[leveloffset=+1]
include::modules/log6x-loki-reliability-hardening.adoc[leveloffset=+1]
include::modules/log6x-loki-restart-hardening.adoc[leveloffset=+1]

[id="advanced-6-1_{context}"]
== Advanced Deployment and Scalability
*Specialized configurations for high availability, scalability, and error handling.*

include::modules/log6x-loki-zone-aware-rep.adoc[leveloffset=+1]
include::modules/log6x-loki-zone-fail-recovery.adoc[leveloffset=+1]
include::modules/log6x-loki-rate-limit-errors.adoc[leveloffset=+1]
@@ -0,0 +1,383 @@
:_mod-docs-content-type: ASSEMBLY
[id="log6x-opentelemetry-data-model-6-1"]
= OpenTelemetry data model
include::_attributes/common-attributes.adoc[]
:context: log6x-opentelemetry-data-model-6-1

toc::[]

This document outlines the protocol and semantic conventions for OpenTelemetry support in {logging-uc} 6.1.

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

[id="forwarding-and-ingestion-protocol_{context}"]
== Forwarding and ingestion protocol

Red Hat OpenShift {logging-uc} collects and forwards logs to OpenTelemetry endpoints using the link:https://opentelemetry.io/docs/specs/otlp/[OTLP Specification]. OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources.

[id="semantic-conventions_{context}"]
== Semantic conventions

The log collector in this solution gathers the following log streams:

* Container logs
* Cluster node journal logs
* Cluster node auditd logs
* Kubernetes and OpenShift API server logs
* OpenShift Virtual Network (OVN) logs

You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as `container_name`, `cluster_id`, `pod_name`, `namespace`, and possibly `deployment` or `app_name`. These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data.

In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage.

The following sections define the attributes that are generally forwarded.

[id="log-entry-structure_{context}"]
=== Log entry structure

All log streams include the following link:https://opentelemetry.io/docs/specs/otel/logs/data-model/#log-and-event-record-definition[log data] fields.

The *Applicable Sources* column indicates which log sources each field applies to:

* `all`: This field is present in all logs.
* `container`: This field is present in Kubernetes container logs, both application and infrastructure.
* `audit`: This field is present in Kubernetes, OpenShift API, and OVN logs.
* `auditd`: This field is present in node auditd logs.
* `journal`: This field is present in node journal logs.

[cols="1,1,1", options="header"]
|===
|Name |Applicable Sources |Comment

|`body`
|all
|

|`observedTimeUnixNano`
|all
|

|`timeUnixNano`
|all
|

|`severityText`
|container, journal
|

|`attributes`
|all
|(Optional) Present when forwarding stream specific attributes
|===

[id="attributes_{context}"]
=== Attributes

Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table.

The *Location* column specifies the type of attribute:

* `resource`: Indicates a resource attribute
* `scope`: Indicates a scope attribute
* `log`: Indicates a log attribute

The *Storage* column indicates whether the attribute is stored in a LokiStack using the default `openshift-logging` mode and specifies where the attribute is stored:

* `stream label`:
** Enables efficient filtering and querying based on specific labels.
** Can be labeled as `required` if the {loki-op} enforces this attribute in the configuration.
* `structured metadata`:
** Allows for detailed filtering and storage of key-value pairs.
** Enables users to use direct labels for streamlined queries without requiring JSON parsing.

With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries.

[cols="1,1,1,1,1", options="header"]
|===
|Name |Location |Applicable Sources |Storage (LokiStack) |Comment

|`log_source`
|resource
|all
|required stream label
|*(DEPRECATED)* Compatibility attribute, contains same information as `openshift.log.source`

|`log_type`
|resource
|all
|required stream label
|*(DEPRECATED)* Compatibility attribute, contains same information as `openshift.log.type`

|`kubernetes.container_name`
|resource
|container
|stream label
|*(DEPRECATED)* Compatibility attribute, contains same information as `k8s.container.name`

|`kubernetes.host`
|resource
|all
|stream label
|*(DEPRECATED)* Compatibility attribute, contains same information as `k8s.node.name`

|`kubernetes.namespace_name`
|resource
|container
|required stream label
|*(DEPRECATED)* Compatibility attribute, contains same information as `k8s.namespace.name`

|`kubernetes.pod_name`
|resource
|container
|stream label
|*(DEPRECATED)* Compatibility attribute, contains same information as `k8s.pod.name`

|`openshift.cluster_id`
|resource
|all
|
|*(DEPRECATED)* Compatibility attribute, contains same information as `openshift.cluster.uid`

|`level`
|log
|container, journal
|
|*(DEPRECATED)* Compatibility attribute, contains same information as `severityText`

|`openshift.cluster.uid`
|resource
|all
|required stream label
|

|`openshift.log.source`
|resource
|all
|required stream label
|

|`openshift.log.type`
|resource
|all
|required stream label
|

|`openshift.labels.*`
|resource
|all
|structured metadata
|

|`k8s.node.name`
|resource
|all
|stream label
|

|`k8s.namespace.name`
|resource
|container
|required stream label
|

|`k8s.container.name`
|resource
|container
|stream label
|

|`k8s.pod.labels.*`
|resource
|container
|structured metadata
|

|`k8s.pod.name`
|resource
|container
|stream label
|

|`k8s.pod.uid`
|resource
|container
|structured metadata
|

|`k8s.cronjob.name`
|resource
|container
|stream label
|Conditionally forwarded based on creator of pod

|`k8s.daemonset.name`
|resource
|container
|stream label
|Conditionally forwarded based on creator of pod

|`k8s.deployment.name`
|resource
|container
|stream label
|Conditionally forwarded based on creator of pod

|`k8s.job.name`
|resource
|container
|stream label
|Conditionally forwarded based on creator of pod

|`k8s.replicaset.name`
|resource
|container
|structured metadata
|Conditionally forwarded based on creator of pod

|`k8s.statefulset.name`
|resource
|container
|stream label
|Conditionally forwarded based on creator of pod

|`log.iostream`
|log
|container
|structured metadata
|

|`k8s.audit.event.level`
|log
|audit
|structured metadata
|

|`k8s.audit.event.stage`
|log
|audit
|structured metadata
|

|`k8s.audit.event.user_agent`
|log
|audit
|structured metadata
|

|`k8s.audit.event.request.uri`
|log
|audit
|structured metadata
|

|`k8s.audit.event.response.code`
|log
|audit
|structured metadata
|

|`k8s.audit.event.annotation.*`
|log
|audit
|structured metadata
|

|`k8s.audit.event.object_ref.resource`
|log
|audit
|structured metadata
|

|`k8s.audit.event.object_ref.name`
|log
|audit
|structured metadata
|

|`k8s.audit.event.object_ref.namespace`
|log
|audit
|structured metadata
|

|`k8s.audit.event.object_ref.api_group`
|log
|audit
|structured metadata
|

|`k8s.audit.event.object_ref.api_version`
|log
|audit
|structured metadata
|

|`k8s.user.username`
|log
|audit
|structured metadata
|

|`k8s.user.groups`
|log
|audit
|structured metadata
|

|`process.executable.name`
|resource
|journal
|structured metadata
|

|`process.executable.path`
|resource
|journal
|structured metadata
|

|`process.command_line`
|resource
|journal
|structured metadata
|

|`process.pid`
|resource
|journal
|structured metadata
|

|`service.name`
|resource
|journal
|stream label
|

|`systemd.t.*`
|log
|journal
|structured metadata
|

|`systemd.u.*`
|log
|journal
|structured metadata
|
|===

[NOTE]
====
Attributes marked as *Compatibility attribute* support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases.
====

Loki changes the attribute names when persisting them to storage. The names are lowercased, and all characters in the set (`.`, `/`, `-`) are replaced by underscores (`_`). For example, `k8s.namespace.name` becomes `k8s_namespace_name`.
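
For example, a LogQL query against these logs must use the normalized names. The following is a sketch with hypothetical label values:

[source,text]
----
{k8s_namespace_name="my-namespace", k8s_container_name="server"} | log_iostream="stderr"
----

Here `k8s_namespace_name` and `k8s_container_name` are stream labels, while the `| log_iostream` label filter matches on structured metadata.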

[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* link:https://opentelemetry.io/docs/specs/semconv/[Semantic Conventions]
* link:https://opentelemetry.io/docs/specs/otel/logs/data-model/[Logs Data Model]
* link:https://opentelemetry.io/docs/specs/semconv/general/logs/[General Logs Attributes]
@@ -0,0 +1,9 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="log6x-release-notes-6-1"]
= Logging 6.1
:context: logging-6x-6.1

toc::[]

include::modules/log6x-6-1-0-rn.adoc[leveloffset=+1]
observability/logging/logging-6.1/log6x-visual-6.1.adoc (new file, 11 lines)
@@ -0,0 +1,11 @@
:_mod-docs-content-type: ASSEMBLY
[id="log6x-visual-6-1"]
= Visualization for logging
include::_attributes/common-attributes.adoc[]
:context: logging-6x-6.1

toc::[]

Visualization for logging is provided by deploying the xref:../../../observability/cluster_observability_operator/ui_plugins/logging-ui-plugin.adoc#logging-ui-plugin[Logging UI Plugin] of the xref:../../../observability/cluster_observability_operator/cluster-observability-operator-overview.adoc#cluster-observability-operator-overview[Cluster Observability Operator], which requires Operator installation.

include::snippets/logging-support-exception-for-cluster-observability-operator-due-to-logging-ui-plugin.adoc[]
observability/logging/logging-6.1/modules (new symbolic link)
@@ -0,0 +1 @@
../../../modules/
observability/logging/logging-6.1/snippets (new symbolic link)
@@ -0,0 +1 @@
../../../snippets/