mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

OBSDOCS-460: Clean up collector docs

This commit is contained in:
Ashleigh Brennan
2023-10-20 10:29:06 -05:00
committed by openshift-cherrypick-robot
parent 599599a82e
commit 7c02bf9985
8 changed files with 98 additions and 218 deletions

View File

@@ -46,10 +46,6 @@ include::modules/cluster-logging-export-fields.adoc[leveloffset=+2]
For information, see xref:../logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields[About exporting fields].
include::modules/cluster-logging-about-collector.adoc[leveloffset=+2]
For information, see xref:../logging/log_collection_forwarding/cluster-logging-collector.adoc#cluster-logging-collector[Configuring the logging collector].
include::modules/cluster-logging-about-logstore.adoc[leveloffset=+2]
For information, see xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-log-store[Configuring the log store].
@@ -61,5 +57,3 @@ For information, see xref:../logging/config/cluster-logging-visualizer.adoc#clus
include::modules/cluster-logging-eventrouter-about.adoc[leveloffset=+2]
For information, see xref:../logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[Collecting and storing Kubernetes events].
include::modules/cluster-logging-feature-reference.adoc[leveloffset=+1]

View File

@@ -7,6 +7,19 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
The Cluster Logging Operator deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector.
include::snippets/logging-fluentd-dep-snip.adoc[]
include::modules/about-log-collection.adoc[leveloffset=+1]
include::modules/logging-vector-fluentd-feature-comparison.adoc[leveloffset=+2]
include::modules/log-forwarding-collector-outputs.adoc[leveloffset=+2]
[id="log-forwarding-about-clf"]
== Log forwarding
Administrators can create `ClusterLogForwarder` resources that specify which logs are collected, how they are transformed, and where they are forwarded to.
`ClusterLogForwarder` resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported, so log forwarders can be configured to send logs securely.
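As a minimal sketch of the resource described above, the following `ClusterLogForwarder` CR forwards application logs to an external Elasticsearch endpoint over TLS. The output name, URL, and secret name are illustrative assumptions, not values from this documentation:

.Example ClusterLogForwarder CR (illustrative)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es # hypothetical output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200 # hypothetical endpoint
    secret:
      name: es-secret # hypothetical secret holding TLS certificates
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application
    outputRefs:
    - external-es
----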

View File

@@ -6,10 +6,4 @@ include::_attributes/common-attributes.adoc[]
toc::[]
:leveloffset: +1
include::modules/logging-feature-reference-5.6.adoc[]
include::modules/logging-5.6-api-ref.adoc[]
:leveloffset: -1
include::modules/logging-5.6-api-ref.adoc[leveloffset=+1]

View File

@@ -0,0 +1,51 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/log-forwarding.adoc
:_mod-docs-content-type: CONCEPT
[id="about-log-collection_{context}"]
= Log collection
The log collector is a daemon set that deploys pods to each {product-title} node to collect container and node logs.
By default, the log collector uses the following sources:
* journald log messages for system and infrastructure logs from the operating system, the container runtime, and {product-title}.
* `/var/log/containers/*.log` for all container logs.
If you configure the log collector to collect audit logs, it collects them from `/var/log/audit/audit.log`.
The log collector collects the logs from these sources and forwards them internally or externally depending on your {logging} configuration.
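To illustrate the audit log behavior described above, a pipeline that names `audit` as an input enables collection from `/var/log/audit/audit.log`. This is a minimal sketch; the pipeline name is a hypothetical value:

.Example pipeline that collects audit logs (illustrative)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: audit-logs # hypothetical pipeline name
    inputRefs:
    - audit # enables collection from /var/log/audit/audit.log
    outputRefs:
    - default # forwards to the default internal log store
----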
[id="about-log-collectors-types_{context}"]
== Log collector types
link:https://vector.dev/docs/about/what-is-vector/[Vector] is a log collector offered as an alternative to Fluentd for the {logging}.
You can configure which logging collector type your cluster uses by modifying the `ClusterLogging` custom resource (CR) `collection` spec:
.Example ClusterLogging CR that configures Vector as the collector
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
collection:
logs:
type: vector
vector: {}
# ...
----
[id="about-log-collectors-limitations_{context}"]
== Log collection limitations
The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered _best effort_.
[IMPORTANT]
====
The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source.
====

View File

@@ -1,34 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging.adoc
:_mod-docs-content-type: CONCEPT
[id="cluster-logging-about-collector_{context}"]
= About the logging collector
The Cluster Logging Operator deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the Fluentd collector, and the Vector collector.
[IMPORTANT]
====
The Fluentd collector is now deprecated and will be removed in a future logging release.
====
The {logging-title} collects container and node logs.
By default, the log collector uses the following sources:
* journald for all system logs
* `/var/log/containers/*.log` for all container logs
If you configure the log collector to collect audit logs, it gets them from `/var/log/audit/audit.log`.
The logging collector is a daemon set that deploys pods to each {product-title} node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and {product-title}. Application logs are generated by the CRI-O container engine. Fluentd collects the logs from these sources and forwards them internally or externally as you configure in {product-title}.
The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered *best effort*.
[IMPORTANT]
====
The available container runtimes provide minimal information to identify the
source of log messages and do not guarantee unique individual log
messages or that these messages can be traced to their source.
====

View File

@@ -1,159 +0,0 @@
// Module is included in the following assemblies:
//cluster-logging-loki.adoc
:_mod-docs-content-type: REFERENCE
[id="cluster-logging-about-vector_{context}"]
= About Vector
Vector is a log collector offered as an alternative to Fluentd for the {logging}.
The following outputs are supported:
* `elasticsearch`. An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
* `kafka`. A Kafka broker. The `kafka` output can use an unsecured or TLS connection.
* `loki`. Loki, a horizontally scalable, highly available, multitenant log aggregation system.
[id="cluster-logging-vector-enable_{context}"]
== Enabling Vector
Use the following steps to enable Vector on your {product-title} cluster.
.Procedure
. Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
+
[source,terminal]
----
$ oc -n openshift-logging edit ClusterLogging instance
----
. Add a `logging.openshift.io/preview-vector-collector: enabled` annotation to the `ClusterLogging` custom resource (CR).
. Add `vector` as a collection type to the `ClusterLogging` custom resource (CR).
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
annotations:
logging.openshift.io/preview-vector-collector: enabled
spec:
collection:
logs:
type: "vector"
vector: {}
----
[role="_additional-resources"]
.Additional resources
* link:https://vector.dev/docs/about/what-is-vector/[Vector Documentation]
== Collector features
.Log Sources
[options="header"]
|===============================================================
| Feature | Fluentd | Vector
| App container logs | ✓ | ✓
| App-specific routing | ✓ | ✓
| App-specific routing by namespace | ✓ | ✓
| Infra container logs | ✓ | ✓
| Infra journal logs | ✓ | ✓
| Kube API audit logs | ✓ | ✓
| OpenShift API audit logs | ✓ | ✓
| Open Virtual Network (OVN) audit logs| ✓ | ✓
|===============================================================
.Outputs
[options="header"]
|==========================================================
| Feature | Fluentd | Vector
| Elasticsearch v5-v7 | ✓ | ✓
| Fluent forward | ✓ |
| Syslog RFC3164 | ✓ | ✓ (Logging 5.7+)
| Syslog RFC5424 | ✓ | ✓ (Logging 5.7+)
| Kafka | ✓ | ✓
| Cloudwatch | ✓ | ✓
| Loki | ✓ | ✓
| HTTP | ✓ | ✓ (Logging 5.7+)
|==========================================================
.Authorization and Authentication
[options="header"]
|=================================================================
| Feature | Fluentd | Vector
| Elasticsearch certificates | ✓ | ✓
| Elasticsearch username / password | ✓ | ✓
| Cloudwatch keys | ✓ | ✓
| Cloudwatch STS | ✓ | ✓
| Kafka certificates | ✓ | ✓
| Kafka username / password | ✓ | ✓
| Kafka SASL | ✓ | ✓
| Loki bearer token | ✓ | ✓
|=================================================================
.Normalizations and Transformations
[options="header"]
|============================================================================
| Feature | Fluentd | Vector
| Viaq data model - app | ✓ | ✓
| Viaq data model - infra | ✓ | ✓
| Viaq data model - infra(journal) | ✓ | ✓
| Viaq data model - Linux audit | ✓ | ✓
| Viaq data model - kube-apiserver audit | ✓ | ✓
| Viaq data model - OpenShift API audit | ✓ | ✓
| Viaq data model - OVN | ✓ | ✓
| Loglevel Normalization | ✓ | ✓
| JSON parsing | ✓ | ✓
| Structured Index | ✓ | ✓
| Multiline error detection | ✓ | ✓
| Multicontainer / split indices | ✓ | ✓
| Flatten labels | ✓ | ✓
| CLF static labels | ✓ | ✓
|============================================================================
.Tuning
[options="header"]
|==========================================================
| Feature | Fluentd | Vector
| Fluentd readlinelimit | ✓ |
| Fluentd buffer | ✓ |
| - chunklimitsize | ✓ |
| - totallimitsize | ✓ |
| - overflowaction | ✓ |
| - flushthreadcount | ✓ |
| - flushmode | ✓ |
| - flushinterval | ✓ |
| - retrywait | ✓ |
| - retrytype | ✓ |
| - retrymaxinterval | ✓ |
| - retrytimeout | ✓ |
|==========================================================
.Visibility
[options="header"]
|=====================================================
| Feature | Fluentd | Vector
| Metrics | ✓ | ✓
| Dashboard | ✓ | ✓
| Alerts | ✓ |
|=====================================================
.Miscellaneous
[options="header"]
|===========================================================
| Feature | Fluentd | Vector
| Global proxy support | ✓ | ✓
| x86 support | ✓ | ✓
| ARM support | ✓ | ✓
ifndef::openshift-rosa[]
| {ibmpowerProductName} support | ✓ | ✓
| {ibmzProductName} support | ✓ | ✓
endif::openshift-rosa[]
| IPv6 support | ✓ | ✓
| Log event buffering | ✓ |
| Disconnected Cluster | ✓ | ✓
|===========================================================

View File

@@ -0,0 +1,26 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/log-forwarding.adoc
:_mod-docs-content-type: REFERENCE
[id="log-forwarding-collector-outputs_{context}"]
= Collector outputs
The following collector outputs are supported:
.Supported outputs
[options="header"]
|==========================================================
| Feature | Fluentd | Vector
| Elasticsearch v6-v8 | ✓ | ✓
| Fluent forward | ✓ |
| Syslog RFC3164 | ✓ | ✓ (Logging 5.7+)
| Syslog RFC5424 | ✓ | ✓ (Logging 5.7+)
| Kafka | ✓ | ✓
| Cloudwatch | ✓ | ✓
| Cloudwatch STS | ✓ | ✓
| Loki | ✓ | ✓
| HTTP | ✓ | ✓ (Logging 5.7+)
| Google Cloud Logging | ✓ | ✓
| Splunk | | ✓ (Logging 5.6+)
|==========================================================

View File

@@ -1,11 +1,10 @@
// Module is included in the following assemblies:
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/log-forwarding.adoc
:_mod-docs-content-type: REFERENCE
[id="logging-5-6-collector-ref_{context}"]
= Collector features
include::snippets/logging-outputs-5.6-snip.adoc[]
[id="logging-vector-fluentd-feature-comparison_{context}"]
= Log collector features by type
.Log Sources
[options="header"]
@@ -49,7 +48,7 @@ include::snippets/logging-outputs-5.6-snip.adoc[]
| Loglevel Normalization | ✓ | ✓
| JSON parsing | ✓ | ✓
| Structured Index | ✓ | ✓
| Multiline error detection | ✓ |
| Multiline error detection | ✓ | ✓
| Multicontainer / split indices | ✓ | ✓
| Flatten labels | ✓ | ✓
| CLF static labels | ✓ | ✓
@@ -81,6 +80,7 @@ include::snippets/logging-outputs-5.6-snip.adoc[]
| Dashboard | ✓ | ✓
| Alerts | ✓ |
|=====================================================
// Alerts might need to be updated for Vector in 5.7+.
.Miscellaneous
[options="header"]
@@ -95,8 +95,3 @@ include::snippets/logging-outputs-5.6-snip.adoc[]
| Log event buffering | ✓ |
| Disconnected Cluster | ✓ | ✓
|===========================================================
[role="_additional-resources"]
.Additional resources
* link:https://vector.dev/docs/about/what-is-vector/[Vector Documentation]