mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OBSDOCS-104: Update supported logging outputs documentation

This commit is contained in:
Ashleigh Brennan
2023-12-05 11:04:02 -06:00
committed by openshift-cherrypick-robot
parent e5c882fc69
commit 7f1aa9988a
10 changed files with 97 additions and 405 deletions


@@ -1,13 +1,17 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="logging-output-types"]
= Log output types
:context: logging-output-types
toc::[]
Log outputs specified in the `ClusterLogForwarder` CR can be any of the following types:
Outputs define the destination where logs are sent to from a log forwarder. You can configure multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
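As a sketch of the idea, the following `ClusterLogForwarder` CR configures two outputs of different types and routes application logs to both. The output names and URLs are hypothetical placeholders:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: loki-external # <1>
    type: loki
    url: https://loki.example.com:3100
  - name: kafka-app # <1>
    type: kafka
    url: tls://broker.example.com:9093/app-topic
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application
    outputRefs: # <2>
    - loki-external
    - kafka-app
----
<1> Hypothetical output names and endpoints; replace with values for your environment.
<2> A single pipeline can reference multiple outputs.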
include::modules/supported-log-outputs.adoc[leveloffset=+1]
[id="logging-output-types-descriptions"]
== Output type descriptions
`default`:: The on-cluster, Red{nbsp}Hat managed log store. You are not required to configure the default output.
+
@@ -18,7 +22,7 @@ If you configure a `default` output, you receive an error message, because the `
`loki`:: Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
`kafka`:: A Kafka broker. The `kafka` output can use a TCP or TLS connection.
`elasticsearch`:: An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
`fluentdForward`:: An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS.
`fluentdForward`:: An external log aggregation solution that supports Fluentd. This option uses the Fluentd `forward` protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a `shared_key` field in a secret. Shared-key authentication can be used with or without TLS.
+
[IMPORTANT]
====
@@ -26,12 +30,3 @@ The `fluentdForward` output is only supported if you are using the Fluentd colle
====
`syslog`:: An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
`cloudwatch`:: Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
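For example, shared-key authentication for a `fluentdForward` output can be provided through a secret that contains a `shared_key` field. This minimal sketch assumes a hypothetical secret name, key value, and endpoint:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: fluentd-forward-secret # <1>
  namespace: openshift-logging
stringData:
  shared_key: "example-shared-key" # <2>
----
<1> Hypothetical secret name.
<2> Placeholder value; use the shared key configured on the receiving Fluentd server.

The output then references the secret by name:

[source,yaml]
----
outputs:
- name: fluentd-remote
  type: fluentdForward
  url: tls://fluentd.example.com:24224
  secret:
    name: fluentd-forward-secret
----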
// supported outputs by version
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-7.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-6.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-5.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-4.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-3.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc[leveloffset=+1]


@@ -4,24 +4,6 @@
To send logs to specific endpoints inside and outside your {product-title} cluster, you specify a combination of _outputs_ and _pipelines_ in a `ClusterLogForwarder` custom resource (CR). You can also use _inputs_ to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes _Secret_ object.
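As an illustration of how these pieces fit together, the following sketch forwards the application logs of one project to an external Elasticsearch instance. The input, output, and project names are hypothetical:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: my-app-input # <1>
    application:
      namespaces:
      - my-project
  outputs:
  - name: remote-es # <2>
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
  pipelines:
  - name: my-app-pipeline
    inputRefs:
    - my-app-input
    outputRefs:
    - remote-es
----
<1> A hypothetical input that selects application logs from the `my-project` namespace.
<2> A hypothetical output pointing at an external Elasticsearch endpoint.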
_output_:: The destination for log data that you define, or where you want the logs sent. An output can be one of the following types:
+
--
* `elasticsearch`. An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
* `fluentdForward`. An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS.
* `syslog`. An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
* `cloudwatch`. Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
* `loki`. Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
* `kafka`. A Kafka broker. The `kafka` output can use a TCP or TLS connection.
* `default`. The internal {product-title} Elasticsearch instance. You are not required to configure the default output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the {clo}.
--
+
_pipeline_:: Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
+
--


@@ -1,53 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-1_{context}"]
= Supported log data output types in OpenShift Logging 5.1
Red Hat OpenShift Logging 5.1 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
[options="header"]
|====
| Output types | Protocols | Tested with
| elasticsearch
| elasticsearch
a| Elasticsearch 6.8.1
Elasticsearch 6.8.4
Elasticsearch 7.12.2
| fluentdForward
| fluentd forward v1
a| fluentd 1.7.4
logstash 7.10.1
| kafka
| kafka 0.11
a| kafka 2.4.1
kafka 2.7.0
| syslog
| RFC-3164, RFC-5424
| rsyslog-8.39.0
|====
// Note to tech writer, validate these items against the corresponding line of the test configuration file that Red Hat OpenShift Logging 5.0 uses: https://github.com/openshift/origin-aggregated-logging/blob/release-5.0/fluentd/Gemfile.lock
// This file is the authoritative source of information about which items and versions Red Hat tests and supports.
// According to this link:https://github.com/zendesk/ruby-kafka#compatibility[Zendesk compatibility list for ruby-kafka], the fluent-plugin-kafka plugin supports Kafka version 0.11.
// Logstash support is according to https://github.com/openshift/cluster-logging-operator/blob/master/test/functional/outputs/forward_to_logstash_test.go#L37
[NOTE]
====
Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
====
//ENG-Feedback: How can we reformat this to accurately reflect 5.4?


@@ -1,55 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-2_{context}"]
= Supported log data output types in OpenShift Logging 5.2
Red Hat OpenShift Logging 5.2 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
[options="header"]
|====
| Output types | Protocols | Tested with
| Amazon CloudWatch
| REST over HTTPS
| The current version of Amazon CloudWatch
| elasticsearch
| elasticsearch
a| Elasticsearch 6.8.1
Elasticsearch 6.8.4
Elasticsearch 7.12.2
| fluentdForward
| fluentd forward v1
a| fluentd 1.7.4
logstash 7.10.1
| Loki
| REST over HTTP and HTTPS
| Loki 2.3.0 deployed on OCP and Grafana Labs
| kafka
| kafka 0.11
a| kafka 2.4.1
kafka 2.7.0
| syslog
| RFC-3164, RFC-5424
| rsyslog-8.39.0
|====
// Note to tech writer, validate these items against the corresponding line of the test configuration file that Red Hat OpenShift Logging 5.0 uses: https://github.com/openshift/origin-aggregated-logging/blob/release-5.0/fluentd/Gemfile.lock
// This file is the authoritative source of information about which items and versions Red Hat tests and supports.
// According to this link:https://github.com/zendesk/ruby-kafka#compatibility[Zendesk compatibility list for ruby-kafka], the fluent-plugin-kafka plugin supports Kafka version 0.11.
// Logstash support is according to https://github.com/openshift/cluster-logging-operator/blob/master/test/functional/outputs/forward_to_logstash_test.go#L37


@@ -1,57 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-3_{context}"]
= Supported log data output types in OpenShift Logging 5.3
Red Hat OpenShift Logging 5.3 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
[options="header"]
|====
| Output types | Protocols | Tested with
| Amazon CloudWatch
| REST over HTTPS
| The current version of Amazon CloudWatch
| elasticsearch
| elasticsearch
a| Elasticsearch 7.10.1
| fluentdForward
| fluentd forward v1
a| fluentd 1.7.4
logstash 7.10.1
| Loki
| REST over HTTP and HTTPS
| Loki 2.2.1 deployed on OCP
| kafka
| kafka 0.11
a| kafka 2.7.0
| syslog
| RFC-3164, RFC-5424
| rsyslog-8.39.0
|====
// Note: validate these items against the corresponding line of the test configuration files that Red Hat OpenShift Logging uses:
//
// cloudwatch https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/functional/outputs/forward_to_cloudwatch_test.go#L18
// elasticsearch https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/functional/outputs/forward_to_elasticsearch_index_test.go#L17
// es fluentd https://github.com/ViaQ/logging-fluentd/blob/release-5.5/fluentd/Gemfile.lock#L55
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.3/Makefile#L23
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/helpers/loki/receiver.go#L25
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.3/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/framework/functional/output_syslog.go#L13


@@ -1,57 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-4_{context}"]
= Supported log data output types in OpenShift Logging 5.4
Red Hat OpenShift Logging 5.4 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
[options="header"]
|====
| Output types | Protocols | Tested with
| Amazon CloudWatch
| REST over HTTPS
| The current version of Amazon CloudWatch
| elasticsearch
| elasticsearch
a| Elasticsearch 7.10.1
| fluentdForward
| fluentd forward v1
a| fluentd 1.14.5
logstash 7.10.1
| Loki
| REST over HTTP and HTTPS
| Loki 2.2.1 deployed on OCP
| kafka
| kafka 0.11
a| kafka 2.7.0
| syslog
| RFC-3164, RFC-5424
| rsyslog-8.39.0
|====
// Note: validate these items against the corresponding line of the test configuration files that Red Hat OpenShift Logging uses:
//
// cloudwatch https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/functional/outputs/forward_to_cloudwatch_test.go#L18
// elasticsearch https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/functional/outputs/forward_to_elasticsearch_index_test.go#L17
// es fluentd https://github.com/ViaQ/logging-fluentd/blob/release-5.5/fluentd/Gemfile.lock#L55
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.4/Makefile#L23
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/helpers/loki/receiver.go#L26
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.4/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/framework/functional/output_syslog.go#L13


@@ -1,58 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-5_{context}"]
= Supported log data output types in OpenShift Logging 5.5
Red Hat OpenShift Logging 5.5 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
[options="header"]
|====
| Output types | Protocols | Tested with
| Amazon CloudWatch
| REST over HTTPS
| The current version of Amazon CloudWatch
| elasticsearch
| elasticsearch
a| Elasticsearch 7.10.1
| fluentdForward
| fluentd forward v1
a| fluentd 1.14.6
logstash 7.10.1
| Loki
| REST over HTTP and HTTPS
| Loki 2.5.0 deployed on OCP
| kafka
| kafka 0.11
a| kafka 2.7.0
| syslog
| RFC-3164, RFC-5424
| rsyslog-8.39.0
|====
// Note: validate these items against the corresponding line of the test configuration files that Red Hat OpenShift Logging uses:
//
// cloudwatch https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/functional/outputs/forward_to_cloudwatch_test.go#L18
// elasticsearch https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/functional/outputs/elasticsearch/forward_to_elasticsearch_index_test.go#L24
// elasticsearch https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/framework/functional/output_elasticsearch7.go#L13
// es fluentd https://github.com/ViaQ/logging-fluentd/blob/release-5.5/fluentd/Gemfile.lock#L55
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.5/Makefile#L24
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/helpers/loki/receiver.go#L26
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.5/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/framework/functional/output_syslog.go#L14


@@ -1,67 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-6_{context}"]
= Supported log data output types in OpenShift Logging 5.6
Red Hat OpenShift Logging 5.6 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
[options="header"]
|====
| Output types | Protocols | Tested with
| Amazon CloudWatch
| REST over HTTPS
| The current version of Amazon CloudWatch
| elasticsearch
| elasticsearch
a| Elasticsearch 6.8.23
Elasticsearch 7.10.1
Elasticsearch 8.6.1
| fluentdForward
| fluentd forward v1
a| fluentd 1.14.6
logstash 7.10.1
| Loki
| REST over HTTP and HTTPS
| Loki 2.5.0 deployed on OCP
| kafka
| kafka 0.11
a| kafka 2.7.0
| syslog
| RFC-3164, RFC-5424
| rsyslog-8.39.0
|====
[IMPORTANT]
====
Fluentd does not support Elasticsearch 8 as of Logging 5.6.2.
Vector does not support the fluentd, logstash, or rsyslog outputs before Logging 5.7.0.
====
// Note: validate these items against the corresponding line of the test configuration files that Red Hat OpenShift Logging uses:
//
// cloudwatch https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/functional/outputs/cloudwatch/forward_to_cloudwatch_test.go#L13
// elasticsearch https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/framework/functional/output_elasticsearch.go#L16-L18
// es fluentd https://github.com/ViaQ/logging-fluentd/blob/release-5.6/fluentd/Gemfile.lock#L55
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.6/Makefile#L50
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/helpers/loki/receiver.go#L27
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.6/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/framework/functional/output_syslog.go#L14


@@ -1,28 +0,0 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
:_mod-docs-content-type: REFERENCE
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-7_{context}"]
= Supported log data output types in OpenShift Logging 5.7
Red{nbsp}Hat OpenShift Logging 5.7 provides the following output types and protocols for sending log data to target log collectors.
Red Hat tests each of the combinations shown in the following table. However, you should be able to send log data to a wider range of target log collectors that ingest these protocols.
.Logging 5.7 outputs
[options="header"]
|====================================================================================================
| Output | Protocol | Tested with | Fluentd | Vector
| Cloudwatch | REST over HTTP(S) | | ✓ | ✓
| Elasticsearch v6 | | v6.8.1 | ✓ | ✓
| Elasticsearch v7 | | v7.12.2, 7.17.7 | ✓ | ✓
| Elasticsearch v8 | | v8.4.3 | ✓ | ✓
| Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1 | ✓ |
| Google Cloud Logging | | | | ✓
| HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | ✓ | ✓
| Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | ✓ | ✓
| Loki | REST over HTTP(S) | Loki 2.3.0, 2.7 | ✓ | ✓
| Splunk | HEC | v8.2.9, 9.0.0 | | ✓
| Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7 | ✓ | ✓
|====================================================================================================


@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/logging-output-types.adoc
:_mod-docs-content-type: REFERENCE
[id="supported-log-outputs_{context}"]
= Supported log forwarding outputs
Outputs can be any of the following types:
.Supported log output types
[cols="5",options="header"]
|===
|Output type
|Protocol
|Tested with
|Logging versions
|Supported collector type
|Elasticsearch v6
|HTTP 1.1
|6.8.1, 6.8.23
|5.6+
|Fluentd, Vector
|Elasticsearch v7
|HTTP 1.1
|7.10.1, 7.12.2, 7.17.7
|5.6+
|Fluentd, Vector
|Elasticsearch v8
|HTTP 1.1
|8.4.3, 8.6.1
|5.6+
|Fluentd ^[1]^, Vector
|Fluent Forward
|Fluentd forward v1
|Fluentd 1.14.5, Fluentd 1.14.6, Logstash 7.10.1
|5.4+
|Fluentd
|Google Cloud Logging
|REST over HTTPS
|Latest
|5.7+
|Vector
|HTTP
|HTTP 1.1
|Fluentd 1.14.6, Vector 0.21
|5.7+
|Fluentd, Vector
|Kafka
|Kafka 0.11
|Kafka 2.4.1, 2.7.0, 3.3.1
|5.4+
|Fluentd, Vector
|Loki
|REST over HTTP and HTTPS
|2.2.1, 2.3.0, 2.5.0, 2.7
|5.4+
|Fluentd, Vector
|Splunk
|HEC
|8.2.9, 9.0.0
|5.7+
|Vector
|Syslog
|RFC3164, RFC5424
|Rsyslog 8.37.0-9.el7, rsyslog-8.39.0
|5.4+
|Fluentd, Vector ^[2]^
|Amazon CloudWatch
|REST over HTTPS
|Latest
|5.4+
|Fluentd, Vector
|===
[.small]
--
1. Fluentd does not support Elasticsearch 8 as of {logging} version 5.6.2.
2. Vector supports Syslog in {logging} version 5.7 and later.
--
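Because output support differs between the Fluentd and Vector collectors, the collector type matters when you choose outputs. As a sketch, assuming {logging} version 5.6 or later where the collector is configured in the `ClusterLogging` CR, Vector can be selected as follows:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: vector # <1>
----
<1> Set to `vector` or `fluentd`; the outputs available to you depend on this choice, as shown in the table above.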