OBSDOCS-206: Add multi ClusterLogForwarder docs
Committed by: openshift-cherrypick-robot
Parent: c6129e3d45
Commit: 7963b1a797

@@ -2543,8 +2543,10 @@ Topics:
- Name: Log collection and forwarding
  Dir: log_collection_forwarding
  Topics:
  - Name: About log forwarding
  - Name: About log collection and forwarding
    File: log-forwarding
  - Name: Log output types
    File: logging-output-types
  - Name: Enabling JSON log forwarding
    File: cluster-logging-enabling-json-logging
  - Name: Configuring the logging collector

@@ -2572,6 +2574,8 @@ Topics:
- Name: Exported fields
  File: cluster-logging-exported-fields
  Distros: openshift-enterprise,openshift-origin
- Name: Glossary
  File: logging-common-terms
---
Name: Monitoring
Dir: monitoring

@@ -577,8 +577,10 @@ Topics:
- Name: Log collection and forwarding
  Dir: log_collection_forwarding
  Topics:
  - Name: About log forwarding
  - Name: About log collection and forwarding
    File: log-forwarding
  - Name: Log output types
    File: logging-output-types
  - Name: Enabling JSON log forwarding
    File: cluster-logging-enabling-json-logging
  - Name: Configuring the logging collector

@@ -604,6 +606,8 @@ Topics:
    File: cluster-logging-uninstall
- Name: Exported fields
  File: cluster-logging-exported-fields
- Name: Glossary
  File: logging-common-terms
---
Name: Monitoring
Dir: monitoring

@@ -731,8 +731,10 @@ Topics:
- Name: Log collection and forwarding
  Dir: log_collection_forwarding
  Topics:
  - Name: About log forwarding
  - Name: About log collection and forwarding
    File: log-forwarding
  - Name: Log output types
    File: logging-output-types
  - Name: Enabling JSON log forwarding
    File: cluster-logging-enabling-json-logging
  - Name: Configuring the logging collector

@@ -758,6 +760,8 @@ Topics:
    File: cluster-logging-uninstall
- Name: Exported fields
  File: cluster-logging-exported-fields
- Name: Glossary
  File: logging-common-terms
---
Name: Monitoring
Dir: monitoring

@@ -2,33 +2,37 @@
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="cluster-logging"]
= Understanding the {logging-title}
= About Logging
:context: cluster-logging

toc::[]

ifdef::openshift-enterprise,openshift-rosa,openshift-dedicated,openshift-webscale,openshift-origin[]
As a cluster administrator, you can deploy the {logging} to aggregate all the logs from your {product-title} cluster, such as node system audit logs, application container logs, and infrastructure logs. The {logging} aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].
As a cluster administrator, you can deploy {logging} on an {product-title} cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs. You can forward logs to your chosen log outputs, including on-cluster, Red{nbsp}Hat managed log storage. You can also visualize your log data in the {product-title} web console, or xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[the Kibana web console], depending on your deployed log storage solution.

[IMPORTANT]
====
The Kibana web console is now deprecated and will be removed in a future logging release.
====

{product-title} cluster administrators can deploy the {logging} by using Operators. For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing the {logging-title}].

The Operators are responsible for deploying, upgrading, and maintaining the {logging}. After the Operators are installed, you can create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support the {logging}. You can also create a `ClusterLogForwarder` CR to specify which logs are collected, how they are transformed, and where they are forwarded to.
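
A minimal sketch of such a `ClusterLogging` CR, assuming the legacy `instance`/`openshift-logging` naming convention described elsewhere in this commit and the Vector collector as an example choice:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance # the name that the legacy workflow expects
  namespace: openshift-logging # the namespace that the legacy workflow expects
spec:
  managementState: Managed
  collection:
    type: vector # assumed collector choice; fluentd is the deprecated alternative
----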

[NOTE]
====
Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-log-store[Forward audit logs to the log store].
====
endif::[]

include::modules/logging-architecture-overview.adoc[leveloffset=+1]

include::modules/cluster-logging-about.adoc[leveloffset=+1]

ifdef::openshift-rosa,openshift-dedicated[]
include::modules/cluster-logging-cloudwatch.adoc[leveloffset=+1]
.Next steps
* See xref:../logging/log_collection_forwarding/log-forwarding.adoc#cluster-logging-collector-log-forward-cloudwatch_log-forwarding[Forwarding logs to Amazon CloudWatch] for instructions.
endif::[]

include::modules/logging-common-terms.adoc[leveloffset=+1]
include::modules/cluster-logging-about.adoc[leveloffset=+1]

For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing the {logging-title}].

include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2]

include::modules/cluster-logging-collecting-storing-kubernetes-events.adoc[leveloffset=+2]

@@ -8,9 +8,35 @@ toc::[]

To configure the {logging-title}, you customize the `ClusterLogging` custom resource (CR).

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/cluster-logging-about-crd.adoc[leveloffset=+1]

////
// collecting this information here for a future PR

If you want to specify collector resources or scheduling, you must create a `ClusterLogging` CR:

.ClusterLogging resource example
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: audit-collector <1>
  namespace: openshift-kube-apiserver <2>
spec:
  collection:
    type: "vector" <3>
    resources:
      limits:
        memory: 2G
# ...
----
<1> The name of the `ClusterLogging` CR must match the name of the `ClusterLogForwarder` CR.
<2> The namespace of the `ClusterLogging` CR must match the namespace of the `ClusterLogForwarder` CR.
<3> The collector type that you want to use. This example uses the Vector collector.

[NOTE]
====
The relevant `spec` fields for this CR in multi log forwarder mode are the `managementState` and `collection` fields. All other `spec` fields are ignored.
====
////

logging/log_collection_forwarding/_attributes (symbolic link, 1 line)
@@ -0,0 +1 @@
../../_attributes/

logging/log_collection_forwarding/images (symbolic link, 1 line)
@@ -0,0 +1 @@
../../images/

@@ -2,43 +2,52 @@
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="log-forwarding"]
= About log forwarding
= About log collection and forwarding
:context: log-forwarding

toc::[]

By default, the {logging} sends container and infrastructure logs to the default internal log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
Administrators can create `ClusterLogForwarder` resources that specify which logs are collected, how they are transformed, and where they are forwarded to.

To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
`ClusterLogForwarder` resources can be used to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported so that log forwarders can be configured to send logs securely.

Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs.
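
A minimal sketch of that authorization flow, assuming a hypothetical service account named `collector` in the `openshift-logging` namespace and the `collect-application-logs` cluster role that this commit documents:

[source,terminal]
----
$ oc create sa collector -n openshift-logging
$ oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:collector
----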

////
include::modules/log-forwarding-modes.adoc[leveloffset=+1]

[id="log-forwarding-enabling-multi-clf-mode"]
== Enabling multi log forwarder mode for a cluster

To use multi log forwarder mode, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the `ClusterLogForwarder` resource to control access permissions.

include::modules/log-collection-rbac-permissions.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
ifdef::openshift-enterprise[]
* xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]
* xref:../../authentication/using-service-accounts-in-applications.adoc#using-service-accounts-in-applications[Using service accounts in applications]
endif::[]
* link:https://kubernetes.io/docs/reference/access-authn-authz/rbac/[Using RBAC Authorization Kubernetes documentation]

include::modules/logging-create-clf.adoc[leveloffset=+1]
////

[id="log-forwarding-audit-logs"]
== Sending audit logs to the internal log store

By default, the {logging} sends container and infrastructure logs to the default internal log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.

[NOTE]
====
To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-log-store[Forward audit logs to the log store].
To send audit logs to the internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-log-store[Forward audit logs to the log store].
====
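
A sketch of the forwarding configuration that the note describes, assuming a hypothetical pipeline name; `default` refers to the internal log store:

[source,yaml]
----
spec:
  pipelines:
  - name: forward-audit-to-internal # hypothetical pipeline name
    inputRefs:
    - audit
    outputRefs:
    - default # the internal log store
----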

When you forward logs externally, the {logging} creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.

// unused files - either include or delete
// cluster-logging-log-forwarding-disable.adoc

include::modules/cluster-logging-collector-log-forwarding-about.adoc[leveloffset=+1]

include::modules/cluster-logging-forwarding-separate-indices.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-3.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-4.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-5.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-6.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-7.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forward-es.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forward-fluentd.adoc[leveloffset=+1]

logging/log_collection_forwarding/logging-output-types.adoc (new file, 37 lines)
@@ -0,0 +1,37 @@
:_content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="logging-output-types"]
= Log output types
:context: logging-output-types

toc::[]

Log outputs specified in the `ClusterLogForwarder` CR can be any of the following types:

`default`:: The on-cluster, Red{nbsp}Hat managed log store. You are not required to configure the default output.
+
[NOTE]
====
If you configure a `default` output, you receive an error message, because the `default` output name is reserved for referencing the on-cluster, Red{nbsp}Hat managed log store.
====
`loki`:: Loki, a horizontally scalable, highly available, multi-tenant log aggregation system.
`kafka`:: A Kafka broker. The `kafka` output can use a TCP or TLS connection.
`elasticsearch`:: An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.
`fluentdForward`:: An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS.
+
[IMPORTANT]
====
The `fluentdForward` output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the `http` output.
====
`syslog`:: An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection.
`cloudwatch`:: Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS).
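
As a rough sketch of how these types appear under `spec.outputs` in a `ClusterLogForwarder` CR, assuming placeholder names and endpoints rather than real defaults:

[source,yaml]
----
spec:
  outputs:
  - name: loki-example # placeholder output name
    type: loki
    url: https://loki.example.com:3100 # placeholder endpoint
  - name: es-example # placeholder output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200 # placeholder endpoint
----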

// supported outputs by version
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-7.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-6.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-5.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-4.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-3.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-2.adoc[leveloffset=+1]
include::modules/cluster-logging-collector-log-forwarding-supported-plugins-5-1.adoc[leveloffset=+1]

logging/log_collection_forwarding/modules (symbolic link, 1 line)
@@ -0,0 +1 @@
../../modules/

logging/log_collection_forwarding/snippets (symbolic link, 1 line)
@@ -0,0 +1 @@
../../snippets/

@@ -1,10 +1,11 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging.adoc
:_content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="openshift-logging-common-terms"]
= Glossary
:context: openshift-logging-common-terms

:_content-type: REFERENCE
[id="openshift-logging-common-terms_{context}"]
= Glossary of common terms for {product-title} Logging
toc::[]

This glossary defines common terms that are used in the {product-title} Logging content.

@@ -81,7 +82,7 @@ toleration::
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints.

web console::
A user interface (UI) to manage {product-title}.
ifdef::openshift-rosa,openshift-dedicated[]
The web console for {product-title} can be found at link:https://console.redhat.com/openshift[https://console.redhat.com/openshift].
endif::[]

@@ -6,6 +6,13 @@
[id="cluster-logging-about-collector_{context}"]
= About the logging collector

The Cluster Logging Operator deploys a collector based on the `ClusterLogForwarder` resource specification. There are two collector options supported by this Operator: the Fluentd collector, and the Vector collector.
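
A minimal sketch of how the collector is selected, reusing the `collection` stanza of the `ClusterLogging` resource shown in the commented-out example earlier in this commit; the Vector choice here is an example:

[source,yaml]
----
spec:
  collection:
    type: vector # or "fluentd", which is deprecated
----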

[IMPORTANT]
====
The Fluentd collector is now deprecated and will be removed in a future logging release.
====

The {logging-title} collects container and node logs.

By default, the log collector uses the following sources:

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

[id="cluster-logging-collector-log-forwarding-supported-plugins-5-1_{context}"]

@@ -50,4 +50,4 @@ kafka 2.7.0
Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
====

//ENG-Feedback: How can we reformat this to accurately reflect 5.4?

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

[id="cluster-logging-collector-log-forwarding-supported-plugins-5-2_{context}"]

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

[id="cluster-logging-collector-log-forwarding-supported-plugins-5-3_{context}"]

@@ -51,7 +51,7 @@ a| kafka 2.7.0
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.3/Makefile#L23
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/helpers/loki/receiver.go#L25
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.3/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.3/test/framework/functional/output_syslog.go#L13

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

[id="cluster-logging-collector-log-forwarding-supported-plugins-5-4_{context}"]

@@ -51,7 +51,7 @@ a| kafka 2.7.0
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.4/Makefile#L23
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/helpers/loki/receiver.go#L26
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.4/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.4/test/framework/functional/output_syslog.go#L13

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

[id="cluster-logging-collector-log-forwarding-supported-plugins-5-5_{context}"]

@@ -52,7 +52,7 @@ a| kafka 2.7.0
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.5/Makefile#L24
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/helpers/loki/receiver.go#L26
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.5/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.5/test/framework/functional/output_syslog.go#L14

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

[id="cluster-logging-collector-log-forwarding-supported-plugins-5-6_{context}"]

@@ -61,7 +61,7 @@ Vector doesn't support fluentd/logstash/rsyslog before 5.7.0.
// fluentd https://github.com/openshift/cluster-logging-operator/blob/release-5.6/Makefile#L50
// kafka https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/helpers/kafka/constants.go#L17
// kafka fluentd https://github.com/zendesk/ruby-kafka/tree/v1.4.0#compatibility
// logstash https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/functional/outputs/forward_to_logstash_test.go#L30
// loki https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/helpers/loki/receiver.go#L27
// syslog protocols https://github.com/openshift/cluster-logging-operator/tree/release-5.6/test/functional/outputs/syslog
// syslog version https://github.com/openshift/cluster-logging-operator/blob/release-5.6/test/framework/functional/output_syslog.go#L14

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-external.adoc
// * logging/log_collection_forwarding/logging-output-types.adoc

:_content-type: REFERENCE
[id="cluster-logging-collector-log-forwarding-supported-plugins-5-7_{context}"]

modules/log-collection-rbac-permissions.adoc (new file, 28 lines)
@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/log-forwarding.adoc

:_content-type: PROCEDURE
[id="log-collection-rbac-permissions_{context}"]
= Authorizing log collection RBAC permissions

In logging 5.8 and later, the Cluster Logging Operator provides `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively.

You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account.

.Prerequisites

* The Cluster Logging Operator is installed in the `openshift-logging` namespace.
* You have administrator permissions.

.Procedure

. Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account.
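+
For example, the following command creates a service account; the name `collector` and the `openshift-logging` namespace are placeholder choices, not requirements:
+
[source,terminal]
----
$ oc create sa collector -n openshift-logging
----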

. Bind the appropriate cluster roles to the service account:
+
.Example binding command
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>
----
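+
For example, the following hypothetical invocation binds the `collect-audit-logs` cluster role to a service account named `collector` in the `openshift-logging` namespace:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:collector
----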

modules/log-forwarding-modes.adoc (new file, 33 lines)
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/log-forwarding.adoc

:_content-type: CONCEPT
[id="log-forwarding-modes_{context}"]
= Log forwarding modes

There are two log forwarding modes available: legacy mode, and multi log forwarder mode.

[IMPORTANT]
====
Only the Vector collector is supported for use with multi log forwarder mode. The Fluentd collector can only be used with legacy mode.
====

[id="log-forwarding-modes-legacy_{context}"]
== Legacy mode

In legacy mode, you can only use one log forwarder in your cluster. The `ClusterLogForwarder` resource in this mode must be named `instance`, and must be created in the `openshift-logging` namespace. The `ClusterLogForwarder` resource also requires a corresponding `ClusterLogging` resource named `instance` in the `openshift-logging` namespace.
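
A skeleton of the legacy-mode pairing, showing only the identifying fields and omitting the `spec` contents:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance # required name in legacy mode
  namespace: openshift-logging # required namespace in legacy mode
# ...
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance # must match the ClusterLogging resource
  namespace: openshift-logging
# ...
----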

[id="log-forwarding-modes-multi-clf_{context}"]
== Multi log forwarder mode

Multi log forwarder mode is available in logging 5.8 and later, and provides the following functionality:

* Administrators can control which users are allowed to define log collection and which logs they are allowed to collect.
* Users who have the required permissions are able to specify additional log collection configurations.
* Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated.

In multi log forwarder mode, you are not required to create a corresponding `ClusterLogging` resource for your `ClusterLogForwarder` resource. You can create multiple `ClusterLogForwarder` resources using any name, in any namespace, with the following exceptions:

* You cannot create a `ClusterLogForwarder` resource named `instance` in the `openshift-logging` namespace, because this name is reserved for a log forwarder that supports the legacy workflow using the Fluentd collector.
* You cannot create a `ClusterLogForwarder` resource named `collector` in the `openshift-logging` namespace, because this name is reserved for the collector.

modules/logging-create-clf.adoc (new file, 68 lines)
@@ -0,0 +1,68 @@
// Module included in the following assemblies:
//
// * logging/log_collection_forwarding/log-forwarding.adoc

:_content-type: REFERENCE
[id="logging-create-clf_{context}"]
= Creating a log forwarder

To create a log forwarder, you must create a `ClusterLogForwarder` CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using multi log forwarder mode, you must also reference the service account in the `ClusterLogForwarder` CR.

If you are using multi log forwarder mode on your cluster, you can create `ClusterLogForwarder` custom resources (CRs) in any namespace, using any name.
If you are using legacy mode, the `ClusterLogForwarder` CR must be named `instance`, and must be created in the `openshift-logging` namespace.

[IMPORTANT]
====
You need administrator permissions for the namespace where you create the `ClusterLogForwarder` CR.
====

.ClusterLogForwarder resource example
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name> <1>
  namespace: <log_forwarder_namespace> <2>
spec:
  serviceAccount: <service_account_name> <3>
  pipelines:
  - inputRefs:
    - <log_type> <4>
    outputRefs:
    - <output_name> <6>
  outputs:
  - name: <output_name> <6>
    type: <output_type> <5>
    url: <log_output_url> <7>
# ...
----
<1> In legacy mode, the CR name must be `instance`. In multi log forwarder mode, you can use any name.
<2> In legacy mode, the CR namespace must be `openshift-logging`. In multi log forwarder mode, you can use any namespace.
<3> The name of your service account. The service account is only required in multi log forwarder mode.
<4> The log types that are collected. The value for this field can be `audit` for audit logs, `application` for application logs, `infrastructure` for infrastructure logs, or a named input that has been defined for your application.
<5> The type of output that you want to forward logs to. The value of this field can be `default`, `loki`, `kafka`, `elasticsearch`, `fluentdForward`, `syslog`, or `cloudwatch`.
+
[NOTE]
====
The `default` output type is not supported in multi log forwarder mode.
====
<6> A name for the output that you want to forward logs to.
<7> The URL of the output that you want to forward logs to.
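
Putting these fields together, a hypothetical multi log forwarder mode configuration that sends application logs to an external Loki endpoint might look like the following sketch; every name and the URL are placeholders:

[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
  name: app-forwarder # any name is allowed in multi log forwarder mode
  namespace: app-logging # any namespace is allowed in multi log forwarder mode
spec:
  serviceAccount: collector # service account with log collection permissions
  outputs:
  - name: loki-example
    type: loki
    url: https://loki.example.com:3100
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - loki-example
----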

// To be followed up on by adding input examples / docs:
////
spec:
  inputs:
  - name: chatty-app
    type: application
    selector:
      matchLabels:
        load: heavy
  pipelines:
  - inputRefs:
    - chatty-app
    - infrastructure
    outputRefs:
    - default
////