diff --git a/_topic_map.yml b/_topic_map.yml
index f7fb79befd..8677270462 100644
--- a/_topic_map.yml
+++ b/_topic_map.yml
@@ -749,8 +749,6 @@ Topics:
   File: efk-logging-curator
 - Name: Configuring the logging collector
   File: efk-logging-fluentd
-- Name: Configuring systemd-journald
-  File: efk-logging-systemd
 - Name: Sending logs to external devices
   File: efk-logging-external
 - Name: Viewing Elasticsearch status
diff --git a/logging/config/efk-logging-configuring-about.adoc b/logging/config/efk-logging-configuring-about.adoc
index a2843e89bd..9929e54d48 100644
--- a/logging/config/efk-logging-configuring-about.adoc
+++ b/logging/config/efk-logging-configuring-about.adoc
@@ -19,4 +19,4 @@ For more information, see xref:../../logging/config/efk-logging-management.adoc#
 // assemblies.

 include::modules/efk-logging-deploying-about.adoc[leveloffset=+1]
-include::modules/infrastructure-moving-logging.adoc[leveloffset=+1]
+
diff --git a/logging/config/efk-logging-configuring.adoc b/logging/config/efk-logging-configuring.adoc
index f3c5ec19e6..8ddfb934e6 100644
--- a/logging/config/efk-logging-configuring.adoc
+++ b/logging/config/efk-logging-configuring.adoc
@@ -29,8 +29,6 @@ spec:
     logs:
       fluentd:
         resources: null
-      rsyslog:
-        resources: null
       type: fluentd
   curation:
     curator:
@@ -69,12 +67,14 @@ environment variable in the `cluster-logging-operator` Deployment.

 * You can specify specific nodes for the logging components using node selectors.

+////
 * You can specify the Log collectors to deploy to each node in a cluster, either Fluentd or Rsyslog.

 [IMPORTANT]
 ====
 The Rsyslog log collector is currently a Technology Preview feature.
 ====
+////

 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
@@ -82,5 +82,4 @@ The Rsyslog log collector is currently a Technology Preview feature.
 // assemblies.

 include::modules/efk-logging-configuring-image-about.adoc[leveloffset=+1]
-
 include::modules/efk-logging-configuring-node-selector.adoc[leveloffset=+1]
diff --git a/logging/config/efk-logging-fluentd.adoc b/logging/config/efk-logging-fluentd.adoc
index 42723ff933..8a16d818a3 100644
--- a/logging/config/efk-logging-fluentd.adoc
+++ b/logging/config/efk-logging-fluentd.adoc
@@ -5,9 +5,9 @@ include::modules/common-attributes.adoc[]

 toc::[]

-{product-title} uses Fluentd or Rsyslog to collect operations and application logs from your cluster which {product-title} enriches with Kubernetes Pod and Namespace metadata.
+{product-title} uses Fluentd to collect operations and application logs from your cluster which {product-title} enriches with Kubernetes Pod and Namespace metadata.

-You can configure log rotation, log location, use an external log aggregator, change the log collector, and make other configurations for either log collector.
+You can configure log rotation, log location, use an external log aggregator, and make other configurations for the log collector.

 [NOTE]
 ====
@@ -28,9 +28,8 @@ include::modules/efk-logging-fluentd-limits.adoc[leveloffset=+1]
 ////
 4.1
 modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]
-////
-
 include::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
+////

 include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]

diff --git a/logging/config/efk-logging-systemd.adoc b/logging/config/efk-logging-systemd.adoc
index d9af320ad5..ab707a0d04 100644
--- a/logging/config/efk-logging-systemd.adoc
+++ b/logging/config/efk-logging-systemd.adoc
@@ -1,11 +1,11 @@
 :context: efk-logging-systemd
 [id="efk-logging-systemd"]
-= Configuring systemd-journald and Rsyslog
+= Configuring systemd-journald and Fluentd
 include::modules/common-attributes.adoc[]

 toc::[]

-Because Fluentd and Rsyslog read from the journal, and the journal default
+Because Fluentd reads from the journal, and the journal default
 settings are very low, journal entries can be lost because the journal
 cannot keep up with the logging rate from system services.

diff --git a/logging/efk-logging-eventrouter.adoc b/logging/efk-logging-eventrouter.adoc
index 3409016096..b47e9607ab 100644
--- a/logging/efk-logging-eventrouter.adoc
+++ b/logging/efk-logging-eventrouter.adoc
@@ -9,10 +9,12 @@ The Event Router communicates with the {product-title} and prints {product-title

 If Cluster Logging is deployed, you can view the {product-title} events in Kibana.

+////
 [NOTE]
 ====
 The Event Router is not supported for the Rsyslog log collector.
 ====
+////

 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
diff --git a/logging/efk-logging-exported-fields.adoc b/logging/efk-logging-exported-fields.adoc
index 58f775048e..f5fe491a18 100644
--- a/logging/efk-logging-exported-fields.adoc
+++ b/logging/efk-logging-exported-fields.adoc
@@ -17,7 +17,6 @@ Not all of these fields are present in every record.
 The fields are grouped in the following categories:

 * `exported-fields-Default`
-* `exported-fields-rsyslog`
 * `exported-fields-systemd`
 * `exported-fields-kubernetes`
 * `exported-fields-pipeline_metadata`
@@ -25,6 +24,9 @@ The fields are grouped in the following categories:
 * `exported-fields-aushape`
 * `exported-fields-tlog`

+// * `exported-fields-rsyslog`
+
+
 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
 // modules required to cover the user story. You can also include other
@@ -32,7 +34,7 @@ The fields are grouped in the following categories:

 include::modules/efk-logging-exported-fields-default.adoc[leveloffset=+1]

-include::modules/efk-logging-exported-fields-rsyslog.adoc[leveloffset=+1]
+//modules/efk-logging-exported-fields-rsyslog.adoc[leveloffset=+1]

 include::modules/efk-logging-exported-fields-systemd.adoc[leveloffset=+1]

diff --git a/modules/efk-logging-about-components.adoc b/modules/efk-logging-about-components.adoc
index 6101f5e45c..931d370391 100644
--- a/modules/efk-logging-about-components.adoc
+++ b/modules/efk-logging-about-components.adoc
@@ -8,7 +8,7 @@
 There are currently 5 different types of cluster logging components:

 * logStore - This is where the logs will be stored. The current implementation is Elasticsearch.
-* collection - This is the component that collects logs from the node, formats them, and stores them in the logStore, either Fluentd or Rsyslog.
+* collection - This is the component that collects logs from the node, formats them, and stores them in the logStore. The current implementation is Fluentd.
 * visualization - This is the UI component used to view logs, graphs, charts, and so forth. The current implementation is Kibana.
 * curation - This is the component that trims logs by age. The current implementation is Curator.
 * event routing - This is the component forwards events to cluster logging. The current implementation is Event Router.
diff --git a/modules/efk-logging-about-crd.adoc b/modules/efk-logging-about-crd.adoc
index f0cd450158..19d3b6ea6a 100644
--- a/modules/efk-logging-about-crd.adoc
+++ b/modules/efk-logging-about-crd.adoc
@@ -20,50 +20,6 @@ ifdef::openshift-dedicated[]
 ----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
-metadata:
-  name: "instance"
-  namespace: "openshift-logging"
-spec:
-  managementState: "Managed"
-  logStore:
-    type: "elasticsearch"
-    elasticsearch:
-      nodeCount: 3
-      storage:
-        storageClassName: "gp2"
-        size: "200Gi"
-      redundancyPolicy: "SingleRedundancy"
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-      resources:
-        request:
-          memory: 8G
-  visualization:
-    type: "kibana"
-    kibana:
-      replicas: 1
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-  curation:
-    type: "curator"
-    curator:
-      schedule: "30 3 * * *"
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-  collection:
-    logs:
-      type: "fluentd"
-      fluentd: {}
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-----
-endif::[]
-
-ifdef::openshift-enterprise,openshift-origin[]
-[source,yaml]
-----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogging"
 metadata:
   name: "instance"
   namespace: openshift-logging
diff --git a/modules/efk-logging-about-eventrouter.adoc b/modules/efk-logging-about-eventrouter.adoc
index 9913c72bf0..9d354634b7 100644
--- a/modules/efk-logging-about-eventrouter.adoc
+++ b/modules/efk-logging-about-eventrouter.adoc
@@ -12,7 +12,9 @@ The Event Router collects events and converts them into JSON format, which
 takes those events and pushes them to `STDOUT`. Fluentd indexes the events to
 the `.operations` index.

+////
 [NOTE]
 ====
 The Event Router is not supported for the Rsyslog log collector.
 ====
+////
diff --git a/modules/efk-logging-about-fluentd.adoc b/modules/efk-logging-about-fluentd.adoc
index 9d06f8d98f..836489fa84 100644
--- a/modules/efk-logging-about-fluentd.adoc
+++ b/modules/efk-logging-about-fluentd.adoc
@@ -5,7 +5,7 @@
 [id="efk-logging-about-fluentd_{context}"]
 = About the logging collector

-{product-title} can use Fluentd or Rsyslog to collect data about your cluster.
+{product-title} uses Fluentd to collect data about your cluster.

 The logging collector is deployed as a DaemonSet in {product-title} that deploys pods to each {product-title} node. `journald` is the system log source supplying log messages from the operating system, the container runtime, and {product-title}.
diff --git a/modules/efk-logging-clo-status.adoc b/modules/efk-logging-clo-status.adoc
index 03adc869e3..9d19cc01cc 100644
--- a/modules/efk-logging-clo-status.adoc
+++ b/modules/efk-logging-clo-status.adoc
@@ -57,10 +57,6 @@ status: <1>
       - fluentd-6l2ff
       - fluentd-flpnn
       - fluentd-n2frh
-    rsyslogStatus:
-      Nodes: null
-      daemonSet: ""
-      pods: null
   curation: <3>
     curatorStatus:
     - cronJobs: curator
@@ -249,8 +245,4 @@ Status:
       Failed:
       Not Ready:
      Ready:
-  Rsyslog Status:
-    Nodes:
-    Daemon Set:
-    Pods:
 ----
diff --git a/modules/efk-logging-configuring-image-about.adoc b/modules/efk-logging-configuring-image-about.adoc
index 53df3c749c..3dfb74ee46 100644
--- a/modules/efk-logging-configuring-image-about.adoc
+++ b/modules/efk-logging-configuring-image-about.adoc
@@ -12,25 +12,31 @@ defined in the *cluster-logging-operator* deployment in the *openshift-logging*
 You can view the images by running the following command:

 ----
-oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep _IMAGE
+$ oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep _IMAGE
+----

+----
 ELASTICSEARCH_IMAGE=registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2 <1>
 FLUENTD_IMAGE=registry.redhat.io/openshift4/ose-logging-fluentd:v4.2 <2>
 KIBANA_IMAGE=registry.redhat.io/openshift4/ose-logging-kibana5:v4.2 <3>
 CURATOR_IMAGE=registry.redhat.io/openshift4/ose-logging-curator5:v4.2 <4>
 OAUTH_PROXY_IMAGE=registry.redhat.io/openshift4/ose-oauth-proxy:v4.2 <5>
-RSYSLOG_IMAGE=registry.redhat.io/openshift4/ose-logging-rsyslog:v4.2 <6>
 ----
 <1> *ELASTICSEARCH_IMAGE* deploys Elasticsearch.
 <2> *FLUENTD_IMAGE* deploys Fluentd.
 <3> *KIBANA_IMAGE* deploys Kibana.
 <4> *CURATOR_IMAGE* deploys Curator.
 <5> *OAUTH_PROXY_IMAGE* defines OAUTH for OpenShift Container Platform.
+
+////
+RSYSLOG_IMAGE=registry.redhat.io/openshift4/ose-logging-rsyslog:v4.2 <6>
 <6> *RSYSLOG_IMAGE* deploys Rsyslog.
+
 [NOTE]
 ====
 The Rsyslog log collector is in Technology Preview.
 ====
+////

 The values might be different depending on your environment.
diff --git a/modules/efk-logging-deploying-about.adoc b/modules/efk-logging-deploying-about.adoc
index f38ed7e5d0..18e384b079 100644
--- a/modules/efk-logging-deploying-about.adoc
+++ b/modules/efk-logging-deploying-about.adoc
@@ -136,6 +136,7 @@ You can set the policy that defines how Elasticsearch shards are replicated acro
 * `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
 * `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.

+////
 Log collectors::
 You can select which log collector is deployed as a Daemonset to each node in the {product-title} cluster, either:

@@ -156,6 +157,7 @@ You can select which log collector is deployed as a Daemonset to each node in th
           memory:
   type: "fluentd"
 ----
+////

 Curator schedule::
 You specify the schedule for Curator in the [cron format](https://en.wikipedia.org/wiki/Cron).
@@ -226,3 +228,4 @@ spec:
       cpu: 200m
       memory: 1Gi
 ----
+
diff --git a/modules/efk-logging-eventrouter-deploy.adoc b/modules/efk-logging-eventrouter-deploy.adoc
index a38e9f9ebc..75e64b8ec6 100644
--- a/modules/efk-logging-eventrouter-deploy.adoc
+++ b/modules/efk-logging-eventrouter-deploy.adoc
@@ -9,10 +9,12 @@ Use the following steps to deploy Event Router into your cluster.

 The following Template object creates the Service Account, ClusterRole, and ClusterRoleBinding required for the Event Router.

+////
 [NOTE]
 ====
 The Event Router is not supported for the Rsyslog log collector.
 ====
+////

 .Prerequisites

diff --git a/modules/efk-logging-exported-fields-default.adoc b/modules/efk-logging-exported-fields-default.adoc
index a71d48613b..8f6217e50e 100644
--- a/modules/efk-logging-exported-fields-default.adoc
+++ b/modules/efk-logging-exported-fields-default.adoc
@@ -45,7 +45,7 @@ or normalizer.
 |The IP address V6 of the source server, if available.

 | `level`
-|The logging level as provided by `rsyslog` (severitytext property), python's
+|The logging level as provided by rsyslog (severitytext property), python's
 logging module. Possible values are as listed at
 link:http://sourceware.org/git/?p=glibc.git;a=blob;f=misc/sys/syslog.h;h=ee01478c4b19a954426a96448577c5a76e6647c0;hb=HEAD#l74[`misc/sys/syslog.h`]
 plus `trace` and `unknown`. For example, "alert crit debug emerg err info notice
@@ -77,7 +77,7 @@ out of it by the collector or normalizer, that is UTF-8 encoded.

 | `service`
 |The name of the service associated with the logging entity, if available. For
-example, the `syslog APP-NAME` and `rsyslog programname` property are mapped to
+example, the `syslog APP-NAME` property is mapped to
 the service field.

 | `tags`
diff --git a/modules/efk-logging-external-elasticsearch.adoc b/modules/efk-logging-external-elasticsearch.adoc
index 7424e642da..2bbd5c85be 100644
--- a/modules/efk-logging-external-elasticsearch.adoc
+++ b/modules/efk-logging-external-elasticsearch.adoc
@@ -28,7 +28,7 @@ an instance of Fluentd that you control and that is configured with the

 To direct logs to a specific Elasticsearch instance:

-. Edit the `fluentd` or `rsyslog` DaemonSet in the *openshift-logging* project:
+. Edit the `fluentd` DaemonSet in the *openshift-logging* project:
 +
 [source,yaml]
 ----
diff --git a/modules/efk-logging-external-syslog.adoc b/modules/efk-logging-external-syslog.adoc
index 79991798a1..dd3c4feca3 100644
--- a/modules/efk-logging-external-syslog.adoc
+++ b/modules/efk-logging-external-syslog.adoc
@@ -8,10 +8,12 @@
 Use the `fluent-plugin-remote-syslog` plug-in on the host to send logs to an
 external syslog server.

+////
 [NOTE]
 ====
 For Rsyslog, you can edit the Rsyslog ConfigMap to add support for Syslog log forwarding using the *omfwd* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html[omfwd: syslog Forwarding Output Module]. To send logs to a different Rsyslog instance, you can the *omrelp* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omrelp.html[omrelp: RELP Output Module].
 ====
+////

 .Prerequisite

diff --git a/modules/efk-logging-fluentd-alerts.adoc b/modules/efk-logging-fluentd-alerts.adoc
index 5a47172251..f07318a4bb 100644
--- a/modules/efk-logging-fluentd-alerts.adoc
+++ b/modules/efk-logging-fluentd-alerts.adoc
@@ -40,6 +40,8 @@ Alerts are in one of the following states:

 |===

+////
+
 .Rsyslog Prometheus alerts
 |===
 |Alert |Message |Description |Severity
@@ -66,4 +68,4 @@ Alerts are in one of the following states:

 |===

-
+////
diff --git a/modules/efk-logging-fluentd-collector.adoc b/modules/efk-logging-fluentd-collector.adoc
index 0064bbb481..9216e2eae5 100644
--- a/modules/efk-logging-fluentd-collector.adoc
+++ b/modules/efk-logging-fluentd-collector.adoc
@@ -8,6 +8,9 @@
 {product-title} cluster logging uses Fluentd by default. Log collectors are deployed as a DaemonSet to each node in the cluster.

+Currently, Fluentd is the only supported log collector, so you cannot change the log collector type.
+
+////
 You can change the logging collector to Rsyslog, if needed.

 [IMPORTANT]
 ====
@@ -51,7 +54,7 @@ nodeSpec:
   collection:
     logs:
-      type: "rsyslog" <1>
+      type: "fluentd" <1>
 ----
-<1> Set the log collector to `rsyslog` or `fluentd`.
-
+<1> Set the log collector to `fluentd`.
+////
diff --git a/modules/efk-logging-fluentd-envvar.adoc b/modules/efk-logging-fluentd-envvar.adoc
index f3d4bc3789..9910007ee5 100644
--- a/modules/efk-logging-fluentd-envvar.adoc
+++ b/modules/efk-logging-fluentd-envvar.adoc
@@ -6,10 +6,9 @@
 = Configuring the logging collector using environment variables

 You can use environment variables to modify the
-configuration of the log collector, Fluentd or Rsyslog.
+configuration of the Fluentd log collector.

-See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in Github or the
-link:https://github.com/openshift/origin-aggregated-logging/blob/master/rsyslog/README.md[Rsyslog README] for lists of the
+See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in Github for lists of the
 available environment variables.

 .Prerequisite
diff --git a/modules/efk-logging-fluentd-external.adoc b/modules/efk-logging-fluentd-external.adoc
index c1fbe62b16..82696021d9 100644
--- a/modules/efk-logging-fluentd-external.adoc
+++ b/modules/efk-logging-fluentd-external.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * logging/efk-logging-external.adoc
+// * logging/efk-logging-fluentd.adoc

 [id="efk-logging-fluentd-external_{context}"]
 = Configuring Fluentd to send logs to an external log aggregator
@@ -13,14 +13,18 @@ hosted Fluentd has processed them.
 ifdef::openshift-origin[]
 The `secure-forward` plug-in is provided with the Fluentd image as of v1.4.0.
 endif::openshift-origin[]
+
+////
 ifdef::openshift-enterprise[]
 The `secure-forward` plug-in is supported by Fluentd only.
 endif::openshift-enterprise[]
-
+////
+////
 [NOTE]
 ====
 For Rsyslog, you can edit the Rsyslog configmap to add support for Syslog log forwarding using the *omfwd* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html[omfwd: syslog Forwarding Output Module]. To send logs to a different Rsyslog instance, you can the *omrelp* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omrelp.html[omrelp: RELP Output Module].
 ====
+////

 The logging deployment provides a `secure-forward.conf` section in the Fluentd
 configmap for configuring the external aggregator:
diff --git a/modules/efk-logging-fluentd-json.adoc b/modules/efk-logging-fluentd-json.adoc
index d1a3226615..cc1a80dfff 100644
--- a/modules/efk-logging-fluentd-json.adoc
+++ b/modules/efk-logging-fluentd-json.adoc
@@ -5,10 +5,10 @@
 [id="efk-logging-fluentd-json_{context}"]
 = Configuring log collection JSON parsing

-You can configure the log collector, Fluentd or Rsyslog, to determine if a log message is in *JSON* format and merge
+You can configure the Fluentd log collector to determine if a log message is in *JSON* format and merge
 the message into the JSON payload document posted to Elasticsearch. This feature is disabled by default.

-You can enable or disable this feature by editing the `MERGE_JSON_LOG` environment variable in the *fluentd* or *rsyslog* daemonset.
+You can enable or disable this feature by editing the `MERGE_JSON_LOG` environment variable in the *fluentd* daemonset.

 [IMPORTANT]
 ====
@@ -18,7 +18,7 @@ Enabling this feature comes with risks, including:
 * Potential buffer storage leak caused by rejected message cycling.
 * Overwrite of data for field with same names.

-The features in this topic should be used by only experienced Fluentd, Rsyslog, and Elasticsearch users.
+The features in this topic should be used by only experienced Fluentd and Elasticsearch users.
 ====

 .Prerequisites
@@ -32,12 +32,13 @@ Use the following command to enable this feature:
 ----
 oc set env ds/fluentd MERGE_JSON_LOG=true <1>
 ----
+<1> Set this to `false` to disable this feature or `true` to enable this feature.

+////
 ----
 oc set env ds/rsyslog MERGE_JSON_LOG=true <1>
 ----
-
-<1> Set this to `false` to disable this feature or `true` to enable this feature.
+////

 *Setting MERGE_JSON_LOG and CDM_UNDEFINED_TO_STRING*

@@ -46,7 +47,7 @@ If you set the `MERGE_JSON_LOG` and `CDM_UNDEFINED_TO_STRING` enviroment variabl
 When Fluentd rolls over the indices for the next day's logs, it will create a brand new index. The field definitions are updated and you will not get the *400* error.

 Records that have *hard* errors, such as schema violations, corrupted data, and so forth, cannot be retried. The log collector sends the records for error handling. If you link:https://docs.fluentd.org/v1.0/articles/config-file#@error-label[add a
-`