Remove rsyslog from cluster logging documentation

Committed by: openshift-cherrypick-robot
Parent: 3a7b735d01
Commit: 02da2bbe43
@@ -749,8 +749,6 @@ Topics:
     File: efk-logging-curator
 - Name: Configuring the logging collector
   File: efk-logging-fluentd
 - Name: Configuring systemd-journald
   File: efk-logging-systemd
 - Name: Sending logs to external devices
   File: efk-logging-external
 - Name: Viewing Elasticsearch status
@@ -19,4 +19,4 @@ For more information, see xref:../../logging/config/efk-logging-management.adoc#
 // assemblies.

 include::modules/efk-logging-deploying-about.adoc[leveloffset=+1]
 include::modules/infrastructure-moving-logging.adoc[leveloffset=+1]
@@ -29,8 +29,6 @@ spec:
     logs:
       fluentd:
         resources: null
-      rsyslog:
-        resources: null
       type: fluentd
   curation:
     curator:
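After this hunk, the `collection` stanza of the `ClusterLogging` custom resource carries only the Fluentd entry. A minimal sketch of the resulting shape, with indentation reconstructed from the full example later in this diff:

[source,yaml]
----
spec:
  collection:
    logs:
      fluentd:
        resources: null   # null lets the operator apply its default requests and limits
      type: fluentd       # the only supported collector after this change
----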
@@ -69,12 +67,14 @@ environment variable in the `cluster-logging-operator` Deployment.

 * You can specify specific nodes for the logging components using node selectors.

+////
 * You can specify the Log collectors to deploy to each node in a cluster, either Fluentd or Rsyslog.

 [IMPORTANT]
 ====
 The Rsyslog log collector is currently a Technology Preview feature.
 ====
+////

 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
@@ -82,5 +82,4 @@ The Rsyslog log collector is currently a Technology Preview feature.
 // assemblies.

 include::modules/efk-logging-configuring-image-about.adoc[leveloffset=+1]

 include::modules/efk-logging-configuring-node-selector.adoc[leveloffset=+1]
@@ -5,9 +5,9 @@ include::modules/common-attributes.adoc[]

 toc::[]

-{product-title} uses Fluentd or Rsyslog to collect operations and application logs from your cluster which {product-title} enriches with Kubernetes Pod and Namespace metadata.
+{product-title} uses Fluentd to collect operations and application logs from your cluster which {product-title} enriches with Kubernetes Pod and Namespace metadata.

-You can configure log rotation, log location, use an external log aggregator, change the log collector, and make other configurations for either log collector.
+You can configure log rotation, log location, use an external log aggregator, and make other configurations for the log collector.

 [NOTE]
 ====
@@ -28,9 +28,8 @@ include::modules/efk-logging-fluentd-limits.adoc[leveloffset=+1]
 ////
 4.1
 modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]
 ////

 include::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
 ////

 include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]
@@ -1,11 +1,11 @@
|
||||
:context: efk-logging-systemd
|
||||
[id="efk-logging-systemd"]
|
||||
= Configuring systemd-journald and Rsyslog
|
||||
= Configuring systemd-journald and Fluentd
|
||||
include::modules/common-attributes.adoc[]
|
||||
|
||||
toc::[]
|
||||
|
||||
Because Fluentd and Rsyslog read from the journal, and the journal default
|
||||
Because Fluentd reads from the journal, and the journal default
|
||||
settings are very low, journal entries can be lost because the journal cannot keep up
|
||||
with the logging rate from system services.
|
||||
|
||||
|
||||
@@ -9,10 +9,12 @@ The Event Router communicates with the {product-title} and prints {product-title

 If Cluster Logging is deployed, you can view the {product-title} events in Kibana.

+////
 [NOTE]
 ====
 The Event Router is not supported for the Rsyslog log collector.
 ====
+////

 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
@@ -17,7 +17,6 @@ Not all of these fields are present in every record.
 The fields are grouped in the following categories:

 * `exported-fields-Default`
-* `exported-fields-rsyslog`
 * `exported-fields-systemd`
 * `exported-fields-kubernetes`
 * `exported-fields-pipeline_metadata`
@@ -25,6 +24,9 @@ The fields are grouped in the following categories:
 * `exported-fields-aushape`
 * `exported-fields-tlog`

+// * `exported-fields-rsyslog`

 // The following include statements pull in the module files that comprise
 // the assembly. Include any combination of concept, procedure, or reference
 // modules required to cover the user story. You can also include other
@@ -32,7 +34,7 @@ The fields are grouped in the following categories:

 include::modules/efk-logging-exported-fields-default.adoc[leveloffset=+1]

-include::modules/efk-logging-exported-fields-rsyslog.adoc[leveloffset=+1]
+//modules/efk-logging-exported-fields-rsyslog.adoc[leveloffset=+1]

 include::modules/efk-logging-exported-fields-systemd.adoc[leveloffset=+1]
@@ -8,7 +8,7 @@
 There are currently 5 different types of cluster logging components:

 * logStore - This is where the logs will be stored. The current implementation is Elasticsearch.
-* collection - This is the component that collects logs from the node, formats them, and stores them in the logStore, either Fluentd or Rsyslog.
+* collection - This is the component that collects logs from the node, formats them, and stores them in the logStore. The current implementation is Fluentd.
 * visualization - This is the UI component used to view logs, graphs, charts, and so forth. The current implementation is Kibana.
 * curation - This is the component that trims logs by age. The current implementation is Curator.
 * event routing - This is the component that forwards events to cluster logging. The current implementation is Event Router.
@@ -20,50 +20,6 @@ ifdef::openshift-dedicated[]
-----
-apiVersion: "logging.openshift.io/v1"
-kind: "ClusterLogging"
-metadata:
-  name: "instance"
-  namespace: "openshift-logging"
-spec:
-  managementState: "Managed"
-  logStore:
-    type: "elasticsearch"
-    elasticsearch:
-      nodeCount: 3
-      storage:
-        storageClassName: "gp2"
-        size: "200Gi"
-      redundancyPolicy: "SingleRedundancy"
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-      resources:
-        request:
-          memory: 8G
-  visualization:
-    type: "kibana"
-    kibana:
-      replicas: 1
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-  curation:
-    type: "curator"
-    curator:
-      schedule: "30 3 * * *"
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-  collection:
-    logs:
-      type: "fluentd"
-      fluentd: {}
-      nodeSelector:
-        node-role.kubernetes.io/worker: ""
-----
-endif::[]

 ifdef::openshift-enterprise,openshift-origin[]
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
   name: "instance"
   namespace: openshift-logging
@@ -12,7 +12,9 @@ The Event Router collects events and converts them into JSON format, which takes
 those events and pushes them to `STDOUT`. Fluentd indexes the events to the
 `.operations` index.

+////
 [NOTE]
 ====
 The Event Router is not supported for the Rsyslog log collector.
 ====
+////
@@ -5,7 +5,7 @@
 [id="efk-logging-about-fluentd_{context}"]
 = About the logging collector

-{product-title} can use Fluentd or Rsyslog to collect data about your cluster.
+{product-title} uses Fluentd to collect data about your cluster.

 The logging collector is deployed as a DaemonSet in {product-title} that deploys pods to each {product-title} node.
 `journald` is the system log source supplying log messages from the operating system, the container runtime, and {product-title}.
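The collector runs as one pod per node through a DaemonSet managed by the Cluster Logging Operator. As a rough, hypothetical sketch of such an object — names, labels, and mounts here are illustrative, not the operator's actual manifest:

[source,yaml]
----
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd                  # illustrative; the operator owns the real object
  namespace: openshift-logging
spec:
  selector:
    matchLabels:
      component: fluentd         # illustrative label
  template:
    metadata:
      labels:
        component: fluentd
    spec:
      containers:
      - name: fluentd
        image: registry.redhat.io/openshift4/ose-logging-fluentd:v4.2
        volumeMounts:
        - name: varlog
          mountPath: /var/log    # host logs, including the journal source noted above
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
----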
@@ -57,10 +57,6 @@ status: <1>
       - fluentd-6l2ff
       - fluentd-flpnn
       - fluentd-n2frh
-  rsyslogStatus:
-    Nodes: null
-    daemonSet: ""
-    pods: null
   curation: <3>
     curatorStatus:
     - cronJobs: curator
@@ -249,8 +245,4 @@ Status:
     Failed:
     Not Ready:
     Ready:
-  Rsyslog Status:
-    Nodes:       <nil>
-    Daemon Set:
-    Pods:        <nil>
 ----
@@ -12,25 +12,31 @@ defined in the *cluster-logging-operator* deployment in the *openshift-logging*
 You can view the images by running the following command:

 ----
-oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep _IMAGE
+$ oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep _IMAGE
 ----

 ----
 ELASTICSEARCH_IMAGE=registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2 <1>
 FLUENTD_IMAGE=registry.redhat.io/openshift4/ose-logging-fluentd:v4.2 <2>
 KIBANA_IMAGE=registry.redhat.io/openshift4/ose-logging-kibana5:v4.2 <3>
 CURATOR_IMAGE=registry.redhat.io/openshift4/ose-logging-curator5:v4.2 <4>
 OAUTH_PROXY_IMAGE=registry.redhat.io/openshift4/ose-oauth-proxy:v4.2 <5>
-RSYSLOG_IMAGE=registry.redhat.io/openshift4/ose-logging-rsyslog:v4.2 <6>
 ----
 <1> *ELASTICSEARCH_IMAGE* deploys Elasticsearch.
 <2> *FLUENTD_IMAGE* deploys Fluentd.
 <3> *KIBANA_IMAGE* deploys Kibana.
 <4> *CURATOR_IMAGE* deploys Curator.
 <5> *OAUTH_PROXY_IMAGE* defines OAUTH for OpenShift Container Platform.

+////
+RSYSLOG_IMAGE=registry.redhat.io/openshift4/ose-logging-rsyslog:v4.2 <6>
+<6> *RSYSLOG_IMAGE* deploys Rsyslog.

 [NOTE]
 ====
 The Rsyslog log collector is in Technology Preview.
 ====
+////

 The values might be different depending on your environment.
@@ -136,6 +136,7 @@ You can set the policy that defines how Elasticsearch shards are replicated acro
 * `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
 * `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.

+////
 Log collectors::
 You can select which log collector is deployed as a Daemonset to each node in the {product-title} cluster, either:
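The policy itself is set on the `logStore` stanza of the `ClusterLogging` custom resource, matching the full example earlier in this diff; a minimal sketch:

[source,yaml]
----
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: "SingleRedundancy"   # or "ZeroRedundancy", per the list above
----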
@@ -156,6 +157,7 @@ You can select which log collector is deployed as a Daemonset to each node in th
       memory:
   type: "fluentd"
 ----
+////

 Curator schedule::
 You specify the schedule for Curator in the [cron format](https://en.wikipedia.org/wiki/Cron).
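The schedule lands in the `curation` stanza of the `ClusterLogging` custom resource; a minimal sketch reusing the cron value from the example earlier in this diff:

[source,yaml]
----
spec:
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"   # minute hour day-of-month month day-of-week
----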
@@ -226,3 +228,4 @@ spec:
         cpu: 200m
         memory: 1Gi
 ----
+
@@ -9,10 +9,12 @@ Use the following steps to deploy Event Router into your cluster.

 The following Template object creates the Service Account, ClusterRole, and ClusterRoleBinding required for the Event Router.

+////
 [NOTE]
 ====
 The Event Router is not supported for the Rsyslog log collector.
 ====
+////

 .Prerequisites
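For orientation, a hypothetical sketch of the three RBAC objects such a template creates — the names and exact rules are illustrative, not the template's literal contents:

[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eventrouter              # hypothetical name
  namespace: openshift-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-reader             # hypothetical name
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch"]   # read-only access to cluster events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-reader-binding     # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-reader
subjects:
- kind: ServiceAccount
  name: eventrouter
  namespace: openshift-logging
----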
@@ -45,7 +45,7 @@ or normalizer.
 |The IP address V6 of the source server, if available.

 | `level`
-|The logging level as provided by `rsyslog` (severitytext property), python's
+|The logging level as provided by rsyslog (severitytext property), python's
 logging module. Possible values are as listed at
 link:http://sourceware.org/git/?p=glibc.git;a=blob;f=misc/sys/syslog.h;h=ee01478c4b19a954426a96448577c5a76e6647c0;hb=HEAD#l74[`misc/sys/syslog.h`]
 plus `trace` and `unknown`. For example, "alert crit debug emerg err info notice
@@ -77,7 +77,7 @@ out of it by the collector or normalizer, that is UTF-8 encoded.
 | `service`
 |The name of the service associated with the logging entity, if available. For
-example, the `syslog APP-NAME` and `rsyslog programname` property are mapped to
+example, the `syslog APP-NAME` property is mapped to
 the service field.

 | `tags`
@@ -28,7 +28,7 @@ an instance of Fluentd that you control and that is configured with the
 To direct logs to a specific Elasticsearch instance:

-. Edit the `fluentd` or `rsyslog` DaemonSet in the *openshift-logging* project:
+. Edit the `fluentd` DaemonSet in the *openshift-logging* project:
 +
 [source,yaml]
 ----
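The edit in this step points the collector at the external instance through environment variables on the daemonset. A minimal sketch, assuming `ES_HOST`/`ES_PORT`-style variable names — check the daemonset itself for the exact names:

[source,yaml]
----
spec:
  template:
    spec:
      containers:
      - name: fluentd
        env:
        - name: ES_HOST                        # assumed name: external Elasticsearch host
          value: "elasticsearch.example.com"   # illustrative host
        - name: ES_PORT                        # assumed name
          value: "9200"
----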
@@ -8,10 +8,12 @@
 Use the `fluent-plugin-remote-syslog` plug-in on the host to send logs to an
 external syslog server.

+////
 [NOTE]
 ====
 For Rsyslog, you can edit the Rsyslog ConfigMap to add support for Syslog log forwarding using the *omfwd* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html[omfwd: syslog Forwarding Output Module]. To send logs to a different Rsyslog instance, you can use the *omrelp* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omrelp.html[omrelp: RELP Output Module].
 ====
+////

 .Prerequisite
@@ -40,6 +40,8 @@ Alerts are in one of the following states:

 |===

+////
 .Rsyslog Prometheus alerts
 |===
 |Alert |Message |Description |Severity
@@ -66,4 +68,4 @@ Alerts are in one of the following states:

 |===

+////
@@ -8,6 +8,9 @@
 {product-title} cluster logging uses Fluentd by default.
 Log collectors are deployed as a DaemonSet to each node in the cluster.

+Currently, Fluentd is the only supported log collector, so you cannot change the log collector type.
+
+////
 You can change the logging collector to Rsyslog, if needed.

 [IMPORTANT]
@@ -51,7 +54,7 @@ nodeSpec:

   collection:
     logs:
-      type: "rsyslog" <1>
+      type: "fluentd" <1>
 ----
-<1> Set the log collector to `rsyslog` or `fluentd`.
+<1> Set the log collector to `fluentd`.
 ////
@@ -6,10 +6,9 @@
 = Configuring the logging collector using environment variables

 You can use environment variables to modify the
-configuration of the log collector, Fluentd or Rsyslog.
+configuration of the Fluentd log collector.

-See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in Github or the
-link:https://github.com/openshift/origin-aggregated-logging/blob/master/rsyslog/README.md[Rsyslog README] for lists of the
+See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in Github for lists of the
 available environment variables.

 .Prerequisite
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * logging/efk-logging-external.adoc
+// * logging/efk-logging-fluentd.adoc

 [id="efk-logging-fluentd-external_{context}"]
 = Configuring Fluentd to send logs to an external log aggregator
@@ -13,14 +13,18 @@ hosted Fluentd has processed them.
 ifdef::openshift-origin[]
 The `secure-forward` plug-in is provided with the Fluentd image as of v1.4.0.
 endif::openshift-origin[]

+////
 ifdef::openshift-enterprise[]
 The `secure-forward` plug-in is supported by Fluentd only.
 endif::openshift-enterprise[]
+////

+////
 [NOTE]
 ====
 For Rsyslog, you can edit the Rsyslog configmap to add support for Syslog log forwarding using the *omfwd* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html[omfwd: syslog Forwarding Output Module]. To send logs to a different Rsyslog instance, you can use the *omrelp* module, see link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omrelp.html[omrelp: RELP Output Module].
 ====
+////

 The logging deployment provides a `secure-forward.conf` section in the Fluentd configmap
 for configuring the external aggregator:
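That `secure-forward.conf` section lives in the collector's ConfigMap. A minimal sketch of where it sits, assuming the ConfigMap is named `fluentd` in the *openshift-logging* project; the plug-in settings themselves are elided:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd                 # assumed ConfigMap name
  namespace: openshift-logging
data:
  secure-forward.conf: |
    # <store>
    #   @type secure_forward
    #   ...host, port, and shared_key for the external aggregator...
    # </store>
----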
@@ -5,10 +5,10 @@
 [id="efk-logging-fluentd-json_{context}"]
 = Configuring log collection JSON parsing

-You can configure the log collector, Fluentd or Rsyslog, to determine if a log message is in *JSON* format and merge
+You can configure the Fluentd log collector to determine if a log message is in *JSON* format and merge
 the message into the JSON payload document posted to Elasticsearch. This feature is disabled by default.

-You can enable or disable this feature by editing the `MERGE_JSON_LOG` environment variable in the *fluentd* or *rsyslog* daemonset.
+You can enable or disable this feature by editing the `MERGE_JSON_LOG` environment variable in the *fluentd* daemonset.

 [IMPORTANT]
 ====
@@ -18,7 +18,7 @@ Enabling this feature comes with risks, including:
 * Potential buffer storage leak caused by rejected message cycling.
 * Overwrite of data for fields with the same name.

-The features in this topic should be used by only experienced Fluentd, Rsyslog, and Elasticsearch users.
+The features in this topic should be used by only experienced Fluentd and Elasticsearch users.
 ====

 .Prerequisites
@@ -32,12 +32,13 @@ Use the following command to enable this feature:
 ----
 oc set env ds/fluentd MERGE_JSON_LOG=true <1>
 ----
 <1> Set this to `false` to disable this feature or `true` to enable this feature.

+////
 ----
 oc set env ds/rsyslog MERGE_JSON_LOG=true <1>
 ----

 <1> Set this to `false` to disable this feature or `true` to enable this feature.
+////

 *Setting MERGE_JSON_LOG and CDM_UNDEFINED_TO_STRING*
@@ -46,7 +47,7 @@ If you set the `MERGE_JSON_LOG` and `CDM_UNDEFINED_TO_STRING` environment variabl
 When Fluentd rolls over the indices for the next day's logs, it will create a brand new index. The field definitions are updated and you will not get the *400* error.

 Records that have *hard* errors, such as schema violations, corrupted data, and so forth, cannot be retried. The log collector sends the records for error handling. If you link:https://docs.fluentd.org/v1.0/articles/config-file#@error-label[add a
-`<label @ERROR>` section] to your Fluentd or Rsyslog config, as the last <label>, you can handle these records as needed.
+`<label @ERROR>` section] to your Fluentd config, as the last <label>, you can handle these records as needed.

 For example:
@@ -5,7 +5,7 @@
 [id="efk-logging-fluentd-limits_{context}"]
 = Configure log collector CPU and memory limits

-The log collector, Fluentd or Rsyslog, allows for adjustments to both the CPU and memory limits.
+The log collector allows for adjustments to both the CPU and memory limits.

 .Procedure
@@ -40,6 +40,7 @@ spec:
 ----
 <1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.

+////
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -63,3 +64,4 @@ spec:
       memory: 358Mi
 ----
 <1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
+////
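Both the live example and the commented-out one edit the same place: the collector's `resources` block in the `ClusterLogging` custom resource. A minimal sketch of the resulting stanza — the cpu and memory values here are illustrative, not the documented defaults:

[source,yaml]
----
spec:
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources:
          limits:
            cpu: 250m        # illustrative values; see the module for the defaults
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 1Gi
----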
@@ -5,8 +5,7 @@
 [id="efk-logging-fluentd-log-location_{context}"]
 = Configuring the collected log location

-The log collector, Fluentd or Rsyslog, writes logs to a specified file or to the default location, `/var/log/fluentd/fluentd.log`
-or `/var/log/rsyslog/rsyslog.log`, based on the `LOGGING_FILE_PATH` environment variable.
+The log collector writes logs to a specified file or to the default location, `/var/log/fluentd/fluentd.log`, based on the `LOGGING_FILE_PATH` environment variable.

 .Prerequisite
@@ -16,7 +15,7 @@ Set cluster logging to the unmanaged state.

 To set the output location for the Fluentd logs:

-. Edit the `LOGGING_FILE_PATH` parameter in the `fluentd` or `rsyslog` daemonset. You can specify a particular file or `console`:
+. Edit the `LOGGING_FILE_PATH` parameter in the `fluentd` daemonset. You can specify a particular file or `console`:
 +
 ----
 spec:
@@ -37,8 +36,9 @@ Or, use the CLI:
 ----
 oc -n openshift-logging set env daemonset/fluentd LOGGING_FILE_PATH=/logs/fluentd.log
 ----
 +

+////
 ----
 oc -n openshift-logging set env daemonset/rsyslog LOGGING_FILE_PATH=/logs/rsyslog.log
 ----
+////
@@ -34,9 +34,9 @@ environment variables.
 |Parameter
 |Description

-| `LOGGING_FILE_SIZE` | The maximum size of of the fluentd.log file or the rsyslog.log file in Bytes. If the size of the *fluentd.log* file exceeds this value, {product-title} renames the log files and creates a new file. The default is 1024000 (1MB).
+| `LOGGING_FILE_SIZE` | The maximum size of the fluentd.log file in Bytes. If the size of the *fluentd.log* file exceeds this value, {product-title} renames the log files and creates a new file. The default is 1024000 (1MB).
 | `LOGGING_FILE_AGE` | The number of logs that the log collector retains before deleting. The default value is `10`.
-| `LOGGING_FILE_PATH` | The path to where the log collector writes logs. To output logs to STDOUT, set this variable to `console`. By default `/var/log/rsyslog/rsyslog.log` or `/var/log/fluentd/fluentd.log`
+| `LOGGING_FILE_PATH` | The path to where the log collector writes logs. To output logs to STDOUT, set this variable to `console`. By default `/var/log/fluentd/fluentd.log`.
 |===

 For example:
@@ -45,6 +45,7 @@ For example:
 $ oc set env daemonset/fluentd LOGGING_FILE_AGE=30 LOGGING_FILE_SIZE=1024000
 ----

+////
 ----
 $ oc set env daemonset/rsyslog LOGGING_FILE_AGE=30 LOGGING_FILE_SIZE=1024000
 ----
@@ -62,4 +63,4 @@ oc edit configmap logrotate-bin
 ----
 oc edit configmap logrotate-crontab
 ----

 ////
@@ -5,7 +5,7 @@
 [id="efk-logging-fluentd-pod-location_{context}"]
 = Viewing logging collector pods

-You can use the `oc get pods -o wide` command to see the nodes where the Fluentd or Rsyslog pods are deployed.
+You can use the `oc get pods -o wide` command to see the nodes where the Fluentd pods are deployed.

 .Procedure
@@ -23,6 +23,7 @@ fluentd-rsm49 1/1 Running 0 4m56s 10.129.0.37
 fluentd-wjt8s 1/1 Running 0 4m56s 10.130.0.42 ip-10-0-156-251.ec2.internal <none> <none>
 ----

+////
 ----
 $ oc get pods -o wide | grep rsyslog
 NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
@@ -33,5 +34,5 @@ rsyslog-cjmdp 1/1 Running 0 3m6s 10.129.2.16
 rsyslog-kqlzh 1/1 Running 0 3m6s 10.129.0.37 ip-10-0-141-243.ec2.internal <none> <none>
 rsyslog-nhshr 1/1 Running 0 3m6s 10.128.0.41 ip-10-0-143-38.ec2.internal <none> <none>
 ----

 ////
@@ -6,11 +6,11 @@
 = Throttling log collection

 For projects that are especially verbose, an administrator can throttle down the
-rate at which the logs are read in by the log collecotr, Fluentd or Rsyslog, before being processed. By throttling,
+rate at which the logs are read in by the log collector before being processed. By throttling,
 you deliberately slow down the rate at which you are reading logs,
 so Kibana might take longer to display records.

-Log throttling is not supported by Rsyslog.
+// Log throttling is not supported by Rsyslog.

 [WARNING]
 ====
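Throttling is configured per project in the collector's throttle configuration. A rough sketch, assuming a `throttle-config.yaml` key in the Fluentd ConfigMap as in earlier releases — the key names here are assumptions:

[source,yaml]
----
data:
  throttle-config.yaml: |
    my-verbose-project:        # hypothetical project name
      read_lines_limit: 10     # lines read per second from that project's logs
----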
@@ -25,7 +25,7 @@ For example, if you use the `MERGE_JSON_LOG` feature (`MERGE_JSON_LOG=true`), it
 * applications might emit too many fields;
 * fields may conflict with the cluster logging built-in fields.

-You can configure how cluster logging treats fields from disparate sources by editing the log collector daemonset, Fluentd or Rsyslog, and setting environment variables in the table below.
+You can configure how cluster logging treats fields from disparate sources by editing the Fluentd log collector daemonset and setting environment variables in the table below.

 // from https://github.com/ViaQ/fluent-plugin-viaq_data_model/commit/8b5ef11cedec4c372b2cb082afc7f9cc08473654
@@ -47,7 +47,7 @@ Converting to JSON string preserves the structure of the value, so that you can r
 [[default-fields]]
 The default top-level fields, defined through the `CDM_DEFAULT_KEEP_FIELDS` parameter, are `CEE`, `time`, `@timestamp`, `aushape`, `ci_job`, `collectd`, `docker`, `fedora-ci`,
 `file`, `foreman`, `geoip`, `hostname`, `ipaddr4`, `ipaddr6`, `kubernetes`, `level`, `message`, `namespace_name`, `namespace_uuid`,
-`offset`, `openstack`, `ovirt`, `pid`, `pipeline_metadata`, `rsyslog`, `service`, `systemd`, `tags`, `testcase`, `tlog`, `viaq_msg_id`.
+`offset`, `openstack`, `ovirt`, `pid`, `pipeline_metadata`, `service`, `systemd`, `tags`, `testcase`, `tlog`, `viaq_msg_id`.
 +
 Any fields not included in `${CDM_DEFAULT_KEEP_FIELDS}` or `${CDM_EXTRA_KEEP_FIELDS}` are moved to `${CDM_UNDEFINED_NAME}` if `CDM_USE_UNDEFINED` is `true`.
 +
@@ -103,7 +103,7 @@ If you set the `MERGE_JSON_LOG` parameter to `true`, see the Note below.

 [NOTE]
 ====
-If you set the `MERGE_JSON_LOG` parameter in the log collector daemonset and `CDM_UNDEFINED_TO_STRING` environment variables to true, you might receive an Elasticsearch *400* error.
+If you set the `MERGE_JSON_LOG` parameter in the Fluentd log collector daemonset and `CDM_UNDEFINED_TO_STRING` environment variables to true, you might receive an Elasticsearch *400* error.
 The error occurs because when `MERGE_JSON_LOG=true`, the log collector adds fields with data types other than string. When you set `CDM_UNDEFINED_TO_STRING=true`,
 the log collector attempts to add those fields as a string value resulting in the Elasticsearch 400 error. The error clears when the log collector rolls over
 the indices for the next day's logs.
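All of the `CDM_*` switches named above are ordinary environment variables on the collector daemonset; a minimal sketch of setting them, with illustrative extra field names:

[source,yaml]
----
spec:
  template:
    spec:
      containers:
      - name: fluentd
        env:
        - name: CDM_USE_UNDEFINED
          value: "true"                          # move unknown fields under CDM_UNDEFINED_NAME
        - name: CDM_UNDEFINED_NAME
          value: "undefined"
        - name: CDM_EXTRA_KEEP_FIELDS
          value: "custom_field1,custom_field2"   # illustrative extra top-level fields to keep
----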
@@ -3,7 +3,7 @@
 // * logging/efk-logging-deploy.adoc

 [id="efk-logging-fluentd-scaling_{context}"]
-= Scaling up systemd-journald
+= Scaling up systemd-journald

 As you scale up your project, the default logging environment might need some
 adjustments.
@@ -37,4 +37,3 @@ These settings account for the bursty nature of uploading in bulk.
 After removing the rate limit, you might see increased CPU utilization on the
 system logging daemons as they process any messages that would have previously
 been throttled.
@@ -19,15 +19,18 @@ To view cluster logs:

 . Select the `openshift-logging` project from the drop-down menu.

-. Click one of the logging collector pods, with the `fluentd` or `rsyslog` prefix.
+. Click one of the logging collector pods with the `fluentd` prefix.

 . Click *Logs*.

-By default, Fluentd reads logs from the tail, or end, of the log. Rsyslog reads from the head, or beginning, of the log.
++
+By default, Fluentd reads logs from the tail, or end, of the log.

+////
 Rsyslog reads from the head, or beginning, of the log.

 You can configure Rsyslog to display the end of the log by setting the `RSYSLOG_JOURNAL_READ_FROM_TAIL` parameter in the Rsyslog daemonset:
 +

 ----
 $ oc set env ds/rsyslog RSYSLOG_JOURNAL_READ_FROM_TAIL=true
 ----

+////
@@ -21,9 +21,11 @@ To view cluster logs:
 $ oc -n openshift-logging set env daemonset/fluentd --list | grep LOGGING_FILE_PATH
 ----
 +
+////
 ----
 $ oc -n openshift-logging set env daemonset/rsyslog --list | grep LOGGING_FILE_PATH
 ----
+////

 . Depending on the log location, execute the logging command:
 +
@@ -33,10 +35,6 @@ where the pod is located, to print out the contents of Fluentd log files:
 ----
 $ oc exec <any-fluentd-pod> -- logs <1>
 ----
-+
-----
-$ oc exec <any-rsyslog-pod> -- logs <1>
-----
 <1> Specify the name of a log collector pod. Note the space before `logs`.
 +
 For example:
@@ -45,9 +43,15 @@ For example:
 $ oc exec fluentd-ht42r -n openshift-logging -- logs
 ----
 +
+////
 ----
 $ oc exec <any-rsyslog-pod> -- logs <1>
 ----
 +
 ----
 $ oc exec rsyslog-ht42r -n openshift-logging -- logs
 ----
+////

 * If you are using `LOGGING_FILE_PATH=console`, the log collector writes logs to `stdout/stderr`.
 You can retrieve the logs with the `oc logs [-f] <pod_name>` command, where the `-f`
@@ -56,10 +60,6 @@ is optional.
 ----
 $ oc logs -f <any-fluentd-pod> -n openshift-logging <1>
 ----
-+
-----
-$ oc logs -f <any-rsyslog-pod> -n openshift-logging <1>
-----
 <1> Specify the name of a log collector pod. Use the `-f` option to follow what is being written into the logs.
 +
 For example:
@@ -68,17 +68,24 @@ For example
 $ oc logs -f fluentd-ht42r -n openshift-logging
 ----
 +
 The contents of log files are printed out.
 +
 By default, Fluentd reads logs from the tail, or end, of the log.
+
+////
 ----
 $ oc logs -f <any-rsyslog-pod> -n openshift-logging <1>
 ----
 +
 ----
 $ oc logs -f rsyslog-ht42r -n openshift-logging
 ----
 +
 The contents of log files are printed out.
 +
-By default, Fluentd reads logs from the tail, or end, of the log. Rsyslog reads from the head, or beginning, of the log.
+Rsyslog reads from the head, or beginning, of the log.
 +
 You can configure Rsyslog to display the end of the log by setting the `RSYSLOG_JOURNAL_READ_FROM_TAIL` parameter in the Rsyslog daemonset:
 +
 ----
 $ oc set env ds/rsyslog RSYSLOG_JOURNAL_READ_FROM_TAIL=true
 ----

 ////
@@ -40,8 +40,6 @@ spec:
     logs:
       fluentd:
         resources: null
-      rsyslog:
-        resources: null
       type: fluentd
   curation:
     curator: