mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OBSDOCS-152: Clean up unnecessary sections of logging docs

This commit is contained in:
Ashleigh Brennan
2023-11-09 15:01:01 -06:00
parent e5c8817065
commit 5848fc2227
33 changed files with 6 additions and 2966 deletions

View File

@@ -2503,26 +2503,6 @@ Topics:
  File: logging-5-7-release-notes
- Name: Support
  File: cluster-logging-support
- Name: Logging 5.7
  Dir: v5_7
  Distros: openshift-enterprise,openshift-origin
  Topics:
  - Name: Administering Logging
    File: logging-5-7-administration
#  - Name: Logging Reference
#    File: logging-5-7-reference
- Name: Logging 5.6
  Dir: v5_6
  Distros: openshift-enterprise,openshift-origin
  Topics:
  - Name: Administering Logging
    File: logging-5-6-administration
- Name: Logging 5.5
  Dir: v5_5
  Distros: openshift-enterprise,openshift-origin
  Topics:
  - Name: Administering Logging
    File: logging-5-5-administration
- Name: About Logging
  File: cluster-logging
- Name: Installing Logging

View File

@@ -1,30 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-collector.adoc
[id="cluster-logging-collector-envvar_{context}"]
= Configuring the logging collector using environment variables
You can use environment variables to modify the configuration of the Fluentd log
collector.
See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in GitHub for a list of the
available environment variables.
.Prerequisites
* Set OpenShift Logging to the unmanaged state. Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades.
.Procedure
Set any of the Fluentd environment variables as needed:
----
$ oc set env ds/fluentd <env-var>=<value>
----
For example:
----
$ oc set env ds/fluentd BUFFER_SIZE_LIMIT=24
----
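To review the environment variables that are currently set on the daemon set, you can list them. This is a quick check, assuming the daemon set is named `fluentd`, as in the examples above:
----
$ oc set env ds/fluentd --list
----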

View File

@@ -1,63 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-aushape_{context}"]
= Aushape exported fields
These are the Aushape fields exported by OpenShift Logging that are available for searching
in Elasticsearch and Kibana.
These fields represent audit events converted with Aushape. For more information, see
link:https://github.com/Scribery/aushape[Aushape].
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `aushape.serial`
|Audit event serial number.
| `aushape.node`
|Name of the host where the audit event occurred.
| `aushape.error`
|The error Aushape encountered while converting the event.
| `aushape.trimmed`
|An array of JSONPath expressions, relative to the event object, specifying
objects or arrays whose content was removed as the result of event size limiting.
An empty string means the content was removed from the event as a whole, and an empty array means
the trimming removed unspecified objects and arrays.
| `aushape.text`
|An array of log record strings representing the original audit event.
|===
[discrete]
[id="exported-fields-aushape.data_{context}"]
=== `aushape.data` Fields
Parsed audit event data related to Aushape.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `aushape.data.avc`
|type: nested
| `aushape.data.execve`
|type: string
| `aushape.data.netfilter_cfg`
|type: nested
| `aushape.data.obj_pid`
|type: nested
| `aushape.data.path`
|type: nested
|===
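For illustration, a hypothetical record carrying a few of these fields might look like the following abbreviated example; all values are invented:
[source,json]
----
{
  "aushape": {
    "serial": 1407,
    "node": "node-1.example.com",
    "error": "",
    "trimmed": [],
    "data": {
      "execve": "\"/usr/bin/id\""
    }
  }
}
----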

View File

@@ -1,89 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-container_{context}"]
= Container exported fields
These are the Docker fields exported by OpenShift Logging that are available for searching in Elasticsearch and Kibana.
These fields form the namespace for Docker container-specific metadata. The `docker.container_id` field is the Docker container ID.
[discrete]
[id="exported-fields-pipeline_metadata.collector_{context}"]
=== `pipeline_metadata.collector` Fields
This section contains metadata specific to the collector.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `pipeline_metadata.collector.hostname`
|FQDN of the collector. It might be different from the FQDN of the actual emitter
of the logs.
| `pipeline_metadata.collector.name`
|Name of the collector.
| `pipeline_metadata.collector.version`
|Version of the collector.
| `pipeline_metadata.collector.ipaddr4`
|IPv4 address of the collector server. Can be an array.
| `pipeline_metadata.collector.ipaddr6`
|IPv6 address of the collector server. Can be an array.
| `pipeline_metadata.collector.inputname`
|How the log message was received by the collector, whether it was TCP/UDP or
imjournal/imfile.
| `pipeline_metadata.collector.received_at`
|Time when the message was received by the collector.
| `pipeline_metadata.collector.original_raw_message`
|The original non-parsed log message, collected by the collector or as close to the
source as possible.
|===
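As an illustration, the collector metadata on a single record might look like the following hypothetical, abbreviated example; all values, including the collector version, are invented:
[source,json]
----
{
  "pipeline_metadata": {
    "collector": {
      "hostname": "fluentd-2cbqv.example.com",
      "name": "fluentd",
      "version": "1.7.4",
      "ipaddr4": "10.128.2.12",
      "inputname": "imjournal",
      "received_at": "2023-11-09T15:01:01.000000Z"
    }
  }
}
----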
[discrete]
[id="exported-fields-pipeline_metadata.normalizer_{context}"]
=== `pipeline_metadata.normalizer` Fields
This section contains metadata specific to the normalizer.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `pipeline_metadata.normalizer.hostname`
|FQDN of the normalizer.
| `pipeline_metadata.normalizer.name`
|Name of the normalizer.
| `pipeline_metadata.normalizer.version`
|Version of the normalizer.
| `pipeline_metadata.normalizer.ipaddr4`
|IPv4 address of the normalizer server. Can be an array.
| `pipeline_metadata.normalizer.ipaddr6`
|IPv6 address of the normalizer server. Can be an array.
| `pipeline_metadata.normalizer.inputname`
|How the log message was received by the normalizer, whether it was TCP/UDP.
| `pipeline_metadata.normalizer.received_at`
|Time when the message was received by the normalizer.
| `pipeline_metadata.normalizer.original_raw_message`
|The original non-parsed log message as it is received by the normalizer.
| `pipeline_metadata.trace`
|This field records the trace of the message. Each collector and normalizer appends
information about itself and the date and time when the message was processed.
|===

View File

@@ -1,83 +0,0 @@
[id="cluster-logging-exported-fields-kubernetes_{context}"]
= Kubernetes
The following fields can be present in the namespace for Kubernetes-specific metadata.
== kubernetes.pod_name
The name of the pod.
[horizontal]
Data type:: keyword
== kubernetes.pod_id
Kubernetes ID of the pod.
[horizontal]
Data type:: keyword
== kubernetes.namespace_name
The name of the namespace in Kubernetes.
[horizontal]
Data type:: keyword
== kubernetes.namespace_id
ID of the namespace in Kubernetes.
[horizontal]
Data type:: keyword
== kubernetes.host
The name of the Kubernetes node.
[horizontal]
Data type:: keyword
== kubernetes.master_url
The Kubernetes master URL.
[horizontal]
Data type:: keyword
== kubernetes.container_name
The name of the container in Kubernetes.
[horizontal]
Data type:: text
== kubernetes.annotations
Annotations associated with the Kubernetes object.
[horizontal]
Data type:: group
== kubernetes.labels
Labels attached to the Kubernetes object. Each label name is a subfield of the `labels` field, and each label name is de-dotted: dots in the name are replaced with underscores.
[horizontal]
Data type:: group
== kubernetes.event
The Kubernetes event obtained from the Kubernetes master API. The event is already a JSON object and is nested as a whole under the `kubernetes` field. This description should loosely follow `type Event` in https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#event-v1-core.
[horizontal]
Data type:: group
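For illustration, the `kubernetes` namespace of a record for a container log might look like the following hypothetical, abbreviated example; all values are invented, and the label `app.kubernetes.io/name` is shown de-dotted:
[source,json]
----
{
  "kubernetes": {
    "pod_name": "logging-demo-6c8b7f9d4-abcde",
    "namespace_name": "my-project",
    "container_name": "app",
    "host": "worker-0.example.com",
    "labels": {
      "app_kubernetes_io/name": "logging-demo"
    }
  }
}
----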

View File

@@ -1,34 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-rsyslog_{context}"]
= `rsyslog` exported fields
These are the `rsyslog` fields exported by the logging system that are available for searching
in Elasticsearch and Kibana.
The following fields are RFC 5424-based metadata.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `rsyslog.facility`
|The `syslog` facility. See the `syslog` specification for more information.
| `rsyslog.protocol-version`
|This is the `rsyslog` protocol version.
| `rsyslog.structured-data`
|The `syslog` structured data. See the `syslog` specification for more information.
| `rsyslog.msgid`
|This is the `syslog` msgid field.
| `rsyslog.appname`
|If `app-name` is the same as `programname`, only the top-level `service` field is filled.
If `app-name` is not equal to `programname`, this field holds `app-name`.
See the `syslog` specification for more information.
|===
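As an illustration, consider a hypothetical RFC 5424 message such as the following, modeled on the example in the RFC itself. Its `APP-NAME` (`myapp`), `MSGID` (`ID47`), and structured-data element would populate `rsyslog.appname`, `rsyslog.msgid`, and `rsyslog.structured-data`, respectively:
----
<165>1 2023-11-09T15:01:01.003Z host1.example.com myapp 1234 ID47 [exampleSDID@32473 iut="3"] An application event
----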

View File

@@ -1,195 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-systemd_{context}"]
= systemd exported fields
These are the `systemd` fields exported by OpenShift Logging that are available for searching
in Elasticsearch and Kibana.
This namespace contains common fields specific to the `systemd` journal.
link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html[Applications]
can write their own fields to the journal. These will be available under the
`systemd.u` namespace. `RESULT` and `UNIT` are two such fields.
[discrete]
[id="exported-fields-systemd.k_{context}"]
=== `systemd.k` Fields
The following table contains `systemd` kernel-specific metadata.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `systemd.k.KERNEL_DEVICE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_KERNEL_DEVICE=[`systemd.k.KERNEL_DEVICE`]
is the kernel device name.
| `systemd.k.KERNEL_SUBSYSTEM`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_KERNEL_SUBSYSTEM=[`systemd.k.KERNEL_SUBSYSTEM`]
is the kernel subsystem name.
| `systemd.k.UDEV_DEVLINK`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_UDEV_DEVLINK=[`systemd.k.UDEV_DEVLINK`]
includes additional symlink names that point to the node.
| `systemd.k.UDEV_DEVNODE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_UDEV_DEVNODE=[`systemd.k.UDEV_DEVNODE`]
is the node path of the device.
| `systemd.k.UDEV_SYSNAME`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_UDEV_SYSNAME=[ `systemd.k.UDEV_SYSNAME`]
is the kernel device name.
|===
[discrete]
[id="exported-fields-systemd.t_{context}"]
=== `systemd.t` Fields
`systemd.t` fields are trusted journal fields: fields that are implicitly added
by the journal and cannot be altered by client code.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `systemd.t.AUDIT_LOGINUID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_AUDIT_SESSION=[`systemd.t.AUDIT_LOGINUID`]
is the user ID for the journal entry process.
| `systemd.t.BOOT_ID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_BOOT_ID=[`systemd.t.BOOT_ID`]
is the kernel boot ID.
| `systemd.t.AUDIT_SESSION`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_AUDIT_SESSION=[`systemd.t.AUDIT_SESSION`]
is the session for the journal entry process.
| `systemd.t.CAP_EFFECTIVE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_CAP_EFFECTIVE=[`systemd.t.CAP_EFFECTIVE`]
represents the capabilities of the journal entry process.
| `systemd.t.CMDLINE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_COMM=[`systemd.t.CMDLINE`]
is the command line of the journal entry process.
| `systemd.t.COMM`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_COMM=[`systemd.t.COMM`]
is the name of the journal entry process.
| `systemd.t.EXE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_COMM=[`systemd.t.EXE`]
is the executable path of the journal entry process.
| `systemd.t.GID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_PID=[`systemd.t.GID`]
is the group ID for the journal entry process.
| `systemd.t.HOSTNAME`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_HOSTNAME=[`systemd.t.HOSTNAME`]
is the name of the host.
| `systemd.t.MACHINE_ID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_MACHINE_ID=[`systemd.t.MACHINE_ID`]
is the machine ID of the host.
| `systemd.t.PID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_PID=[`systemd.t.PID`]
is the process ID for the journal entry process.
| `systemd.t.SELINUX_CONTEXT`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SELINUX_CONTEXT=[`systemd.t.SELINUX_CONTEXT`]
is the security context, or label, for the journal entry process.
| `systemd.t.SOURCE_REALTIME_TIMESTAMP`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SOURCE_REALTIME_TIMESTAMP=[`systemd.t.SOURCE_REALTIME_TIMESTAMP`]
is the earliest and most reliable timestamp of the message. This is converted to RFC 3339 NS format.
| `systemd.t.SYSTEMD_CGROUP`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SYSTEMD_CGROUP=[`systemd.t.SYSTEMD_CGROUP`]
is the `systemd` control group path.
| `systemd.t.SYSTEMD_OWNER_UID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SYSTEMD_CGROUP=[`systemd.t.SYSTEMD_OWNER_UID`]
is the owner ID of the session.
| `systemd.t.SYSTEMD_SESSION`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SYSTEMD_CGROUP=[`systemd.t.SYSTEMD_SESSION`],
if applicable, is the `systemd` session ID.
| `systemd.t.SYSTEMD_SLICE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SYSTEMD_CGROUP=[`systemd.t.SYSTEMD_SLICE`]
is the slice unit of the journal entry process.
| `systemd.t.SYSTEMD_UNIT`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SYSTEMD_CGROUP=[`systemd.t.SYSTEMD_UNIT`]
is the unit name for a session.
| `systemd.t.SYSTEMD_USER_UNIT`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_SYSTEMD_CGROUP=[`systemd.t.SYSTEMD_USER_UNIT`],
if applicable, is the user unit name for a session.
| `systemd.t.TRANSPORT`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_TRANSPORT=[`systemd.t.TRANSPORT`]
is the method of entry by the journal service. This includes `audit`, `driver`,
`syslog`, `journal`, `stdout`, and `kernel`.
| `systemd.t.UID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#_PID=[`systemd.t.UID`]
is the user ID for the journal entry process.
| `systemd.t.SYSLOG_FACILITY`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#SYSLOG_FACILITY=[`systemd.t.SYSLOG_FACILITY`]
is the field containing the facility, formatted as a decimal string, for `syslog`.
| `systemd.t.SYSLOG_IDENTIFIER`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#SYSLOG_FACILITY=[`systemd.t.SYSLOG_IDENTIFIER`]
is the identifier for `syslog`.
| `systemd.t.SYSLOG_PID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#SYSLOG_FACILITY=[`systemd.t.SYSLOG_PID`]
is the client process ID for `syslog`.
|===
[discrete]
[id="exported-fields-systemd.u_{context}"]
=== `systemd.u` Fields
`systemd.u` fields are passed directly from clients and stored in the journal.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `systemd.u.CODE_FILE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#CODE_FILE=[`systemd.u.CODE_FILE`]
is the code location containing the filename of the source.
| `systemd.u.CODE_FUNCTION`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#CODE_FILE=[`systemd.u.CODE_FUNCTION`]
is the code location containing the function of the source.
| `systemd.u.CODE_LINE`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#CODE_FILE=[`systemd.u.CODE_LINE`]
is the code location containing the line number of the source.
| `systemd.u.ERRNO`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#ERRNO=[`systemd.u.ERRNO`],
if present, is the low-level error number formatted as a decimal string.
| `systemd.u.MESSAGE_ID`
|link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html#MESSAGE_ID=[`systemd.u.MESSAGE_ID`]
is the message identifier for recognizing message types.
| `systemd.u.RESULT`
|For private use only.
| `systemd.u.UNIT`
|For private use only.
|===
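To inspect the journal fields that exist on a node before they are exported, you can view an entry in verbose output directly on the node. The field names that `journalctl` prints correspond to the `systemd.t` and `systemd.u` parameters above, without the namespace prefix:
----
$ journalctl -o verbose -n 1
----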

View File

@@ -1,51 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-tlog_{context}"]
= Tlog exported fields
These are the Tlog fields exported by the OpenShift Logging system that are available for searching
in Elasticsearch and Kibana.
These fields represent Tlog terminal I/O recording messages. For more information, see
link:https://github.com/Scribery/tlog[Tlog].
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `tlog.ver`
|Message format version number.
| `tlog.user`
|Recorded user name.
| `tlog.term`
|Terminal type name.
| `tlog.session`
|Audit session ID of the recorded session.
| `tlog.id`
|ID of the message within the session.
| `tlog.pos`
|Message position in the session, in milliseconds.
| `tlog.timing`
|Distribution of this message's events in time.
| `tlog.in_txt`
|Input text with invalid characters scrubbed.
| `tlog.in_bin`
|Scrubbed invalid input characters as bytes.
| `tlog.out_txt`
|Output text with invalid characters scrubbed.
| `tlog.out_bin`
|Scrubbed invalid output characters as bytes.
|===
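For illustration, a single Tlog message carrying these fields might look like the following hypothetical, abbreviated example; all values are invented:
[source,json]
----
{
  "tlog": {
    "ver": "2.2",
    "user": "admin",
    "term": "xterm-256color",
    "session": 7,
    "id": 1,
    "pos": 152,
    "out_txt": "$ "
  }
}
----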

View File

@@ -26,6 +26,8 @@ include::snippets/logging-fluentd-dep-snip.adoc[]
include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]
include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
[id="cluster-logging-deploying-es-operator"]
== Installing the Elasticsearch Operator
@@ -38,12 +40,7 @@ include::modules/logging-install-es-operator.adoc[leveloffset=+2]
* xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-operators-from-operatorhub_olm-adding-operators-to-a-cluster[Installing Operators from the OperatorHub]
* xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-log-store[Removing unused components if you do not use the default Elasticsearch log store]
== Postinstallation tasks
If your network plugin enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
[id="cluster-logging-deploying-postinstallation"]
== Postinstallation tasks
If your network plugin enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].

View File

@@ -14,6 +14,9 @@ include::modules/loki-deployment-sizing.adoc[leveloffset=+1]
//include::modules/cluster-logging-loki-deploy.adoc[leveloffset=+1]
//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]
include::modules/logging-creating-new-group-cluster-admin-user-role.adoc[leveloffset=+1]
include::modules/logging-loki-gui-install.adoc[leveloffset=+1]
@@ -35,7 +38,6 @@ include::modules/logging-loki-reliability-hardening.adoc[leveloffset=+1]
* xref:../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity[Placing pods relative to other pods using affinity and anti-affinity rules]
include::modules/logging-loki-zone-aware-rep.adoc[leveloffset=+1]
include::modules/logging-loki-zone-fail-recovery.adoc[leveloffset=+2]

View File

@@ -1 +0,0 @@
../../_attributes/

View File

@@ -1 +0,0 @@
../../images/

View File

@@ -1,13 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="logging-administration-5-5"]
= Administering your logging deployment
include::_attributes/common-attributes.adoc[]
:context: logging-5.5-administration
toc::[]
//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]
//Generic installing operators from operator hub using CLI
include::modules/olm-installing-from-operatorhub-using-cli.adoc[leveloffset=+1]

View File

@@ -1 +0,0 @@
../../modules/

View File

@@ -1 +0,0 @@
../../snippets/

View File

@@ -1 +0,0 @@
../../_attributes/

View File

@@ -1 +0,0 @@
../../images/

View File

@@ -1,13 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="logging-administration-5-6"]
= Administering your logging deployment
include::_attributes/common-attributes.adoc[]
:context: logging-5.6-administration
toc::[]
//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]
//Generic installing operators from operator hub using CLI
include::modules/olm-installing-from-operatorhub-using-cli.adoc[leveloffset=+1]

View File

@@ -1 +0,0 @@
../../modules/

View File

@@ -1 +0,0 @@
../../snippets/

View File

@@ -1 +0,0 @@
../../_attributes/

View File

@@ -1 +0,0 @@
../../images/

View File

@@ -1,13 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="logging-administration-5-7"]
= Administering your logging deployment
include::_attributes/common-attributes.adoc[]
:context: logging-5.7-administration
toc::[]
//Installing the Loki Operator via webconsole
include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]
//Generic installing operators from operator hub using CLI
include::modules/olm-installing-from-operatorhub-using-cli.adoc[leveloffset=+1]

View File

@@ -1,7 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="logging-reference-5-7"]
= Logging References
include::_attributes/common-attributes.adoc[]
:context: logging-5.7-reference
toc::[]

View File

@@ -1 +0,0 @@
../../modules/

View File

@@ -1 +0,0 @@
../../snippets/

View File

@@ -1,60 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-elasticsearch.adoc
[id="cluster-logging-configuring-node-selector_{context}"]
= Specifying a node for OpenShift Logging components using node selectors
Each component specification allows the component to target a specific node.
.Procedure
. Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
+
----
$ oc edit ClusterLogging instance
----
+
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "nodeselector"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeSelector: <1>
        logging: es
      nodeCount: 3
      resources:
        limits:
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage:
        size: "20G"
        storageClassName: "gp2"
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      nodeSelector: <2>
        logging: kibana
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd:
        nodeSelector: <3>
          logging: fluentd
----
<1> Node selector for Elasticsearch.
<2> Node selector for Kibana.
<3> Node selector for Fluentd.
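Node selectors only match nodes that carry the corresponding label, so label the target nodes before, or shortly after, updating the CR. For example, the following sketch applies the label used by the Elasticsearch selector above, where `<node_name>` is a placeholder for one of your nodes:
----
$ oc label nodes <node_name> logging=es
----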

View File

@@ -1,51 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-elasticsearch.adoc
[id="cluster-logging-elasticsearch-admin_{context}"]
= Performing administrative Elasticsearch operations
An administrator certificate, key, and CA that can be used to communicate with and perform
administrative operations on Elasticsearch are provided within the
*elasticsearch* secret in the `openshift-logging` project.
[NOTE]
====
To confirm whether your OpenShift Logging installation provides these, run:
----
$ oc describe secret elasticsearch -n openshift-logging
----
====
. Connect to an Elasticsearch pod that is in the cluster on which you are
attempting to perform maintenance.
. To find a pod in the cluster, use:
+
----
$ oc get pods -l component=elasticsearch -o name -n openshift-logging | head -1
----
. Connect to a pod:
+
----
$ oc rsh <your_Elasticsearch_pod>
----
. Once connected to an Elasticsearch container, you can use the certificates
mounted from the secret to communicate with Elasticsearch per its
link:https://www.elastic.co/guide/en/elasticsearch/reference/2.3/indices.html[Indices APIs documentation].
+
Fluentd sends its logs to Elasticsearch using the index format *project.{project_name}.{project_uuid}.YYYY.MM.DD*
where YYYY.MM.DD is the date of the log record.
+
For example, to delete all logs for the *openshift-logging* project with uid *664360-11e9-92d0-0eb4e1b4a396*
from March 10, 2019, run:
+
----
$ curl --key /etc/elasticsearch/secret/admin-key \
--cert /etc/elasticsearch/secret/admin-cert \
--cacert /etc/elasticsearch/secret/admin-ca -XDELETE \
"https://localhost:9200/project.openshift-logging.664360-11e9-92d0-0eb4e1b4a396.2019.03.10"
----
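Before running a destructive operation such as the `DELETE` above, it can be useful to verify that the mounted credentials work with a read-only request. For example, the following check queries the cluster health endpoint:
----
$ curl --key /etc/elasticsearch/secret/admin-key \
--cert /etc/elasticsearch/secret/admin-cert \
--cacert /etc/elasticsearch/secret/admin-ca \
"https://localhost:9200/_cat/health?v"
----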

View File

@@ -1,993 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-collectd_{context}"]
= `collectd` exported fields
These are the `collectd` and `collectd-*` fields exported by the logging system that are available for searching
in Elasticsearch and Kibana.
[discrete]
[id="exported-fields-collectd_{context}"]
=== `collectd` Fields
The following fields represent namespace metrics metadata.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.interval`
|type: float
The `collectd` interval.
| `collectd.plugin`
|type: string
The `collectd` plug-in.
| `collectd.plugin_instance`
|type: string
The `collectd` `plugin_instance`.
| `collectd.type_instance`
|type: string
The `collectd` `type_instance`.
| `collectd.type`
|type: string
The `collectd` type.
| `collectd.dstypes`
|type: string
The `collectd` `dstypes`.
|===
[discrete]
[id="exported-fields-collectd.processes_{context}"]
=== `collectd.processes` Fields
The following field corresponds to the `collectd` processes plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.processes.ps_state`
|type: integer
The `collectd` `ps_state` type of processes plug-in.
|===
[discrete]
[id="exported-fields-collectd.processes.ps_disk_ops_{context}"]
=== `collectd.processes.ps_disk_ops` Fields
The `collectd` `ps_disk_ops` type of processes plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.processes.ps_disk_ops.read`
|type: float
`TODO`
| `collectd.processes.ps_disk_ops.write`
|type: float
`TODO`
| `collectd.processes.ps_vm`
|type: integer
The `collectd` `ps_vm` type of processes plug-in.
| `collectd.processes.ps_rss`
|type: integer
The `collectd` `ps_rss` type of processes plug-in.
| `collectd.processes.ps_data`
|type: integer
The `collectd` `ps_data` type of processes plug-in.
| `collectd.processes.ps_code`
|type: integer
The `collectd` `ps_code` type of processes plug-in.
| `collectd.processes.ps_stacksize`
| type: integer
The `collectd` `ps_stacksize` type of processes plug-in.
|===
[discrete]
[id="exported-fields-collectd.processes.ps_cputime_{context}"]
=== `collectd.processes.ps_cputime` Fields
The `collectd` `ps_cputime` type of processes plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.processes.ps_cputime.user`
|type: float
`TODO`
| `collectd.processes.ps_cputime.syst`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.processes.ps_count_{context}"]
=== `collectd.processes.ps_count` Fields
The `collectd` `ps_count` type of processes plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.processes.ps_count.processes`
|type: integer
`TODO`
| `collectd.processes.ps_count.threads`
|type: integer
`TODO`
|===
[discrete]
[id="exported-fields-collectd.processes.ps_pagefaults_{context}"]
=== `collectd.processes.ps_pagefaults` Fields
The `collectd` `ps_pagefaults` type of processes plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.processes.ps_pagefaults.majflt`
|type: float
`TODO`
| `collectd.processes.ps_pagefaults.minflt`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.processes.ps_disk_octets_{context}"]
=== `collectd.processes.ps_disk_octets` Fields
The `collectd` `ps_disk_octets` type of processes plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.processes.ps_disk_octets.read`
|type: float
`TODO`
| `collectd.processes.ps_disk_octets.write`
|type: float
`TODO`
| `collectd.processes.fork_rate`
|type: float
The `collectd` `fork_rate` type of processes plug-in.
|===
[discrete]
[id="exported-fields-collectd.disk_{context}"]
=== `collectd.disk` Fields
Corresponds to `collectd` disk plug-in.
[discrete]
[id="exported-fields-collectd.disk.disk_merged_{context}"]
=== `collectd.disk.disk_merged` Fields
The `collectd` `disk_merged` type of disk plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.disk.disk_merged.read`
|type: float
`TODO`
| `collectd.disk.disk_merged.write`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.disk.disk_octets_{context}"]
=== `collectd.disk.disk_octets` Fields
The `collectd` `disk_octets` type of disk plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.disk.disk_octets.read`
|type: float
`TODO`
| `collectd.disk.disk_octets.write`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.disk.disk_time_{context}"]
=== `collectd.disk.disk_time` Fields
The `collectd` `disk_time` type of disk plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.disk.disk_time.read`
|type: float
`TODO`
| `collectd.disk.disk_time.write`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.disk.disk_ops_{context}"]
=== `collectd.disk.disk_ops` Fields
The `collectd` `disk_ops` type of disk plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.disk.disk_ops.read`
|type: float
`TODO`
| `collectd.disk.disk_ops.write`
|type: float
`TODO`
| `collectd.disk.pending_operations`
|type: integer
The `collectd` `pending_operations` type of disk plug-in.
|===
[discrete]
[id="exported-fields-collectd.disk.disk_io_time_{context}"]
=== `collectd.disk.disk_io_time` Fields
The `collectd` `disk_io_time` type of disk plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.disk.disk_io_time.io_time`
|type: float
`TODO`
| `collectd.disk.disk_io_time.weighted_io_time`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.interface_{context}"]
=== `collectd.interface` Fields
Corresponds to the `collectd` interface plug-in.
[discrete]
[id="exported-fields-collectd.interface.if_octets_{context}"]
=== `collectd.interface.if_octets` Fields
The `collectd` `if_octets` type of interface plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.interface.if_octets.rx`
|type: float
`TODO`
| `collectd.interface.if_octets.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.interface.if_packets_{context}"]
=== `collectd.interface.if_packets` Fields
The `collectd` `if_packets` type of interface plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.interface.if_packets.rx`
|type: float
`TODO`
| `collectd.interface.if_packets.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.interface.if_errors_{context}"]
=== `collectd.interface.if_errors` Fields
The `collectd` `if_errors` type of interface plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.interface.if_errors.rx`
|type: float
`TODO`
| `collectd.interface.if_errors.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.interface.if_dropped_{context}"]
=== `collectd.interface.if_dropped` Fields
The `collectd` `if_dropped` type of interface plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.interface.if_dropped.rx`
|type: float
`TODO`
| `collectd.interface.if_dropped.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.virt_{context}"]
=== `collectd.virt` Fields
Corresponds to `collectd` virt plug-in.
[discrete]
[id="exported-fields-collectd.virt.if_octets_{context}"]
=== `collectd.virt.if_octets` Fields
The `collectd` `if_octets` type of virt plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.virt.if_octets.rx`
|type: float
`TODO`
| `collectd.virt.if_octets.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.virt.if_packets_{context}"]
=== `collectd.virt.if_packets` Fields
The `collectd` `if_packets` type of virt plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.virt.if_packets.rx`
|type: float
`TODO`
| `collectd.virt.if_packets.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.virt.if_errors_{context}"]
=== `collectd.virt.if_errors` Fields
The `collectd` `if_errors` type of virt plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.virt.if_errors.rx`
|type: float
`TODO`
| `collectd.virt.if_errors.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.virt.if_dropped_{context}"]
=== `collectd.virt.if_dropped` Fields
The `collectd` `if_dropped` type of virt plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.virt.if_dropped.rx`
|type: float
`TODO`
| `collectd.virt.if_dropped.tx`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.virt.disk_ops_{context}"]
=== `collectd.virt.disk_ops` Fields
The `collectd` `disk_ops` type of virt plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.virt.disk_ops.read`
|type: float
`TODO`
| `collectd.virt.disk_ops.write`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.virt.disk_octets_{context}"]
=== `collectd.virt.disk_octets` Fields
The `collectd` `disk_octets` type of virt plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.virt.disk_octets.read`
|type: float
`TODO`
| `collectd.virt.disk_octets.write`
|type: float
`TODO`
| `collectd.virt.memory`
|type: float
The `collectd` memory type of virt plug-in.
| `collectd.virt.virt_vcpu`
|type: float
The `collectd` `virt_vcpu` type of virt plug-in.
| `collectd.virt.virt_cpu_total`
|type: float
The `collectd` `virt_cpu_total` type of virt plug-in.
|===
[discrete]
[id="exported-fields-collectd.CPU_{context}"]
=== `collectd.CPU` Fields
Corresponds to the `collectd` CPU plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.CPU.percent`
|type: float
The `collectd` type `percent` of plug-in CPU.
|===
[discrete]
[id="exported-fields-collectd.df_{context}"]
=== `collectd.df` Fields
Corresponds to the `collectd` `df` plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.df.df_complex`
|type: float
The `collectd` type `df_complex` of plug-in `df`.
| `collectd.df.percent_bytes`
|type: float
The `collectd` type `percent_bytes` of plug-in `df`.
|===
[discrete]
[id="exported-fields-collectd.entropy_{context}"]
=== `collectd.entropy` Fields
Corresponds to the `collectd` entropy plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.entropy.entropy`
|type: integer
The `collectd` entropy type of entropy plug-in.
|===
////
[discrete]
[id="exported-fields-collectd.nfs_{context}"]
=== `collectd.nfs` Fields
Corresponds to the `collectd` NFS plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.nfs.nfs_procedure`
|type: integer
The `collectd` `nfs_procedure` type of nfs plug-in.
|===
////
[discrete]
[id="exported-fields-collectd.memory_{context}"]
=== `collectd.memory` Fields
Corresponds to the `collectd` memory plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.memory.memory`
|type: float
The `collectd` memory type of memory plug-in.
| `collectd.memory.percent`
|type: float
The `collectd` percent type of memory plug-in.
|===
[discrete]
[id="exported-fields-collectd.swap_{context}"]
=== `collectd.swap` Fields
Corresponds to the `collectd` swap plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.swap.swap`
|type: integer
The `collectd` swap type of swap plug-in.
| `collectd.swap.swap_io`
|type: integer
The `collectd` `swap_io` type of swap plug-in.
|===
[discrete]
[id="exported-fields-collectd.load_{context}"]
=== `collectd.load` Fields
Corresponds to the `collectd` load plug-in.
[discrete]
[id="exported-fields-collectd.load.load_{context}"]
=== `collectd.load.load` Fields
The `collectd` load type of load plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.load.load.shortterm`
|type: float
`TODO`
| `collectd.load.load.midterm`
|type: float
`TODO`
| `collectd.load.load.longterm`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.aggregation_{context}"]
=== `collectd.aggregation` Fields
Corresponds to `collectd` aggregation plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.aggregation.percent`
|type: float
`TODO`
|===
[discrete]
[id="exported-fields-collectd.statsd_{context}"]
=== `collectd.statsd` Fields
Corresponds to `collectd` `statsd` plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.statsd.host_cpu`
|type: integer
The `collectd` CPU type of `statsd` plug-in.
| `collectd.statsd.host_elapsed_time`
|type: integer
The `collectd` `elapsed_time` type of `statsd` plug-in.
| `collectd.statsd.host_memory`
|type: integer
The `collectd` memory type of `statsd` plug-in.
| `collectd.statsd.host_nic_speed`
|type: integer
The `collectd` `nic_speed` type of `statsd` plug-in.
| `collectd.statsd.host_nic_rx`
|type: integer
The `collectd` `nic_rx` type of `statsd` plug-in.
| `collectd.statsd.host_nic_tx`
|type: integer
The `collectd` `nic_tx` type of `statsd` plug-in.
| `collectd.statsd.host_nic_rx_dropped`
|type: integer
The `collectd` `nic_rx_dropped` type of `statsd` plug-in.
| `collectd.statsd.host_nic_tx_dropped`
|type: integer
The `collectd` `nic_tx_dropped` type of `statsd` plug-in.
| `collectd.statsd.host_nic_rx_errors`
|type: integer
The `collectd` `nic_rx_errors` type of `statsd` plug-in.
| `collectd.statsd.host_nic_tx_errors`
|type: integer
The `collectd` `nic_tx_errors` type of `statsd` plug-in.
| `collectd.statsd.host_storage`
|type: integer
The `collectd` storage type of `statsd` plug-in.
| `collectd.statsd.host_swap`
|type: integer
The `collectd` swap type of `statsd` plug-in.
| `collectd.statsd.host_vdsm`
|type: integer
The `collectd` VDSM type of `statsd` plug-in.
| `collectd.statsd.host_vms`
|type: integer
The `collectd` VMS type of `statsd` plug-in.
| `collectd.statsd.vm_nic_tx_dropped`
|type: integer
The `collectd` `nic_tx_dropped` type of `statsd` plug-in.
| `collectd.statsd.vm_nic_rx_bytes`
|type: integer
The `collectd` `nic_rx_bytes` type of `statsd` plug-in.
| `collectd.statsd.vm_nic_tx_bytes`
|type: integer
The `collectd` `nic_tx_bytes` type of `statsd` plug-in.
| `collectd.statsd.vm_balloon_min`
|type: integer
The `collectd` `balloon_min` type of `statsd` plug-in.
| `collectd.statsd.vm_balloon_max`
|type: integer
The `collectd` `balloon_max` type of `statsd` plug-in.
| `collectd.statsd.vm_balloon_target`
|type: integer
The `collectd` `balloon_target` type of `statsd` plug-in.
| `collectd.statsd.vm_balloon_cur`
| type: integer
The `collectd` `balloon_cur` type of `statsd` plug-in.
| `collectd.statsd.vm_cpu_sys`
|type: integer
The `collectd` `cpu_sys` type of `statsd` plug-in.
| `collectd.statsd.vm_cpu_usage`
|type: integer
The `collectd` `cpu_usage` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_read_ops`
|type: integer
The `collectd` `disk_read_ops` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_write_ops`
|type: integer
The `collectd` `disk_write_ops` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_flush_latency`
|type: integer
The `collectd` `disk_flush_latency` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_apparent_size`
|type: integer
The `collectd` `disk_apparent_size` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_write_bytes`
|type: integer
The `collectd` `disk_write_bytes` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_write_rate`
|type: integer
The `collectd` `disk_write_rate` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_true_size`
|type: integer
The `collectd` `disk_true_size` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_read_rate`
|type: integer
The `collectd` `disk_read_rate` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_write_latency`
|type: integer
The `collectd` `disk_write_latency` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_read_latency`
|type: integer
The `collectd` `disk_read_latency` type of `statsd` plug-in.
| `collectd.statsd.vm_disk_read_bytes`
|type: integer
The `collectd` `disk_read_bytes` type of `statsd` plug-in.
| `collectd.statsd.vm_nic_rx_dropped`
|type: integer
The `collectd` `nic_rx_dropped` type of `statsd` plug-in.
| `collectd.statsd.vm_cpu_user`
|type: integer
The `collectd` `cpu_user` type of `statsd` plug-in.
| `collectd.statsd.vm_nic_rx_errors`
|type: integer
The `collectd` `nic_rx_errors` type of `statsd` plug-in.
| `collectd.statsd.vm_nic_tx_errors`
|type: integer
The `collectd` `nic_tx_errors` type of `statsd` plug-in.
| `collectd.statsd.vm_nic_speed`
|type: integer
The `collectd` `nic_speed` type of `statsd` plug-in.
|===
[discrete]
[id="exported-fields-collectd.postgresql_{context}"]
=== `collectd.postgresql` Fields
Corresponds to `collectd` `postgresql` plug-in.
[cols="3,7",options="header"]
|===
|Parameter
|Description
| `collectd.postgresql.pg_n_tup_g`
|type: integer
The `collectd` type `pg_n_tup_g` of plug-in `postgresql`.
| `collectd.postgresql.pg_n_tup_c`
|type: integer
The `collectd` type `pg_n_tup_c` of plug-in `postgresql`.
| `collectd.postgresql.pg_numbackends`
|type: integer
The `collectd` type `pg_numbackends` of plug-in `postgresql`.
| `collectd.postgresql.pg_xact`
|type: integer
The `collectd` type `pg_xact` of plug-in `postgresql`.
| `collectd.postgresql.pg_db_size`
|type: integer
The `collectd` type `pg_db_size` of plug-in `postgresql`.
| `collectd.postgresql.pg_blks`
|type: integer
The `collectd` type `pg_blks` of plug-in `postgresql`.
|===

View File

@@ -1,28 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-kibana-console.adoc
// * logging/cluster-logging-visualizer.adoc
[id="cluster-logging-kibana-visualize_{context}"]
= Launching the Kibana interface
The Kibana interface is a browser-based console
to query, discover, and visualize your Elasticsearch data through histograms, line graphs,
pie charts, heat maps, built-in geospatial support, and other visualizations.
.Procedure
To launch the Kibana interface:
. In the {product-title} console, click *Observe* -> *Logging*.
. Log in using the same credentials you use to log in to the {product-title} console.
+
The Kibana interface launches. You can now:
+
* Search and browse your data using the Discover page.
* Chart and map your data using the Visualize page.
* Create and view custom dashboards using the Dashboard page.
+
Use and configuration of the Kibana interface is beyond the scope of this documentation. For more information
on using the interface, see the link:https://www.elastic.co/guide/en/kibana/5.6/connect-to-elasticsearch.html[Kibana documentation].
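For example, on the Discover page you can filter records using a query string built from the exported fields. The following hypothetical query selects error-level records from one project, assuming your records carry the `kubernetes.namespace_name` and `level` fields:
----
kubernetes.namespace_name:"openshift-logging" AND level:"error"
----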

View File

@@ -1,19 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-uninstall.adoc
[id="cluster-logging-uninstall-ops_{context}"]
= Uninstall the infra cluster
You can uninstall the infra cluster from OpenShift Logging.
After uninstalling, Fluentd no longer splits logs.
.Procedure
To uninstall the infra cluster:
.
.
.