
RHDEVDOCS-2521 NEW Document our JSON log entry format.

This commit is contained in:
Rolfe Dlugy-Hegwer
2021-07-08 13:05:08 -04:00
committed by openshift-cherrypick-robot
parent d2adc4a93a
commit 82c26a2fc4
11 changed files with 457 additions and 78 deletions


@@ -11,7 +11,7 @@ from Elasticsearch and Kibana. The default fields are Top Level and `collectd*`
[discrete]
=== Top Level Fields
The top level fields are common to every application, and may be present in
The top level fields are common to every application and can be present in
every record. For the Elasticsearch template, top level fields populate the actual
mappings of `default` in the template's mapping section.
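As a rough sketch of what that means in practice, an index template that declares such fields in the `default` mapping might look like the following abbreviated, hypothetical snippet. The field names come from this data model; the types and the template structure itself are illustrative only, not the shipped template:

[source,json]
----
{
  "mappings": {
    "default": {
      "properties": {
        "@timestamp": { "type": "date" },
        "message": { "type": "text" },
        "hostname": { "type": "keyword" },
        "level": { "type": "keyword" }
      }
    }
  }
}
----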


@@ -0,0 +1,83 @@
[id="cluster-logging-exported-fields-kubernetes_{context}"]
= Kubernetes
The following fields can be present in the namespace for Kubernetes-specific metadata.
== kubernetes.pod_name
The name of the pod.
[horizontal]
Data type:: keyword
== kubernetes.pod_id
Kubernetes ID of the pod.
[horizontal]
Data type:: keyword
== kubernetes.namespace_name
The name of the namespace in Kubernetes.
[horizontal]
Data type:: keyword
== kubernetes.namespace_id
ID of the namespace in Kubernetes.
[horizontal]
Data type:: keyword
== kubernetes.host
The Kubernetes node name.
[horizontal]
Data type:: keyword
== kubernetes.master_url
The Kubernetes master URL.
[horizontal]
Data type:: keyword
== kubernetes.container_name
The name of the container in Kubernetes.
[horizontal]
Data type:: text
== kubernetes.annotations
Annotations associated with the Kubernetes object.
[horizontal]
Data type:: group
== kubernetes.labels
Labels attached to the Kubernetes object. Each label name is a subfield of the `labels` field. Each label name is de-dotted: dots in the name are replaced with underscores.
[horizontal]
Data type:: group
== kubernetes.event
The Kubernetes event obtained from the Kubernetes master API. The event is already a JSON object and is nested as a whole under the `kubernetes` field. This description loosely follows `type Event` in https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#event-v1-core
[horizontal]
Data type:: group
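As an illustration of how these fields fit together, the `kubernetes` object of a single record might look like the following sketch. All values are invented for the example, the `event` group is omitted, and the label originally named `app.version` appears de-dotted as `app_version`:

[source,json]
----
{
  "kubernetes": {
    "pod_name": "my-app-7b9f6c5d4-x2x9k",
    "pod_id": "2a6e98c5-3c2a-4f7e-9d2a-1b2c3d4e5f6a",
    "namespace_name": "my-project",
    "namespace_id": "0a1b2c3d-4e5f-6789-abcd-ef0123456789",
    "host": "worker-0.example.com",
    "master_url": "https://kubernetes.default.svc",
    "container_name": "my-app",
    "annotations": {
      "example_annotation": "true"
    },
    "labels": {
      "app": "my-app",
      "app_version": "1.2.3"
    }
  }
}
----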


@@ -10,7 +10,7 @@ from Elasticsearch and Kibana.
Contains common fields specific to the `systemd` journal.
link:https://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html[Applications]
may write their own fields to the journal. These will be available under the
can write their own fields to the journal. These will be available under the
`systemd.u` namespace. `RESULT` and `UNIT` are two such fields.
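For example, a record for a journal entry that carries such application-written fields might include a fragment like the following sketch. The `RESULT` and `UNIT` names and the `systemd.u` nesting come from the text above; the values are invented for illustration:

[source,json]
----
{
  "systemd": {
    "u": {
      "RESULT": "done",
      "UNIT": "chronyd.service"
    }
  }
}
----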
[discrete]


@@ -1,42 +1,15 @@
:context: cluster-logging-exported-fields
[id="cluster-logging-exported-fields"]
= Exported fields
= Log Record Fields
include::modules/common-attributes.adoc[]
toc::[]
These are the fields exported by the logging system and available for searching from Elasticsearch and Kibana. Use the full, dotted field name when searching. For example, for an Elasticsearch */_search URL*, to look for a Kubernetes pod name, use `/_search/q=kubernetes.pod_name:name-of-my-pod`.
The following fields can be present in log records exported by the OpenShift Logging system. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
The following sections describe fields that may not be present in your logging store. Not all of these fields are present in every record. The fields are grouped in the following categories:
To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch */_search URL*, to look for a Kubernetes pod name, use `/_search/q=kubernetes.pod_name:name-of-my-pod`.
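If you have direct access to the Elasticsearch API, the same search can also be expressed as a request body using a `query_string` query, for example (a sketch only; the endpoint and index naming depend on your deployment):

[source,json]
----
{
  "query": {
    "query_string": {
      "query": "kubernetes.pod_name:name-of-my-pod"
    }
  }
}
----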
* `exported-fields-Default`
* `exported-fields-systemd`
* `exported-fields-kubernetes`
* `exported-fields-pipeline_metadata`
* `exported-fields-ovirt`
* `exported-fields-aushape`
* `exported-fields-tlog`
// The logging system can forward JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.
// * `exported-fields-rsyslog`
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.
include::modules/cluster-logging-exported-fields-default.adoc[leveloffset=+1]
//modules/cluster-logging-exported-fields-rsyslog.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-systemd.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-kubernetes.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-container.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-ovirt.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-aushape.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-tlog.adoc[leveloffset=+1]
include::modules/cluster-logging-exported-fields-top-level-fields.adoc[leveloffset=0]
include::modules/cluster-logging-exported-fields-kubernetes.adoc[leveloffset=0]


@@ -1,60 +1,294 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc
[id="cluster-logging-exported-fields-kubernetes_{context}"]
= Kubernetes exported fields
== Kubernetes
These are the Kubernetes fields exported by OpenShift Logging available for searching
from Elasticsearch and Kibana.
// Normally, the preceding title would be an H1 prefixed with an `=`. However, because the following content is auto-generated at https://github.com/ViaQ/documentation/blob/main/src/data_model/public/kubernetes.part.adoc and pasted here, it is more efficient to use it as-is with no modifications. Therefore, to "realign" the content, I am going to prefix the title with `==` and use `include::modules/cluster-logging-exported-fields-kubernetes.adoc[leveloffset=0]` in the assembly file.
The namespace for Kubernetes-specific metadata. The `kubernetes.pod_name` is the
name of the pod.
// DO NOT MODIFY THE FOLLOWING CONTENT. Instead, update https://github.com/ViaQ/documentation/blob/main/src/data_model/model/kubernetes.yaml and run `make` as instructed here: https://github.com/ViaQ/documentation
[discrete]
[id="exported-fields-kubernetes.labels_{context}"]
=== `kubernetes.labels` Fields
Labels attached to the OpenShift object are `kubernetes.labels`. Each label name
is a subfield of the `labels` field. Each label name is de-dotted, meaning dots in the
name are replaced with underscores.
The namespace for Kubernetes-specific metadata.
[cols="3,7",options="header"]
|===
|Parameter
|Description
=== kubernetes.pod_name
| `kubernetes.pod_id`
|Kubernetes ID of the pod.
| `kubernetes.namespace_name`
|The name of the namespace in Kubernetes.
The name of the pod
| `kubernetes.namespace_id`
|ID of the namespace in Kubernetes.
[horizontal]
Data type:: keyword
| `kubernetes.host`
|Kubernetes node name.
| `kubernetes.container_name`
|The name of the container in Kubernetes.
=== kubernetes.pod_id
| `kubernetes.labels.deployment`
|The deployment associated with the Kubernetes object.
| `kubernetes.labels.deploymentconfig`
|The deploymentconfig associated with the Kubernetes object.
The Kubernetes ID of the pod
| `kubernetes.labels.component`
|The component associated with the Kubernetes object.
[horizontal]
Data type:: keyword
| `kubernetes.labels.provider`
|The provider associated with the Kubernetes object.
|===
[discrete]
[id="exported-fields-kubernetes.annotations_{context}"]
=== `kubernetes.annotations` Fields
=== kubernetes.namespace_name
Annotations associated with the OpenShift object are `kubernetes.annotations`
fields.
The name of the namespace in Kubernetes
[horizontal]
Data type:: keyword
=== kubernetes.namespace_id
The ID of the namespace in Kubernetes
[horizontal]
Data type:: keyword
=== kubernetes.host
The Kubernetes node name
[horizontal]
Data type:: keyword
=== kubernetes.container_name
The name of the container in Kubernetes
[horizontal]
Data type:: keyword
=== kubernetes.annotations
Annotations associated with the Kubernetes object
[horizontal]
Data type:: group
=== kubernetes.labels
Labels present on the original Kubernetes Pod
[horizontal]
Data type:: group
=== kubernetes.event
The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows `type Event` in link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#event-v1-core[Event v1 core].
[horizontal]
Data type:: group
==== kubernetes.event.verb
The type of event, `ADDED`, `MODIFIED`, or `DELETED`
[horizontal]
Data type:: keyword
Example value:: `ADDED`
==== kubernetes.event.metadata
Information related to the location and time of the event creation
[horizontal]
Data type:: group
===== kubernetes.event.metadata.name
The name of the object that triggered the event creation
[horizontal]
Data type:: keyword
Example value:: `java-mainclass-1.14d888a4cfc24890`
===== kubernetes.event.metadata.namespace
The name of the namespace where the event originally occurred. Note that it differs from `kubernetes.namespace_name`, which is the namespace where the `eventrouter` application is deployed.
[horizontal]
Data type:: keyword
Example value:: `default`
===== kubernetes.event.metadata.selfLink
A link to the event
[horizontal]
Data type:: keyword
Example value:: `/api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890`
===== kubernetes.event.metadata.uid
The unique ID of the event
[horizontal]
Data type:: keyword
Example value:: `d828ac69-7b58-11e7-9cf5-5254002f560c`
===== kubernetes.event.metadata.resourceVersion
A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed.
[horizontal]
Data type:: integer
Example value:: `311987`
==== kubernetes.event.involvedObject
The object that the event is about.
[horizontal]
Data type:: group
===== kubernetes.event.involvedObject.kind
The type of object
[horizontal]
Data type:: keyword
Example value:: `ReplicationController`
===== kubernetes.event.involvedObject.namespace
The namespace name of the involved object. Note that it may differ from `kubernetes.namespace_name`, which is the namespace where the `eventrouter` application is deployed.
[horizontal]
Data type:: keyword
Example value:: `default`
===== kubernetes.event.involvedObject.name
The name of the object that triggered the event
[horizontal]
Data type:: keyword
Example value:: `java-mainclass-1`
===== kubernetes.event.involvedObject.uid
The unique ID of the object
[horizontal]
Data type:: keyword
Example value:: `e6bff941-76a8-11e7-8193-5254002f560c`
===== kubernetes.event.involvedObject.apiVersion
The version of the Kubernetes master API
[horizontal]
Data type:: keyword
Example value:: `v1`
===== kubernetes.event.involvedObject.resourceVersion
A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed.
[horizontal]
Data type:: keyword
Example value:: `308882`
==== kubernetes.event.reason
A short machine-understandable string that gives the reason for generating this event
[horizontal]
Data type:: keyword
Example value:: `SuccessfulCreate`
==== kubernetes.event.source_component
The component that reported this event
[horizontal]
Data type:: keyword
Example value:: `replication-controller`
==== kubernetes.event.firstTimestamp
The time at which the event was first recorded
[horizontal]
Data type:: date
Example value:: `2017-08-07 10:11:57.000000000 Z`
==== kubernetes.event.count
The number of times this event has occurred
[horizontal]
Data type:: integer
Example value:: `1`
==== kubernetes.event.type
The type of event, `Normal` or `Warning`. New types could be added in the future.
[horizontal]
Data type:: keyword
Example value:: `Normal`
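Assembling the example values listed above, a complete `kubernetes.event` object in a record might look like the following sketch. The structure and values mirror the field descriptions and example values in this section; the surrounding record would also carry the other `kubernetes` fields:

[source,json]
----
{
  "kubernetes": {
    "event": {
      "verb": "ADDED",
      "metadata": {
        "name": "java-mainclass-1.14d888a4cfc24890",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890",
        "uid": "d828ac69-7b58-11e7-9cf5-5254002f560c",
        "resourceVersion": 311987
      },
      "involvedObject": {
        "kind": "ReplicationController",
        "namespace": "default",
        "name": "java-mainclass-1",
        "uid": "e6bff941-76a8-11e7-8193-5254002f560c",
        "apiVersion": "v1",
        "resourceVersion": "308882"
      },
      "reason": "SuccessfulCreate",
      "source_component": "replication-controller",
      "firstTimestamp": "2017-08-07 10:11:57.000000000 Z",
      "count": 1,
      "type": "Normal"
    }
  }
}
----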


@@ -0,0 +1,89 @@
[id="cluster-logging-exported-fields-top-level-fields_{context}"]
== Top-level fields
// Normally, the preceding title would be an H1 prefixed with an `=`. However, because the following content is auto-generated at https://github.com/ViaQ/documentation/blob/main/src/data_model/public/top-level.part.adoc and pasted here, it is more efficient to use it as-is with no modifications. Therefore, to "realign" the content, I am going to prefix the title with `==` and use `include::modules/cluster-logging-exported-fields-top-level-fields.adoc[leveloffset=0]` in the assembly file.
// DO NOT MODIFY THE FOLLOWING CONTENT. Instead, update https://github.com/ViaQ/documentation/blob/main/src/data_model/model/top-level.yaml and run `make` as instructed here: https://github.com/ViaQ/documentation
//The top-level fields can be present in every record. The descriptions for fields that are optional begin with "Optional."
The top-level fields can be present in every record.
=== message
The original log entry text, UTF-8 encoded. This field can be absent or empty if a non-empty `structured` field is present. See the description of `structured` for more information.
=== structured
The original log entry as a structured object. This field can be present if the forwarder was configured to parse structured JSON logs. If the original log entry was a valid structured log, this field contains an equivalent JSON structure. Otherwise, this field is empty or absent, and the `message` field contains the original log message. The `structured` field can include any subfields that are part of the log message; no restrictions are defined here.
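For example, if a container writes the JSON line `{"level":"info","msg":"shutting down"}` and the forwarder is configured to parse structured JSON logs, the record could carry it like the following sketch, with invented values and the `message` field empty or absent as described above:

[source,json]
----
{
  "structured": {
    "level": "info",
    "msg": "shutting down"
  }
}
----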
=== @timestamp
A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The `@` prefix denotes a field that is reserved for a particular use. By default, most tools look for `@timestamp` with Elasticsearch.
=== hostname
The name of the host where this log message originated. In a Kubernetes cluster, this is the same as `kubernetes.host`.
=== ipaddr4
The IPv4 address of the source server. Can be an array.
=== ipaddr6
The IPv6 address of the source server, if available. Can be an array.
=== level
The logging level from various sources, including `rsyslog` (the `severitytext` property), Python's `logging` module, and others.
The following values come from link:http://sourceware.org/git/?p=glibc.git;a=blob;f=misc/sys/syslog.h;h=ee01478c4b19a954426a96448577c5a76e6647c0;hb=HEAD#l74[`syslog.h`], and are preceded by their link:http://sourceware.org/git/?p=glibc.git;a=blob;f=misc/sys/syslog.h;h=ee01478c4b19a954426a96448577c5a76e6647c0;hb=HEAD#l51[numeric equivalents]:
* `0` = `emerg`, system is unusable.
* `1` = `alert`, action must be taken immediately.
* `2` = `crit`, critical conditions.
* `3` = `err`, error conditions.
* `4` = `warn`, warning conditions.
* `5` = `notice`, normal but significant condition.
* `6` = `info`, informational.
* `7` = `debug`, debug-level messages.
The two following values are not part of `syslog.h` but are widely used:
* `8` = `trace`, trace-level messages, which are more verbose than `debug` messages.
* `9` = `unknown`, when the logging system gets a value it doesn't recognize.
Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from link:https://docs.python.org/2.7/library/logging.html#logging-levels[Python logging], you can match `CRITICAL` with `crit`, `ERROR` with `err`, and so on.
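One possible mapping for the Python logging levels, written as a simple lookup object, is shown below. This is an illustration only; the text above requires only that you pick the nearest match:

[source,json]
----
{
  "CRITICAL": "crit",
  "ERROR": "err",
  "WARNING": "warn",
  "INFO": "info",
  "DEBUG": "debug"
}
----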
=== pid
The process ID of the logging entity, if available.
=== service
The name of the service associated with the logging entity, if available. For example, syslog's `APP-NAME` and rsyslog's `programname` properties are mapped to the `service` field.
=== tags
Optional. An operator-defined list of tags placed on each log by the collector or normalizer. The payload can be a string with whitespace-delimited string tokens or a JSON list of string tokens.
=== file
The path to the log file from which the collector read this log entry. Normally, this is a path in the `/var/log` file system of a cluster node.
=== offset
The offset value. Can represent the byte offset of the start of the log line within the file (zero- or one-based), or the log line number (zero- or one-based), as long as the values are strictly monotonically increasing in the context of a single log file. The values are allowed to wrap, representing a new version of the log file (rotation).
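Putting several of the preceding top-level fields together, a single exported record might look roughly like the following sketch. All values are invented for illustration:

[source,json]
----
{
  "@timestamp": "2021-07-08T17:05:08.123456Z",
  "message": "GET /healthz HTTP/1.1 200",
  "hostname": "worker-0.example.com",
  "ipaddr4": "10.0.128.15",
  "level": "info",
  "pid": "1",
  "service": "my-app",
  "tags": ["production", "east"],
  "file": "/var/log/containers/my-app-7b9f6c5d4-x2x9k_my-project_my-app-0123abc456def.log",
  "offset": 2048
}
----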