Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-06 15:46:57 +01:00
OBSDOCS-481 - Correct logging name and attributes.
committed by openshift-cherrypick-robot
parent c7dd585787
commit 511d38ee5c
@@ -140,14 +140,13 @@ ifdef::telco-core[]
 :rds-caps: Telco core
 endif::[]
 //logging
-:logging-title: logging subsystem for Red Hat OpenShift
-:logging-title-uc: Logging subsystem for Red Hat OpenShift
-:logging: logging subsystem
-:logging-uc: Logging subsystem
+:logging: logging
+:logging-uc: Logging
+:for: for Red Hat OpenShift
 :clo: Red Hat OpenShift Logging Operator
 :loki-op: Loki Operator
 :es-op: OpenShift Elasticsearch Operator
-:log-plug: logging subsystem Console plugin
+:log-plug: logging Console plugin
 //power monitoring
 :PM-title-c: Power monitoring for Red Hat OpenShift
 :PM-title: power monitoring for Red Hat OpenShift
@@ -265,4 +264,4 @@ endif::[]
 //ODF
 :odf-first: Red Hat OpenShift Data Foundation (ODF)
 :odf-full: Red Hat OpenShift Data Foundation
-:odf-short: ODF
+:odf-short: ODF
@@ -6,11 +6,11 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-You can view the Argo CD logs with the {logging-title}. The {logging} visualizes the logs on a Kibana dashboard. The OpenShift Logging Operator enables logging with Argo CD by default.
+You can view the Argo CD logs with {logging}. {logging-uc} visualizes the logs on a Kibana dashboard. The {clo} enables logging with Argo CD by default.

 include::modules/gitops-storing-and-retrieving-argo-cd-logs.adoc[leveloffset=+1]

 [role="_additional-resources"]
 [id="additional-resources_viewing-argo-cd-logs"]
 == Additional resources
-* xref:../../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[Installing the {logging-title} using the web console]
+* xref:../../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[Installing {logging} using the web console]
@@ -7,7 +7,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[]

 toc::[]

-You can deploy the {logging-title} by installing the {clo}. The {clo} creates and manages the components of the logging stack.
+You can deploy {logging} by installing the {clo}. The {clo} creates and manages the components of the logging stack.

 include::snippets/logging-compatibility-snip.adoc[]

@@ -24,7 +24,7 @@ ifdef::openshift-origin[]
 If you have the pull secret, add the `redhat-operators` catalog to the OperatorHub custom resource (CR) as shown in _Configuring {product-title} to use Red Hat Operators_.
 endif::[]

-//Installing the Red Hat OpenShift Logging Operator via webconsole
+//Installing the CLO via webconsole
 include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]
 include::modules/create-cluster-logging-cr-console.adoc[leveloffset=+1]

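The assemblies above install the {clo} through the web console. For readers who prefer the CLI, the equivalent flow uses an OLM `Subscription`; a minimal sketch, assuming the `stable` channel and the `redhat-operators` catalog mentioned in the hunk above:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable                # assumed channel; verify against the catalog
  name: cluster-logging
  source: redhat-operators       # the catalog referenced above
  sourceNamespace: openshift-marketplace
----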
@@ -9,9 +9,9 @@ toc::[]
 include::snippets/logging-supported-config-snip.adoc[]
 include::snippets/logging-compatibility-snip.adoc[]

-The {logging-title} is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.
+{logging-uc} {for} is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.

-The {logging-title} is not:
+{logging-uc} is not:

 * A high scale log collection system
 * Security Information and Event Monitoring (SIEM) compliant
@@ -65,7 +65,7 @@ link:https://docs.openshift.com/container-platform/latest/support/gathering-clus
 endif::[]
 to collect diagnostic information for project-level resources, cluster-level resources, and each of the {logging} components.

-For prompt support, supply diagnostic information for both {product-title} and the {logging-title}.
+For prompt support, supply diagnostic information for both {product-title} and {logging}.

 [NOTE]
 ====
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-You can remove the {logging} from your {product-title} cluster by removing installed Operators and related custom resources (CRs).
+You can remove {logging} from your {product-title} cluster by removing installed Operators and related custom resources (CRs).

 include::modules/uninstall-cluster-logging-operator.adoc[leveloffset=+1]
 include::modules/uninstall-logging-delete-pvcs.adoc[leveloffset=+1]
@@ -11,9 +11,9 @@ As a cluster administrator, you can deploy {logging} on an {product-title} clust

 include::snippets/logging-kibana-dep-snip.adoc[]

-{product-title} cluster administrators can deploy the {logging} by using Operators. For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing the {logging-title}].
+{product-title} cluster administrators can deploy {logging} by using Operators. For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing {logging}].

-The Operators are responsible for deploying, upgrading, and maintaining the {logging}. After the Operators are installed, you can create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support the {logging}. You can also create a `ClusterLogForwarder` CR to specify which logs are collected, how they are transformed, and where they are forwarded to.
+The Operators are responsible for deploying, upgrading, and maintaining {logging}. After the Operators are installed, you can create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support {logging}. You can also create a `ClusterLogForwarder` CR to specify which logs are collected, how they are transformed, and where they are forwarded to.

 [NOTE]
 ====
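For reference, a minimal `ClusterLogging` CR of the kind this hunk describes might look like the following sketch; the collector type and management state are illustrative assumptions, not part of this commit:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                 # the CR is conventionally named "instance"
  namespace: openshift-logging
spec:
  managementState: Managed
  collection:
    type: vector                 # illustrative; fluentd is the other documented collector
----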
@@ -9,7 +9,7 @@ toc::[]
 {logging-title-uc} is configurable using a `ClusterLogging` custom resource (CR) deployed
 in the `openshift-logging` project.

-The {logging} operator watches for changes to `ClusterLogging` CR,
+The {clo} watches for changes to `ClusterLogging` CR,
 creates any missing logging components, and adjusts the logging environment accordingly.

 The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete {logging} environment and includes all the components of the logging stack to collect, store and visualize logs.
@@ -52,7 +52,7 @@ spec:
       resources: null
     type: kibana
 ----
-You can configure the following for the {logging}:
+You can configure the following for {logging}:

 * You can overwrite the image for each {logging} component by modifying the appropriate
 environment variable in the `cluster-logging-operator` Deployment.
@@ -77,5 +77,5 @@ The Rsyslog log collector is currently a Technology Preview feature.

 [IMPORTANT]
 ====
-The logging routes are managed by the {logging-title} Operator and cannot be modified by the user.
+The logging routes are managed by the {clo} and cannot be modified by the user.
 ====
@@ -1,7 +1,7 @@
 :_mod-docs-content-type: ASSEMBLY
 :context: dedicated-cluster-logging
 [id="dedicated-cluster-logging"]
-= Configuring the {logging-title}
+= Configuring {logging}
 include::_attributes/common-attributes.adoc[]

 toc::[]
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

 toc::[]

-Logging alerts are installed as part of the {clo} installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to *Enable operator recommended cluster monitoring on this namespace* when installing the {clo}. For more information about installing logging Operators, see xref:../../logging/cluster-logging-deploying#cluster-logging-deploy-console_cluster-logging-deploying[Installing the {logging-title} using the web console].
+Logging alerts are installed as part of the {clo} installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to *Enable operator recommended cluster monitoring on this namespace* when installing the {clo}. For more information about installing logging Operators, see xref:../../logging/cluster-logging-deploying#cluster-logging-deploy-console_cluster-logging-deploying[Installing {logging} using the web console].

 Default logging alerts are sent to the {product-title} monitoring stack Alertmanager in the `openshift-monitoring` namespace, unless you have disabled the local Alertmanager instance.

@@ -6,7 +6,7 @@

 :_mod-docs-content-type: CONCEPT
 [id="cluster-logging-about_{context}"]
-= About deploying the {logging-title}
+= About deploying {logging}

 Administrators can deploy the {logging} by using the {product-title} web console or the {oc-first} to install the {logging} Operators. The Operators are responsible for deploying, upgrading, and maintaining the {logging}.

@@ -21,7 +21,7 @@ You can view the status for a number of {logging} components.
 $ oc project openshift-logging
 ----

-. View the status of the {logging-title} environment:
+. View the status of {logging} environment:
 +
 [source,terminal]
 ----
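The status command itself falls outside this hunk. In the published module it is along these lines; this is an assumption based on the surrounding text, not content shown in the diff:

[source,terminal]
----
$ oc get clusterlogging instance -o yaml
----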
@@ -25,7 +25,7 @@ tolerations:

 .Prerequisites

-* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.
+* {clo} and {es-op} must be installed.

 .Procedure

@@ -8,7 +8,7 @@

 include::snippets/logging-fluentd-dep-snip.adoc[]

-The {logging-title} includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:
+{logging-uc} includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors:

 * Chunk and chunk buffer sizes
 * Chunk flushing behavior
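The Fluentd tuning parameters this paragraph introduces are set under `spec.forwarder.fluentd` in the `ClusterLogging` CR; a hedged sketch with illustrative values:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 8m       # illustrative chunk size
        flushInterval: 5s        # illustrative flush interval
        flushMode: interval
----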
@@ -5,7 +5,7 @@
 [id="cluster-logging-configuring-image-about_{context}"]
 = Understanding {logging} component images

-There are several components in the {logging-title}, each one implemented with one or more images. Each image is specified by an environment variable
+There are several components in {logging}, each one implemented with one or more images. Each image is specified by an environment variable
 defined in the *cluster-logging-operator* deployment in the *openshift-logging* project and should not be changed.

 You can view the images by running the following command:
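The command referenced by the last context line sits outside the hunk. In the published docs it is along these lines; assumed, not shown in this diff:

[source,terminal]
----
$ oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep IMAGE
----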
@@ -5,7 +5,7 @@

 :_mod-docs-content-type: CONCEPT
 [id="cluster-logging-deploying-about_{context}"]
-= About deploying and configuring the {logging-title}
+= About deploying and configuring {logging}

 The {logging} is designed to be used with the default configuration, which is tuned for small to medium sized {product-title} clusters.

@@ -12,7 +12,7 @@ To send the audit logs to the default internal Elasticsearch log store, for exam

 [IMPORTANT]
 ====
-The internal {product-title} Elasticsearch log store does not provide secure storage for audit logs. Verify that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. The {logging-title} does not comply with those regulations.
+The internal {product-title} Elasticsearch log store does not provide secure storage for audit logs. Verify that the system to which you forward audit logs complies with your organizational and governmental regulations and is properly secured. {logging-uc} does not comply with those regulations.
 ====

 .Procedure
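The procedure that follows this admonition forwards audit logs with a `ClusterLogForwarder` pipeline; a minimal sketch, with an illustrative pipeline name:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: send-audit-logs        # illustrative name
    inputRefs:
    - audit                      # the audit log type discussed above
    outputRefs:
    - default                    # the default internal log store
----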
@@ -6,7 +6,7 @@
 [id="cluster-logging-elasticsearch-exposing_{context}"]
 = Exposing the log store service as a route

-By default, the log store that is deployed with the {logging-title} is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data.
+By default, the log store that is deployed with {logging} is not accessible from outside the logging cluster. You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data.

 Externally, you can access the log store by creating a reencrypt route, your {product-title} token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains:

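A hedged sketch of the cURL request described above; the route name and CA certificate path are assumptions, and the token comes from `oc whoami -t`:

[source,terminal]
----
$ token=$(oc whoami -t)
$ routeES=$(oc get route elasticsearch -n openshift-logging -o jsonpath='{.spec.host}')
$ curl -tlsv1.2 --cacert /tmp/ca-cert.pem -H "Authorization: Bearer ${token}" "https://${routeES}"
----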
@@ -19,8 +19,7 @@ Elasticsearch rolls over an index, moving the current index and creating a new i
 Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default.

 .Prerequisites
-//SME Feedback Req: There are a few instances of these for prereqs. Should OpenShift Logging here be the Red Hat OpenShift Logging Operator or the logging product name?
-* The {logging-title} and the OpenShift Elasticsearch Operator must be installed.
+* The {clo} and the {es-op} must be installed.

 .Procedure

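Retention policies like the one this prerequisite hunk concerns are set per log source in the `ClusterLogging` CR; a sketch with illustrative ages (logs default to seven-day deletion when unset, as the context line notes):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    retentionPolicy:
      application:
        maxAge: 1d               # illustrative retention ages
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
----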
@@ -6,7 +6,7 @@
 [id="cluster-logging-eventrouter-about_{context}"]
 = About event routing

-The Event Router is a pod that watches {product-title} events so they can be collected by the {logging-title}.
+The Event Router is a pod that watches {product-title} events so they can be collected by {logging}.
 The Event Router collects events from all projects and writes them to `STDOUT`. Fluentd collects those events and forwards them into the {product-title} Elasticsearch instance. Elasticsearch indexes the events to the `infra` index.

 You must manually deploy the Event Router.
@@ -19,7 +19,7 @@ The following `Template` object creates the service account, cluster role, and c

 * You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the *cluster-admin* role.

-* The {logging-title} must be installed.
+* The {clo} must be installed.

 .Procedure

@@ -11,7 +11,7 @@ You can use the Kibana dashboard to store and retrieve Argo CD logs.
 .Prerequisites

 * The {gitops-title} Operator is installed in your cluster.
-* The {logging-title} is installed with default configuration in your cluster.
+* {logging-uc} is installed with default configuration in your cluster.

 .Procedure

@@ -26,7 +26,7 @@ Visualization:: You can use a UI component to view a visual representation of yo
 include::snippets/logging-kibana-dep-snip.adoc[]
 --

-The {logging-title} collects container logs and node logs. These are categorized into types:
+{logging-uc} collects container logs and node logs. These are categorized into types:

 Application logs:: Container logs generated by user applications running in the cluster, except infrastructure container applications.

@@ -6,7 +6,7 @@
 [id="logging-plugin-es-loki_{context}"]
 = Configuring the {log-plug} when you have the Elasticsearch log store and LokiStack installed

-In the {logging} version 5.8 and later, if the Elasticsearch log store is your default log store but you have also installed the LokiStack, you can enable the {log-plug} by using the following procedure.
+In {logging} version 5.8 and later, if the Elasticsearch log store is your default log store but you have also installed the LokiStack, you can enable the {log-plug} by using the following procedure.

 .Prerequisites

@@ -6,7 +6,7 @@
 //
 :_mod-docs-content-type: SNIPPET

-The {logging-title} collects container logs and node logs. These are categorized into types:
+{logging-uc} collects container logs and node logs. These are categorized into types:

 * `application` - Container logs generated by non-infrastructure containers.

@@ -6,4 +6,4 @@
 //
 :_mod-docs-content-type: SNIPPET

-In logging subsystem documentation, LokiStack refers to the logging subsystem supported combination of Loki, and web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store.
+In logging documentation, LokiStack refers to the supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack’s proxy uses OpenShift Container Platform authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store.
@@ -7,11 +7,11 @@

 :_mod-docs-content-type: SNIPPET

-Only the configuration options described in this documentation are supported for the {logging}.
+Only the configuration options described in this documentation are supported for {logging}.

 Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across {product-title} releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences.

 [NOTE]
 ====
-If you must perform configurations not described in the {product-title} documentation, you must set your Red Hat OpenShift Logging Operator to `Unmanaged`. An unmanaged {logging-title} is not supported and does not receive updates until you return its status to `Managed`.
+If you must perform configurations not described in the {product-title} documentation, you must set your Red Hat OpenShift Logging Operator to `Unmanaged`. An unmanaged {logging} instance is not supported and does not receive updates until you return its status to `Managed`.
 ====