mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

remove duplicated content, collate install docs

This commit is contained in:
Ashleigh Brennan
2023-10-27 13:42:23 -05:00
committed by openshift-cherrypick-robot
parent 26193132b9
commit a052a213be
29 changed files with 215 additions and 361 deletions


@@ -2543,8 +2543,6 @@ Topics:
Dir: config
Distros: openshift-enterprise,openshift-origin
Topics:
- Name: Configuring the log store
File: cluster-logging-log-store
- Name: Configuring CPU and memory limits for Logging components
File: cluster-logging-memory
- Name: Using tolerations to control Logging pod placement
@@ -2553,8 +2551,6 @@ Topics:
File: cluster-logging-moving-nodes
- Name: Configuring systemd-journald for Logging
File: cluster-logging-systemd
- Name: Logging using LokiStack
File: cluster-logging-loki
- Name: Log collection and forwarding
Dir: log_collection_forwarding
Topics:
@@ -2568,6 +2564,17 @@ Topics:
File: cluster-logging-collector
- Name: Collecting and storing Kubernetes events
File: cluster-logging-eventrouter
- Name: Log storage
Dir: log_storage
Topics:
- Name: About log storage
File: about-log-storage
- Name: Installing log storage
File: installing-log-storage
- Name: Configuring the LokiStack log store
File: cluster-logging-loki
- Name: Configuring the Elasticsearch log store
File: logging-config-es-store
- Name: Logging alerts
Dir: logging_alerts
Topics:


@@ -1055,8 +1055,6 @@ Topics:
- Name: Configuring your Logging deployment
Dir: config
Topics:
- Name: Configuring the log store
File: cluster-logging-log-store
- Name: Configuring CPU and memory limits for Logging components
File: cluster-logging-memory
- Name: Using tolerations to control Logging pod placement
@@ -1065,8 +1063,6 @@ Topics:
File: cluster-logging-moving-nodes
#- Name: Configuring systemd-journald and Fluentd
# File: cluster-logging-systemd
- Name: Logging using LokiStack
File: cluster-logging-loki
- Name: Log collection and forwarding
Dir: log_collection_forwarding
Topics:
@@ -1080,6 +1076,17 @@ Topics:
File: cluster-logging-collector
- Name: Collecting and storing Kubernetes events
File: cluster-logging-eventrouter
- Name: Log storage
Dir: log_storage
Topics:
- Name: About log storage
File: about-log-storage
- Name: Installing log storage
File: installing-log-storage
- Name: Configuring the LokiStack log store
File: cluster-logging-loki
- Name: Configuring the Elasticsearch log store
File: logging-config-es-store
- Name: Logging alerts
Dir: logging_alerts
Topics:


@@ -1247,8 +1247,6 @@ Topics:
- Name: Configuring your Logging deployment
Dir: config
Topics:
- Name: Configuring the log store
File: cluster-logging-log-store
- Name: Configuring CPU and memory limits for Logging components
File: cluster-logging-memory
- Name: Using tolerations to control Logging pod placement
@@ -1257,8 +1255,6 @@ Topics:
File: cluster-logging-moving-nodes
#- Name: Configuring systemd-journald and Fluentd
# File: cluster-logging-systemd
- Name: Logging using LokiStack
File: cluster-logging-loki
- Name: Log collection and forwarding
Dir: log_collection_forwarding
Topics:
@@ -1272,6 +1268,17 @@ Topics:
File: cluster-logging-collector
- Name: Collecting and storing Kubernetes events
File: cluster-logging-eventrouter
- Name: Log storage
Dir: log_storage
Topics:
- Name: About log storage
File: about-log-storage
- Name: Installing log storage
File: installing-log-storage
- Name: Configuring the LokiStack log store
File: cluster-logging-loki
- Name: Configuring the Elasticsearch log store
File: logging-config-es-store
- Name: Logging alerts
Dir: logging_alerts
Topics:


@@ -24,25 +24,23 @@ endif::[]
include::snippets/logging-fluentd-dep-snip.adoc[]
//Installing the Red Hat OpenShift Logging Operator via webconsole
include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../logging/log_collection_forwarding/cluster-logging-collector.adoc#cluster-logging-collector[Configuring the logging collector]
include::modules/logging-clo-gui-install.adoc[leveloffset=+1]
include::modules/logging-clo-cli-install.adoc[leveloffset=+1]
include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
[id="cluster-logging-deploying-es-operator"]
== Installing the Elasticsearch Operator
include::snippets/logging-elastic-dep-snip.adoc[]
include::modules/logging-es-storage-considerations.adoc[leveloffset=+2]
include::modules/logging-install-es-operator.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-operators-from-operatorhub_olm-adding-operators-to-a-cluster[Installing Operators from the OperatorHub]
* xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-log-store[Removing unused components if you do not use the default Elasticsearch log store]
* xref:../logging/log_storage/logging-config-es-store.adoc#cluster-logging-removing-unused-components-if-no-elasticsearch_logging-config-es-store[Removing unused components if you do not use the default Elasticsearch log store]
[id="cluster-logging-deploying-postinstallation"]
== Postinstallation tasks


@@ -17,7 +17,7 @@ The Operators are responsible for deploying, upgrading, and maintaining the {log
[NOTE]
====
Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-log-store[Forward audit logs to the log store].
Because the internal {product-title} Elasticsearch log store does not provide secure storage for audit logs, audit logs are not stored in the internal Elasticsearch instance by default. If you want to send the audit logs to the default internal Elasticsearch log store, for example to view the audit logs in Kibana, you must use the Log Forwarding API as described in xref:../logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Forward audit logs to the log store].
====
include::modules/logging-architecture-overview.adoc[leveloffset=+1]
@@ -46,10 +46,6 @@ include::modules/cluster-logging-export-fields.adoc[leveloffset=+2]
For information, see xref:../logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields[About exporting fields].
include::modules/cluster-logging-about-logstore.adoc[leveloffset=+2]
For information, see xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-log-store[Configuring the log store].
include::modules/cluster-logging-eventrouter-about.adoc[leveloffset=+2]
For information, see xref:../logging/log_collection_forwarding/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[Collecting and storing Kubernetes events].


@@ -59,7 +59,7 @@ By default, the {logging} sends container and infrastructure logs to the default
[NOTE]
====
To send audit logs to the internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-log-store[Forward audit logs to the log store].
To send audit logs to the internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../../logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Forwarding audit logs to the log store].
====
include::modules/cluster-logging-collector-log-forwarding-about.adoc[leveloffset=+1]


@@ -0,0 +1 @@
../../_attributes/


@@ -0,0 +1,30 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="about-log-storage"]
= About log storage
:context: about-log-storage
toc::[]
You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a xref:../../logging/log_collection_forwarding/log-forwarding.adoc#logging-create-clf_log-forwarding[`ClusterLogForwarder` custom resource (CR)] to forward logs to an external store.
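As an illustrative sketch, a minimal `ClusterLogForwarder` CR that forwards application logs to an external Elasticsearch endpoint might look like the following. The output name and URL are placeholder values, not defaults:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es # placeholder output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200 # placeholder endpoint
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application
    outputRefs:
    - external-es
----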
[id="log-storage-overview-types"]
== Log storage types
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as an alternative to Elasticsearch as a log store for the {logging}.
Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly.
include::modules/cluster-logging-about-es-logstore.adoc[leveloffset=+2]
[id="log-storage-overview-querying"]
== Querying log stores
You can query Loki by using the link:https://grafana.com/docs/loki/latest/logql/[LogQL log query language].
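For example, a LogQL query that selects application logs containing the string `error` might look like the following. The `log_type` label is shown for illustration; the labels available to you depend on your configuration:

[source,logql]
----
{log_type="application"} |= "error"
----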
[role="_additional-resources"]
[id="additional-resources_log-storage-overview"]
== Additional resources
* link:https://grafana.com/docs/loki/latest/get-started/components/[Loki components documentation]
* link:https://loki-operator.dev/docs/object_storage.md/[Loki Object Storage documentation]


@@ -1,33 +1,15 @@
:_mod-docs-content-type: ASSEMBLY
:context: cluster-logging-loki
[id="cluster-logging-loki"]
= Logging using LokiStack
= Configuring the LokiStack log store
include::_attributes/common-attributes.adoc[]
toc::[]
In {logging} documentation, _LokiStack_ refers to the {logging} supported combination of Loki and web proxy with {product-title} authentication integration. LokiStack's proxy uses {product-title} authentication to enforce multi-tenancy. _Loki_ refers to the log store as either the individual component or an external store.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the {logging}. Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. You can query Loki by using the link:https://grafana.com/docs/loki/latest/logql/[LogQL log query language].
include::modules/loki-deployment-sizing.adoc[leveloffset=+1]
//include::modules/cluster-logging-loki-deploy.adoc[leveloffset=+1]
//include::modules/logging-deploy-loki-console.adoc[leveloffset=+1]
include::modules/logging-creating-new-group-cluster-admin-user-role.adoc[leveloffset=+1]
include::modules/logging-loki-gui-install.adoc[leveloffset=+1]
include::modules/logging-clo-gui-install.adoc[leveloffset=+1]
include::modules/logging-loki-cli-install.adoc[leveloffset=+1]
include::modules/logging-clo-cli-install.adoc[leveloffset=+1]
include::modules/configuring-log-storage-cr.adoc[leveloffset=+1]
include::modules/logging-loki-storage.adoc[leveloffset=+1]
include::modules/logging-loki-storage-aws.adoc[leveloffset=+2]
@@ -54,8 +36,7 @@ include::modules/logging-loki-reliability-hardening.adoc[leveloffset=+1]
.Additional resources
* link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podantiaffinity-v1-core[`PodAntiAffinity` v1 core Kubernetes documentation]
* link:https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity[Assigning Pods to Nodes Kubernetes documentation]
* xref:../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity[Placing pods relative to other pods using affinity and anti-affinity rules]
* xref:../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc#nodes-scheduler-pod-affinity[Placing pods relative to other pods using affinity and anti-affinity rules]
include::modules/logging-loki-zone-aware-rep.adoc[leveloffset=+1]
@@ -64,11 +45,10 @@ include::modules/logging-loki-zone-fail-recovery.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#spread-constraint-definition[Topology spread constraints Kubernetes documentation]
* link:https://kubernetes.io/docs/setup/best-practices/multiple-zones/#storage-access-for-zones[Kubernetes storage documentation].
ifdef::openshift-enterprise[]
* xref:../nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc#nodes-scheduler-pod-topology-spread-constraints-configuring[Controlling pod placement by using pod topology spread constraints]
* xref:../../nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc#nodes-scheduler-pod-topology-spread-constraints-configuring[Controlling pod placement by using pod topology spread constraints]
endif::[]
include::modules/logging-loki-log-access.adoc[leveloffset=+1]
@@ -77,7 +57,7 @@ include::modules/logging-loki-log-access.adoc[leveloffset=+1]
.Additional resources
ifdef::openshift-enterprise[]
xref:../authentication/using-rbac.adoc[Using RBAC to define and apply permissions]
* xref:../../authentication/using-rbac.adoc[Using RBAC to define and apply permissions]
endif::[]
include::modules/logging-loki-retention.adoc[leveloffset=+1]
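As a hedged example of the retention configuration referenced above, stream retention for a `LokiStack` CR is set under `spec.limits.global.retention`; the values shown here are illustrative only:

[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      retention: # example global retention period
        days: 7
# ...
----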

logging/log_storage/images Symbolic link

@@ -0,0 +1 @@
../../images/


@@ -0,0 +1,34 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="installing-log-storage"]
= Installing log storage
:context: installing-log-storage
toc::[]
You can use the {oc-first} or the {product-title} web console to install a log store on your {product-title} cluster.
include::snippets/logging-elastic-dep-snip.adoc[]
[id="installing-log-storage-loki"]
== Installing a Loki log store
You can use the {loki-op} to install an internal Loki log store on your {product-title} cluster.
include::modules/loki-deployment-sizing.adoc[leveloffset=+2]
include::modules/logging-loki-gui-install.adoc[leveloffset=+2]
include::modules/logging-loki-cli-install.adoc[leveloffset=+2]
[id="installing-log-storage-es"]
== Installing an Elasticsearch log store
You can use the {es-op} to install an internal Elasticsearch log store on your {product-title} cluster.
include::snippets/logging-elastic-dep-snip.adoc[]
include::modules/logging-es-storage-considerations.adoc[leveloffset=+2]
include::modules/logging-install-es-operator.adoc[leveloffset=+2]
include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+2]
// configuring log store in the clusterlogging CR
include::modules/configuring-log-storage-cr.adoc[leveloffset=+1]


@@ -1,26 +1,27 @@
:_mod-docs-content-type: ASSEMBLY
[id="cluster-logging-log-store"]
= Configuring the log store
[id="logging-config-es-store"]
= Configuring the Elasticsearch log store
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cluster-logging-log-store
:context: logging-config-es-store
toc::[]
{logging-title-uc} uses Elasticsearch 6 (ES) to store and organize the log data.
You can use Elasticsearch 6 to store and organize log data.
You can make modifications to your log store, including:
* storage for your Elasticsearch cluster
* shard replication across data nodes in the cluster, from full replication to no replication
* external access to Elasticsearch data
* Storage for your Elasticsearch cluster
* Shard replication across data nodes in the cluster, from full replication to no replication
* External access to Elasticsearch data
include::modules/configuring-log-storage-cr.adoc[leveloffset=+1]
include::modules/cluster-logging-elasticsearch-audit.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For more information on the Log Forwarding API, see xref:../../logging/log_collection_forwarding/log-forwarding.adoc#log-forwarding[Forwarding logs using the Log Forwarding API].
* xref:../../logging/log_collection_forwarding/log-forwarding.adoc#log-forwarding[About log collection and forwarding]
include::modules/cluster-logging-elasticsearch-retention.adoc[leveloffset=+1]
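As an illustrative sketch of the retention configuration referenced above, Elasticsearch index retention is set with the `retentionPolicy` field of the `ClusterLogging` CR; the `maxAge` values below are examples, not recommendations:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    retentionPolicy: # example retention periods per log type
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
# ...
----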

logging/log_storage/modules Symbolic link

@@ -0,0 +1 @@
../../modules/


@@ -0,0 +1 @@
../../snippets/


@@ -19,7 +19,7 @@ Use and configuration of the Kibana interface is beyond the scope of this docume
[NOTE]
====
The audit logs are not stored in the internal {product-title} Elasticsearch instance by default. To view the audit logs in Kibana, you must use the xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-log-store[Log Forwarding API] to configure a pipeline that uses the `default` output for audit logs.
The audit logs are not stored in the internal {product-title} Elasticsearch instance by default. To view the audit logs in Kibana, you must use the xref:../../logging/log_storage/logging-config-es-store.adoc#cluster-logging-elasticsearch-audit_logging-config-es-store[Log Forwarding API] to configure a pipeline that uses the `default` output for audit logs.
====
include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+1]


@@ -3,10 +3,8 @@
// * logging/cluster-logging.adoc
:_mod-docs-content-type: CONCEPT
[id="cluster-logging-about-logstore_{context}"]
= About the log store
By default, {product-title} uses link:https://www.elastic.co/products/elasticsearch[Elasticsearch (ES)] to store log data. Optionally you can use the Log Forwarder API to forward logs to an external store. Several types of store are supported, including fluentd, rsyslog, kafka and others.
[id="cluster-logging-about-es-logstore_{context}"]
= About the Elasticsearch log store
The {logging} Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system.


@@ -1,145 +0,0 @@
// Module is included in the following assemblies:
//cluster-logging-loki.adoc
:_mod-docs-content-type: PROCEDURE
[id="logging-loki-deploy_{context}"]
= Deploying the LokiStack
ifndef::openshift-rosa,openshift-dedicated[]
You can use the {product-title} web console to deploy the LokiStack.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
You can deploy the LokiStack by using the {product-title} {cluster-manager-url}.
endif::[]
.Prerequisites
* You have installed the Cluster Logging Operator.
* Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
.Procedure
. Install the Loki Operator:
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *OperatorHub*.
endif::[]
.. Choose *Loki Operator* from the list of available Operators, and click *Install*.
.. Under *Installation mode*, select *All namespaces on the cluster*.
.. Under *Installed Namespace*, select *openshift-operators-redhat*.
+
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators` namespace might contain Community Operators, which are untrusted and might publish a metric with the same name as
ifndef::openshift-rosa[]
an {product-title} metric, which would cause conflicts.
endif::[]
ifdef::openshift-rosa[]
a {product-title} metric, which would cause conflicts.
endif::[]
.. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
.. Select an *Update approval*.
+
* The *Automatic* strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* strategy requires a user with appropriate credentials to approve the Operator update.
.. Click *Install*.
.. Verify that you installed the Loki Operator. Visit the *Operators* → *Installed Operators* page and look for *Loki Operator*.
.. Ensure that *Loki Operator* is listed with *Status* as *Succeeded* in all the projects.
+
. Create a `Secret` YAML file that uses the `access_key_id` and `access_key_secret` fields to specify your AWS credentials and `bucketnames`, `endpoint` and `region` to define the object storage location. For example:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
name: logging-loki-s3
namespace: openshift-logging
stringData:
access_key_id: AKIAIOSFODNN7EXAMPLE
access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
bucketnames: s3-bucket-name
endpoint: https://s3.eu-central-1.amazonaws.com
region: eu-central-1
----
+
. Create the `LokiStack` custom resource (CR):
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki
namespace: openshift-logging
spec:
size: 1x.small
storage:
schemas:
- version: v12
effectiveDate: "2022-06-01"
secret:
name: logging-loki-s3
type: s3
storageClassName: gp3-csi <1>
tenants:
mode: openshift-logging
----
<1> Or `gp2-csi`.
. Apply the `LokiStack` CR:
+
[source,terminal]
----
$ oc apply -f logging-loki.yaml
----
. Create a `ClusterLogging` custom resource (CR):
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
managementState: Managed
logStore:
type: lokistack
lokistack:
name: logging-loki
collection:
type: vector
----
. Apply the `ClusterLogging` CR:
+
[source,terminal]
----
$ oc apply -f cr-lokistack.yaml
----
. Enable the Red Hat OpenShift Logging Console Plugin:
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *Installed Operators*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *Installed Operators*.
endif::[]
.. Select the *Red Hat OpenShift Logging* Operator.
.. Under Console plugin, click *Disabled*.
.. Select *Enable* and then *Save*. This change restarts the `openshift-console` pods.
.. After the pods restart, you will receive a notification that a web console update is available, prompting you to refresh.
.. After refreshing the web console, click *Observe* from the left main menu. A new option for *Logs* is available.


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * logging/config/cluster-logging-log-store.adoc
// * logging/log_storage/logging-config-es-store.adoc
:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-manual-rollout-rolling_{context}"]


@@ -1,127 +0,0 @@
// Module included in the following assemblies:
//
// *
:_mod-docs-content-type: PROCEDURE
[id="logging-deploy-loki-console_{context}"]
= Deploying the Loki Operator using the web console
You can use the {product-title} web console to install the Loki Operator.
.Prerequisites
* Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
.Procedure
To install the Loki Operator using the {product-title} web console:
. In the {product-title} web console, click *Operators* -> *OperatorHub*.
. Type *Loki* in the *Filter by keyword* field.
.. Choose *Loki Operator* from the list of available Operators, and click *Install*.
. Select *stable* or *stable-5.y* as the *Update channel*.
+
--
include::snippets/logging-stable-updates-snip.adoc[]
--
. Ensure that *All namespaces on the cluster* is selected under *Installation mode*.
. Ensure that *openshift-operators-redhat* is selected under *Installed Namespace*.
. Select *Enable Operator recommended cluster monitoring on this Namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
. Select an option for *Update approval*.
+
* The *Automatic* option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
+
* The *Manual* option requires a user with appropriate credentials to approve the Operator update.
. Click *Install*.
. Verify that the *LokiOperator* is installed by switching to the *Operators* → *Installed Operators* page.
.. Ensure that *LokiOperator* is listed with *Status* as *Succeeded* in all the projects.
+
. Create a `Secret` YAML file that uses the `access_key_id` and `access_key_secret` fields to specify your credentials and `bucketnames`, `endpoint`, and `region` to define the object storage location. AWS is used in the following example:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
name: logging-loki-s3
namespace: openshift-logging
stringData:
access_key_id: AKIAIOSFODNN7EXAMPLE
access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
bucketnames: s3-bucket-name
endpoint: https://s3.eu-central-1.amazonaws.com
region: eu-central-1
----
+
. Select *Create instance* under LokiStack on the *Details* tab. Then select *YAML view*. Paste in the following template, substituting values where appropriate.
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki <1>
namespace: openshift-logging
spec:
size: 1x.small <2>
storage:
schemas:
- version: v12
effectiveDate: '2022-06-01'
secret:
name: logging-loki-s3 <3>
type: s3 <4>
storageClassName: <storage_class_name> <5>
tenants:
mode: openshift-logging
----
<1> Name should be `logging-loki`.
<2> Select your Loki deployment size.
<3> Define the secret used for your log storage.
<4> Define corresponding storage type.
<5> Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed using `oc get storageclasses`.
+
.. Apply the configuration:
+
[source,terminal]
----
oc apply -f logging-loki.yaml
----
+
. Create or edit a `ClusterLogging` CR:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
managementState: Managed
logStore:
type: lokistack
lokistack:
name: logging-loki
collection:
type: vector
----
+
.. Apply the configuration:
+
[source,terminal]
----
oc apply -f cr-lokistack.yaml
----


@@ -10,27 +10,27 @@ To install and configure logging on your {product-title} cluster, additional Ope
.Prerequisites
* Supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
* You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation).
* You have administrator permissions.
.Procedure
. To install the {loki-op} in the {product-title} web console, click *Operators* -> *OperatorHub*.
. In the {product-title} web console *Administrator* perspective, navigate to *Operators* -> *OperatorHub*.
. Type {loki-op} in the filter by keyword box. Choose *Loki Operator* from the list of available Operators and click *Install*.
. Type {loki-op} in the *Filter by keyword* box. Click *Loki Operator* in the list of available Operators, then click *Install*.
+
[NOTE]
====
The Community Loki Operator is not supported by Red Hat.
====
. On the *Install Operator* page, for *Update channel* select *stable*.
. Select *stable* or *stable-x.y* as the *Update channel*.
+
[NOTE]
====
The `stable` channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to `stable-X` where `X` is the version of logging you have installed.
====
--
include::snippets/logging-stable-updates-snip.adoc[]
--
+
As the {loki-op} must be deployed to the global operator group namespace `openshift-operators-redhat`, *Installation mode* and *Installed Namespace* is already selected. If this namespace does not already exist, it is created for you.
The {loki-op} must be deployed to the global operator group namespace `openshift-operators-redhat`, so the *Installation mode* and *Installed Namespace* are already selected. If this namespace does not already exist, it is created for you.
. Select *Enable operator-recommended cluster monitoring on this namespace.*
+
@@ -40,7 +40,74 @@ This option sets the `openshift.io/cluster-monitoring: "true"` label in the Name
+
If the approval strategy in the subscription is set to *Automatic*, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to *Manual*, you must manually approve pending updates.
The Operators should now be available to all users and projects that use this cluster.
. Create a secret.
.. Navigate to *Workloads* -> *Secrets* in the *Administrator* perspective of the {product-title} web console.
.. In the *Create* drop-down menu, select *From YAML*.
.. Create a secret that uses the `access_key_id` and `access_key_secret` fields to specify your credentials and the `bucketnames`, `endpoint`, and `region` fields to define the object storage location. AWS is used in the following example:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
name: logging-loki-s3
namespace: openshift-logging
stringData:
access_key_id: AKIAIOSFODNN7EXAMPLE
access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
bucketnames: s3-bucket-name
endpoint: https://s3.eu-central-1.amazonaws.com
region: eu-central-1
----
. Create a `LokiStack` custom resource (CR).
.. Navigate to *Operators* -> *Installed Operators*. Click the *All instances* tab.
.. Use the *Create new* drop-down menu to select *LokiStack*.
.. Select *YAML view*, then use the following template to create a `LokiStack` CR:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
name: logging-loki <1>
namespace: openshift-logging
spec:
size: 1x.small <2>
storage:
schemas:
- version: v12
effectiveDate: '2022-06-01'
secret:
name: logging-loki-s3 <3>
type: s3 <4>
storageClassName: <storage_class_name> <5>
tenants:
mode: openshift-logging
----
<1> Use the name `logging-loki`.
<2> Select your Loki deployment size.
<3> Specify the secret used for your log storage.
<4> Specify the corresponding storage type.
<5> Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the `oc get storageclasses` command.
. Create or edit the `ClusterLogging` CR to specify using the Loki log store:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
name: instance
namespace: openshift-logging
spec:
managementState: Managed
logStore:
type: lokistack
lokistack:
name: logging-loki
# ...
----
.Verification
@@ -48,7 +115,6 @@ The Operators should now be available to all users and projects that use this cl
. Make sure the *openshift-logging* project is selected.
. In the *Status* column, verify that you see green checkmarks with *InstallSucceeded* and the text *Up to date*.
[NOTE]
====
An Operator might display a `Failed` status before the installation finishes. If the Operator install completes with an `InstallSucceeded` message, refresh the page.


@@ -11,7 +11,7 @@ The Loki Operator integrates a gateway that implements multi-tenancy and authent
[NOTE]
====
The Loki Operator can also be used for xref:../logging/cluster-logging-loki.adoc#cluster-logging-loki[Logging with the LokiStack]. The Network Observability Operator requires a dedicated LokiStack separate from Logging.
The Loki Operator can also be used for xref:../logging/log_storage/cluster-logging-loki.adoc#cluster-logging-loki[configuring the LokiStack log store]. The Network Observability Operator requires a dedicated LokiStack separate from the {logging}.
====
include::modules/network-observability-without-loki.adoc[leveloffset=+1]
@@ -30,7 +30,7 @@ include::modules/network-observability-lokistack-create.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../logging/cluster-logging-loki.adoc#logging-creating-new-group-cluster-admin-user-role_cluster-logging-loki[Creating a new group for the cluster-admin user role]
* xref:../logging/log_storage/cluster-logging-loki.adoc#logging-creating-new-group-cluster-admin-user-role_cluster-logging-loki[Creating a new group for the cluster-admin user role]
include::modules/loki-deployment-sizing.adoc[leveloffset=+2]
include::modules/network-observability-lokistack-ingestion-query.adoc[leveloffset=+2]
View File
@@ -83,5 +83,4 @@ include::modules/deploy-red-hat-openshift-container-storage.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="admission-plug-ins-additional-resources"]
== Additional resources
* xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-storage_cluster-logging-log-store[Configuring persistent storage for the log store]
* xref:../logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store]
View File
@@ -35,5 +35,4 @@ include::modules/optimizing-storage-azure.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="admission-plug-ins-additional-resources"]
== Additional resources
* xref:../../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-storage_cluster-logging-log-store[Configuring persistent storage for the log store]
* xref:../../logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store]
View File
@@ -42,7 +42,7 @@ include::modules/ossm-jaeger-config-elasticsearch-v1x.adoc[leveloffset=+2]
include::modules/ossm-jaeger-config-es-cleaner-v1x.adoc[leveloffset=+2]
ifdef::openshift-enterprise[]
For more information about configuring Elasticsearch with {product-title}, see xref:../../logging/config/cluster-logging-log-store.adoc[Configuring the log store].
For more information about configuring Elasticsearch with {product-title}, see xref:../../logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store].
endif::[]
include::modules/ossm-cr-threescale.adoc[leveloffset=+1]
View File
@@ -42,7 +42,7 @@ include::modules/ossm-installation-activities.adoc[leveloffset=+1]
ifdef::openshift-enterprise[]
[WARNING]
====
See xref:../../logging/config/cluster-logging-log-store.adoc[Configuring the log store] for details on configuring the default Jaeger parameters for Elasticsearch in a production environment.
See xref:../../logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store] for details on configuring the default Jaeger parameters for Elasticsearch in a production environment.
====
== Next steps
View File
@@ -42,7 +42,7 @@ include::modules/distr-tracing-config-sampling.adoc[leveloffset=+2]
include::modules/distr-tracing-config-storage.adoc[leveloffset=+2]
ifdef::openshift-enterprise,openshift-dedicated[]
For more information about configuring Elasticsearch with {product-title}, see xref:../../logging/config/cluster-logging-log-store.adoc[Configuring the log store] or xref:../../distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc[Configuring and deploying distributed tracing].
For more information about configuring Elasticsearch with {product-title}, see xref:../../logging/log_storage/logging-config-es-store.adoc#logging-config-es-store[Configuring the Elasticsearch log store] or xref:../../distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc[Configuring and deploying distributed tracing].
//TO DO For information about connecting to an external Elasticsearch instance, see xref:../../distr_tracing/distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc#jaeger-config-external-es_jaeger-deploying[Connecting to an existing Elasticsearch instance].
endif::[]
View File
@@ -6,7 +6,7 @@
//
// Text snippet included in the following modules:
//
// logging-deploy-RHOL-console.adoc
//
:_mod-docs-content-type: SNIPPET
View File
@@ -10,5 +10,5 @@
[NOTE]
====
The `stable` channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to `stable-X` where `X` is the version of logging you have installed.
The *stable* channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to *stable-x.y*, where `x.y` represents the major and minor version of logging you have installed. For example, *stable-5.7*.
====
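To make the channel choice concrete, a pinned Operator subscription might look like the following sketch. The package name `cluster-logging`, the catalog source `redhat-operators`, and the channel `stable-5.7` are assumptions; substitute the values that match your installed version and catalog:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable-5.7 # continues to receive updates for the 5.7 release stream
  installPlanApproval: Automatic
  name: cluster-logging # assumed Operator package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----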
View File
@@ -69,8 +69,8 @@ include::modules/virt-loki-log-queries.adoc[leveloffset=+3]
[role="_additional-resources"]
[id="additional-resources_{context}"]
==== Additional resources for LokiStack and LogQL
* xref:../../logging/cluster-logging-loki.adoc#about-logging-loki_cluster-logging-loki[About the LokiStack]
* xref:../../logging/cluster-logging-loki.adoc#logging-loki-deploy_cluster-logging-loki[Deploying the LokiStack] on {product-title}
* xref:../../logging/log_storage/about-log-storage.adoc#about-log-storage[About log storage]
* xref:../../logging/log_storage/installing-log-storage.adoc#cluster-logging-loki-deploy_installing-log-storage[Deploying the LokiStack]
* link:https://grafana.com/docs/loki/latest/logql/log_queries/[LogQL log queries] in the Grafana documentation
[id="troubleshooting-data-volumes_{context}"]