mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS-3397: Logging book port to OSD/ROSA

Eric Ponvelle
2022-11-22 10:43:57 -05:00
parent 8bb67b5247
commit eb06ca9aeb
48 changed files with 522 additions and 285 deletions


@@ -251,6 +251,79 @@ Topics:
- Name: Configuring custom domains for applications
File: osd-config-custom-domains-applications
---
Name: Logging
Dir: logging
Distros: openshift-dedicated
Topics:
- Name: Release notes
File: cluster-logging-release-notes
- Name: About Logging
File: cluster-logging
- Name: Installing Logging
File: cluster-logging-deploying
- Name: Accessing the service logs
File: sd-accessing-the-service-logs
- Name: Configuring your Logging deployment
Dir: config
Topics:
- Name: About the Cluster Logging custom resource
File: cluster-logging-configuring-cr
- Name: Configuring the logging collector
File: cluster-logging-collector
- Name: Configuring the log store
File: cluster-logging-log-store
- Name: Configuring the log visualizer
File: cluster-logging-visualizer
- Name: Configuring Logging storage
File: cluster-logging-storage-considerations
- Name: Configuring CPU and memory limits for Logging components
File: cluster-logging-memory
- Name: Using tolerations to control Logging pod placement
File: cluster-logging-tolerations
- Name: Moving the Logging resources with node selectors
File: cluster-logging-moving-nodes
- Name: Configuring systemd-journald and Fluentd
File: cluster-logging-systemd
- Name: Maintenance and support
File: cluster-logging-maintenance-support
- Name: Logging with the LokiStack
File: cluster-logging-loki
- Name: Viewing logs for a specific resource
File: viewing-resource-logs
- Name: Viewing cluster logs in Kibana
File: cluster-logging-visualizer
Distros: openshift-dedicated
- Name: Forwarding logs to third party systems
File: cluster-logging-external
- Name: Enabling JSON logging
File: cluster-logging-enabling-json-logging
- Name: Collecting and storing Kubernetes events
File: cluster-logging-eventrouter
# - Name: Forwarding logs using ConfigMaps
# File: cluster-logging-external-configmap
# Distros: openshift-dedicated
- Name: Updating Logging
File: cluster-logging-upgrading
- Name: Viewing cluster dashboards
File: cluster-logging-dashboards
- Name: Troubleshooting Logging
Dir: troubleshooting
Topics:
- Name: Viewing Logging status
File: cluster-logging-cluster-status
- Name: Viewing the status of the log store
File: cluster-logging-log-store-status
- Name: Understanding Logging alerts
File: cluster-logging-alerts
- Name: Collecting logging data for Red Hat Support
File: cluster-logging-must-gather
- Name: Troubleshooting for Critical Alerts
File: cluster-logging-troubleshooting-for-critical-alerts
- Name: Uninstalling Logging
File: cluster-logging-uninstall
- Name: Exported fields
File: cluster-logging-exported-fields
---
Name: Serverless
Dir: serverless
Distros: openshift-dedicated


@@ -212,16 +212,6 @@ Topics:
Distros: openshift-rosa
- Name: About autoscaling nodes on a cluster
File: rosa-nodes-about-autoscaling-nodes
- Name: Logging
Dir: rosa_logging
Distros: openshift-rosa
Topics:
- Name: Accessing the service logs
File: rosa-accessing-the-service-logs
- Name: Installing the CloudWatch logging service
File: rosa-install-logging
- Name: Viewing cluster logs in the AWS Console
File: rosa-viewing-logs
- Name: Monitoring user-defined projects
Dir: rosa_monitoring
Distros: openshift-rosa
@@ -365,6 +355,79 @@ Topics:
# - Name: Using the internal registry
# File: rosa-using-internal-registry
---
Name: Logging
Dir: logging
Distros: openshift-rosa
Topics:
- Name: Release notes
File: cluster-logging-release-notes
- Name: About Logging
File: cluster-logging
- Name: Installing Logging
File: cluster-logging-deploying
- Name: Accessing the service logs
File: sd-accessing-the-service-logs
- Name: Viewing cluster logs in the AWS Console
File: rosa-viewing-logs
- Name: Configuring your Logging deployment
Dir: config
Topics:
- Name: About the Cluster Logging custom resource
File: cluster-logging-configuring-cr
- Name: Configuring the logging collector
File: cluster-logging-collector
- Name: Configuring the log store
File: cluster-logging-log-store
- Name: Configuring the log visualizer
File: cluster-logging-visualizer
- Name: Configuring Logging storage
File: cluster-logging-storage-considerations
- Name: Configuring CPU and memory limits for Logging components
File: cluster-logging-memory
- Name: Using tolerations to control Logging pod placement
File: cluster-logging-tolerations
- Name: Moving the Logging resources with node selectors
File: cluster-logging-moving-nodes
- Name: Configuring systemd-journald and Fluentd
File: cluster-logging-systemd
- Name: Maintenance and support
File: cluster-logging-maintenance-support
- Name: Logging with the LokiStack
File: cluster-logging-loki
- Name: Viewing logs for a specific resource
File: viewing-resource-logs
- Name: Viewing cluster logs in Kibana
File: cluster-logging-visualizer
- Name: Forwarding logs to third party systems
File: cluster-logging-external
- Name: Enabling JSON logging
File: cluster-logging-enabling-json-logging
- Name: Collecting and storing Kubernetes events
File: cluster-logging-eventrouter
# - Name: Forwarding logs using ConfigMaps
# File: cluster-logging-external-configmap
- Name: Updating Logging
File: cluster-logging-upgrading
- Name: Viewing cluster dashboards
File: cluster-logging-dashboards
- Name: Troubleshooting Logging
Dir: troubleshooting
Topics:
- Name: Viewing Logging status
File: cluster-logging-cluster-status
- Name: Viewing the status of the log store
File: cluster-logging-log-store-status
- Name: Understanding Logging alerts
File: cluster-logging-alerts
- Name: Collecting logging data for Red Hat Support
File: cluster-logging-must-gather
- Name: Troubleshooting for Critical Alerts
File: cluster-logging-troubleshooting-for-critical-alerts
- Name: Uninstalling Logging
File: cluster-logging-uninstall
- Name: Exported fields
File: cluster-logging-exported-fields
---
Name: Service Mesh
Dir: service_mesh
Distros: openshift-rosa


@@ -25,5 +25,5 @@ include::modules/deleting-service.adoc[leveloffset=+1]
ifdef::openshift-rosa[]
[role="_additional-resources"]
== Additional resources
* For information about the `cluster-logging-operator` and the AWS CloudWatch log forwarding service, see xref:../rosa_cluster_admin/rosa_logging/rosa-install-logging.adoc#rosa-install-logging[Install the logging add-on service]
* For information about the `cluster-logging-operator` and the AWS CloudWatch log forwarding service, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-cloudwatch_cluster-logging-external[Forwarding logs to Amazon CloudWatch]
endif::[]


@@ -16,7 +16,7 @@ include::modules/aws-cloudwatch.adoc[leveloffset=+1]
.Additional resources
* link:https://aws.amazon.com/cloudwatch/[Amazon CloudWatch product information]
* xref:../rosa_cluster_admin/rosa_logging/rosa-install-logging.adoc#rosa-install-logging[Installing the CloudWatch logging service]
* xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-cloudwatch_cluster-logging-external[Forwarding logs to Amazon CloudWatch]
include::modules/osd-rhoam.adoc[leveloffset=+1]


@@ -1,12 +1,20 @@
:_content-type: ASSEMBLY
:context: cluster-logging-dashboards
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="cluster-logging-dashboards"]
= Viewing cluster dashboards
include::_attributes/common-attributes.adoc[]
:context: cluster-logging-dashboards
toc::[]
The *Logging/Elasticsearch Nodes* and *Openshift Logging* dashboards in the {product-title} web console show in-depth details about your Elasticsearch instance and the individual Elasticsearch nodes that you can use to prevent and diagnose problems.
The *Logging/Elasticsearch Nodes* and *Openshift Logging* dashboards in the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}
endif::[]
contain in-depth details about your Elasticsearch instance and the individual Elasticsearch nodes that you can use to prevent and diagnose problems.
The *OpenShift Logging* dashboard contains charts that show details about your Elasticsearch instance at a cluster level, including cluster resources, garbage collection, shards in the cluster, and Fluentd statistics.


@@ -3,17 +3,21 @@
[id="cluster-logging-deploying"]
= Installing the {logging-title}
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
You can install the {logging-title} by deploying the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. The {logging} Operator creates and manages the components of the logging stack.
The process for deploying the {logging} to {product-title} involves:
The process for deploying the {logging} to {product-title}
ifdef::openshift-rosa[]
(ROSA)
endif::[]
involves:
* Reviewing the xref:../logging/config/cluster-logging-storage-considerations.adoc#cluster-logging-storage[{logging-uc} storage considerations].
* Installing the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].
* Installing the logging subsystem for {product-title} using xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[the web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[the CLI].
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
@@ -25,7 +29,14 @@ include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
ifdef::openshift-enterprise,openshift-origin[]
* xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-operators-from-operatorhub_olm-adding-operators-to-a-cluster[Installing Operators from the OperatorHub]
* xref:../logging/config/cluster-logging-collector.adoc#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-collector[Removing unused components if you do not use the default Elasticsearch log store]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/operators/admin/olm-adding-operators-to-cluster.html[Installing Operators from OperatorHub]
* link:https://docs.openshift.com/container-platform/latest/logging/config/cluster-logging-collector.html#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-collector[Removing unused components if you do not use the default Elasticsearch log store]
endif::[]
== Post-installation tasks
@@ -49,10 +60,16 @@ include::modules/cluster-logging-deploy-multitenant.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
ifdef::openshift-enterprise,openshift-origin[]
* xref:../networking/network_policy/about-network-policy.adoc[About network policy]
* xref:../networking/openshift_sdn/about-openshift-sdn.adoc[About the OpenShift SDN network plugin]
* xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc[About the OVN-Kubernetes network plugin]
* xref:../networking/openshift_sdn/about-openshift-sdn.adoc[About the OpenShift SDN default CNI network provider]
* xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc[About the OVN-Kubernetes default Container Network Interface (CNI) network provider]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/networking/network_policy/about-network-policy.html[About network policy]
* link:https://docs.openshift.com/container-platform/latest/networking/openshift_sdn/about-openshift-sdn.html[About the OpenShift SDN default CNI network provider]
* link:https://docs.openshift.com/container-platform/latest/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html[About the OVN-Kubernetes default Container Network Interface (CNI) network provider]
endif::[]
// include::modules/cluster-logging-deploy-memory.adoc[leveloffset=+1]


@@ -1,8 +1,9 @@
:_content-type: ASSEMBLY
:context: cluster-logging-exported-fields
[id="cluster-logging-exported-fields"]
= Log Record Fields
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cluster-logging-exported-fields
toc::[]


@@ -3,6 +3,7 @@
[id="cluster-logging-external"]
= Forwarding logs to external third-party logging systems
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
@@ -43,23 +44,32 @@ include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+
include::modules/cluster-logging-collector-log-forward-cloudwatch.adoc[leveloffset=+1]
[id="cluster-logging-collector-log-forward-sts-cloudwatch_{context}"]
=== Forwarding logs to Amazon CloudWatch from STS enabled clusters
For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request using the xref:../authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.adoc[Cloud Credential Operator(CCO)] utility `ccoctl`.
.Prerequisites
* {logging-title-uc}: 5.5 and later
For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the
ifdef::openshift-enterprise,openshift-origin[]
xref:../authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.adoc[Cloud Credential Operator (CCO)]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
link:https://docs.openshift.com/container-platform/latest/authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.html[Cloud Credential Operator (CCO)]
endif::[]
utility `ccoctl`.
[NOTE]
====
This feature is not supported by the vector collector.
====
.Creating an AWS credentials request
. Create a `CredentialsRequest` Custom Resource YAML using the template below:
.Prerequisites
* {logging-title-uc}: 5.5 and later
.Procedure
. Create a `CredentialsRequest` custom resource YAML by using the template below:
+
.CloudWatch Credentials Request Template
.CloudWatch credentials request template
[source,yaml]
----
apiVersion: cloudcredential.openshift.io/v1
@@ -92,7 +102,7 @@ spec:
+
[source,terminal]
----
ccoctl aws create-iam-roles \
$ ccoctl aws create-iam-roles \
--name=<name> \
--region=<aws_region> \
--credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \
@@ -104,7 +114,7 @@ ccoctl aws create-iam-roles \
+
[source,terminal]
----
oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml
$ oc apply -f output/manifests/openshift-logging-<your_role_name>-credentials.yaml
----
+
. Create or edit a `ClusterLogForwarder` custom resource:
@@ -151,31 +161,11 @@ spec:
<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<11> Specify the name of the output to use when forwarding logs with this pipeline.
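The hunk elides the CR body itself; for orientation only, a minimal `ClusterLogForwarder` that forwards to CloudWatch from an STS-enabled cluster might look like the following sketch (the output name, AWS region, and `cw-sts-secret` secret name are illustrative placeholders, not part of this commit):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                # the Operator acts on a CR named "instance"
  namespace: openshift-logging
spec:
  outputs:
  - name: cw                    # referenced by the pipeline below
    type: cloudwatch
    cloudwatch:
      groupBy: logType          # one log group per log type
      region: us-east-2         # placeholder AWS region
    secret:
      name: cw-sts-secret       # secret holding the CloudWatch role credentials
  pipelines:
  - name: to-cloudwatch
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - cw
----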
[role="_additional-resources"]
.Additional resources
* link:https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html[AWS STS API Reference]
==== Creating a secret for AWS CloudWatch with an existing AWS role
If you have an existing role for AWS, you can create a secret for AWS with STS using the `oc create secret --from-literal` command.
[source,terminal]
----
oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions
----
.Example Secret
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
namespace: openshift-logging
name: my-secret-name
stringData:
role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions
----
include::modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc[leveloffset=+2]
include::modules/cluster-logging-collector-log-forward-loki.adoc[leveloffset=+1]
@@ -185,6 +175,7 @@ include::modules/cluster-logging-troubleshooting-loki-entry-out-of-order-errors.
.Additional resources
* xref:../logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields[Log Record Fields].
* link:https://grafana.com/docs/loki/latest/configuration/[Configuring Loki server]
include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1]
@@ -196,6 +187,11 @@ include::modules/cluster-logging-collector-log-forward-logs-from-application-pod
[role="_additional-resources"]
.Additional resources
ifdef::openshift-enterprise,openshift-origin[]
* xref:../networking/ovn_kubernetes_network_provider/logging-network-policy.adoc#logging-network-policy[Logging for egress firewall and network policy rules]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/networking/ovn_kubernetes_network_provider/logging-network-policy.html#logging-network-policy[Logging for egress firewall and network policy rules]
endif::[]
include::modules/cluster-logging-troubleshooting-log-forwarding.adoc[leveloffset=+1]


@@ -1028,7 +1028,7 @@ From {product-title} 4.6 to the present, forwarding logs by using the following
Instead, use the following non-legacy methods:
* xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-fluentd_cluster-logging-external[Forwarding logs using the Fluentd fohttps://www.redhat.com/security/data/cve/CVE-2021-22922.htmlrward protocol]
* xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-fluentd_cluster-logging-external[Forwarding logs using the Fluentd forward protocol]
* xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-syslog_cluster-logging-external[Forwarding logs using the syslog protocol]


@@ -17,5 +17,9 @@ include::modules/cluster-logging-uninstall.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
ifdef::openshift-enterprise,openshift-origin[]
* xref:../storage/understanding-persistent-storage.adoc#reclaim-manual_understanding-persistent-storage[Reclaiming a persistent volume manually]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/storage/understanding-persistent-storage.html#reclaim-manual_understanding-persistent-storage[Reclaiming a persistent volume manually]
endif::[]


@@ -1,17 +1,14 @@
:_content-type: ASSEMBLY
:context: cluster-logging
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="cluster-logging"]
= Understanding the {logging-title}
include::_attributes/common-attributes.adoc[]
:context: cluster-logging
toc::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
As a cluster administrator, you can deploy the {logging} to
aggregate all the logs from your {product-title} cluster, such as node system audit logs, application container logs, and infrastructure logs.
The {logging} aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].
ifdef::openshift-enterprise,openshift-rosa,openshift-dedicated,openshift-webscale,openshift-origin[]
As a cluster administrator, you can deploy the {logging} to aggregate all the logs from your {product-title} cluster, such as node system audit logs, application container logs, and infrastructure logs. The {logging} aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].
The {logging} aggregates the following types of logs:
@@ -30,10 +27,16 @@ endif::[]
// modules required to cover the user story. You can also include other
// assemblies.
ifdef::openshift-rosa,openshift-dedicated[]
include::modules/cluster-logging-cloudwatch.adoc[leveloffset=+1]
.Next steps
* See xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-cloudwatch_cluster-logging-external[Forwarding logs to Amazon CloudWatch] for instructions.
endif::[]
include::modules/logging-common-terms.adoc[leveloffset=+1]
include::modules/cluster-logging-about.adoc[leveloffset=+1]
For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Configuring the log collector].
For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploying[Installing the {logging-title}].
include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2]
@@ -43,7 +46,7 @@ For information, see xref:../logging/cluster-logging-eventrouter.adoc#cluster-lo
include::modules/cluster-logging-update-logging.adoc[leveloffset=+2]
For information, see xref:../logging/cluster-logging-upgrading.html#cluster-logging-upgrading[About updating {product-title} Logging].
For information, see xref:../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading[Updating OpenShift Logging].
include::modules/cluster-logging-view-cluster-dashboards.adoc[leveloffset=+2]
@@ -53,7 +56,7 @@ include::modules/cluster-logging-troubleshoot-logging.adoc[leveloffset=+2]
include::modules/cluster-logging-Uninstall-logging.adoc[leveloffset=+2]
For information, see xref:../logging/cluster-logging-uninstall.adoc#cluster-logging-uninstall[About uninstalling {product-title} Logging].
For information, see xref:../logging/cluster-logging-uninstall.adoc#cluster-logging-uninstall_cluster-logging-uninstall[Uninstalling OpenShift Logging].
include::modules/cluster-logging-export-fields.adoc[leveloffset=+2]
@@ -63,7 +66,7 @@ include::modules/cluster-logging-about-components.adoc[leveloffset=+2]
include::modules/cluster-logging-about-collector.adoc[leveloffset=+2]
For information, see xref:../logging/config/cluster-logging-collector.adoc#cluster-logging-collector[Configuring the log collector].
For information, see xref:../logging/config/cluster-logging-collector.adoc#cluster-logging-collector[Configuring the logging collector].
include::modules/cluster-logging-about-logstore.adoc[leveloffset=+2]


@@ -3,6 +3,7 @@
[id="cluster-logging-collector"]
= Configuring the logging collector
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
@@ -10,6 +11,7 @@ toc::[]
You can configure the CPU and memory limits for the log collector and xref:../../logging/config/cluster-logging-moving-nodes.adoc#cluster-logging-moving[move the log collector pods to specific nodes]. All supported modifications to the log collector can be performed through the `spec.collection.log.fluentd` stanza in the `ClusterLogging` custom resource (CR).
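A minimal sketch of that stanza follows; the resource values are illustrative, and the field names assume the `ClusterLogging` v1 schema, in which the collector settings sit under `spec.collection.logs.fluentd`:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: fluentd
      fluentd:
        resources:              # CPU and memory limits for the collector pods
          limits:
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi
----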
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other


@@ -1,8 +1,9 @@
:_content-type: ASSEMBLY
:context: cluster-logging-store
[id="cluster-logging-store"]
= Configuring the log store
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: cluster-logging-store
toc::[]


@@ -96,4 +96,9 @@ include::modules/cluster-logging-collector-tolerations.adoc[leveloffset=+1]
[id="cluster-logging-tolerations-addtl-resources"]
== Additional resources
ifdef::openshift-enterprise,openshift-origin[]
* xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations[Controlling pod placement using node taints].
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
* link:https://docs.openshift.com/container-platform/latest/nodes/scheduling/nodes-scheduler-taints-tolerations.html#nodes-scheduler-taints-tolerations[Controlling pod placement using node taints].
endif::[]


@@ -0,0 +1,10 @@
:_content-type: ASSEMBLY
[id="rosa-viewing-logs"]
= Viewing cluster logs in the AWS Console
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: rosa-viewing-logs
toc::[]
You can view forwarded cluster logs in the AWS console.
include::modules/rosa-view-cloudwatch-logs.adoc[leveloffset=+1]


@@ -0,0 +1,33 @@
:_content-type: ASSEMBLY
[id="sd-accessing-the-service-logs"]
= Accessing the service logs for {product-title} clusters
include::_attributes/attributes-openshift-dedicated.adoc[]
:context: sd-accessing-the-service-logs
toc::[]
[role="_abstract"]
You can view the service logs for your {product-title}
ifdef::openshift-rosa[]
(ROSA)
endif::[]
clusters by using the {cluster-manager-first}. The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//You can view the service logs for your {product-title} (ROSA) clusters by using {cluster-manager-first} or the {cluster-manager} CLI (`ocm`). The service logs detail cluster events such as load balancer quota updates and scheduled maintenance upgrades. The logs also show cluster resource changes such as the addition or deletion of users, groups, and identity providers.
Additionally, you can add notification contacts for
ifdef::openshift-rosa[]
a ROSA
endif::[]
ifdef::openshift-dedicated[]
an {product-title}
endif::[]
cluster. Subscribed users receive emails about cluster events that require customer action, known cluster incidents, upgrade maintenance, and other topics.
// Commented out while the OpenShift Cluster Manager CLI is in Developer Preview:
//include::modules/viewing-the-service-logs.adoc[leveloffset=+1]
//include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+2]
//include::modules/viewing-the-service-logs-cli.adoc[leveloffset=+2]
include::modules/viewing-the-service-logs-ocm.adoc[leveloffset=+1]
include::modules/adding-cluster-notification-contacts.adoc[leveloffset=+1]


@@ -3,12 +3,17 @@
[id="cluster-logging-alerts"]
= Understanding {logging} alerts
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
All of the logging collector alerts are listed on the Alerting UI of the {product-title} web console.
All of the logging collector alerts are listed on the Alerting UI of the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}.
endif::[]
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
@@ -20,7 +25,13 @@ include::modules/cluster-logging-collector-alerts-viewing.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For more information on the Alerting UI, see xref:../../monitoring/managing-alerts.html#managing-alerts[Managing alerts].
* For more information on the Alerting UI, see
ifdef::openshift-enterprise,openshift-origin[]
xref:../../monitoring/managing-alerts.adoc#managing-alerts[Managing alerts].
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
link:https://docs.openshift.com/container-platform/latest/monitoring/managing-alerts.html#managing-alerts[Managing alerts].
endif::[]
include::modules/cluster-logging-collector-alerts.adoc[leveloffset=+1]
include::modules/cluster-logging-elasticsearch-rules.adoc[leveloffset=+1]


@@ -3,12 +3,20 @@
[id="cluster-logging-must-gather"]
= Collecting logging data for Red Hat Support
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-openshift-dedicated.adoc[]
toc::[]
When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
The xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[`must-gather` tool] enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the {logging} components.
The
ifdef::openshift-enterprise,openshift-origin[]
xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[`must-gather` tool]
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
link:https://docs.openshift.com/container-platform/latest/support/gathering-cluster-data.html#gathering-cluster-data[`must-gather` tool]
endif::[]
enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the {logging} components.
For prompt support, supply diagnostic information for both {product-title} and OpenShift Logging.


@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc
// * logging/sd-accessing-the-service-logs.adoc
:_content-type: PROCEDURE
[id="adding-cluster-notification-contacts_{context}"]


@@ -14,7 +14,7 @@ The following example shows a typical custom resource for the {logging}.
[id="efk-logging-configuring-about-sample_{context}"]
.Sample `ClusterLogging` custom resource (CR)
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
ifdef::openshift-enterprise,openshift-rosa,openshift-dedicated,openshift-webscale,openshift-origin[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"


@@ -12,15 +12,9 @@
= About deploying the {logging-title}
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
{product-title} cluster administrators can deploy the {logging} using
the {product-title} web console or CLI to install the OpenShift Elasticsearch
Operator and Red Hat OpenShift Logging Operator. When the Operators are installed, you create
a `ClusterLogging` custom resource (CR) to schedule {logging} pods and
other resources necessary to support the {logging}. The Operators are
responsible for deploying, upgrading, and maintaining the {logging}.
{product-title} cluster administrators can deploy the {logging} using the {product-title} web console or CLI to install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. When the Operators are installed, you create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support the {logging}. The Operators are responsible for deploying, upgrading, and maintaining the {logging}.
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
The `ClusterLogging` CR defines a complete {logging} environment that includes all the components
of the logging stack to collect, store and visualize logs. The Red Hat OpenShift Logging Operator watches the {logging} CR and adjusts the logging deployment accordingly.
The `ClusterLogging` CR defines a complete {logging} environment that includes all the components of the logging stack to collect, store and visualize logs. The Red Hat OpenShift Logging Operator watches the {logging} CR and adjusts the logging deployment accordingly.
Administrators and application developers can view the logs of the projects for which they have view access.

View File

@@ -0,0 +1,17 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging.adoc
//
// This module uses conditionalized paragraphs so that the module
// can be re-used in associated products.
:_content-type: CONCEPT
[id="cluster-logging-cloudwatch_{context}"]
= CloudWatch recommendation for {product-title}
Red Hat recommends that you use the AWS CloudWatch solution for your logging needs.
[id="cluster-logging-requirements-explained_{context}"]
== Logging requirements
Hosting your own logging stack requires a large amount of compute resources and storage, which might depend on your cloud service quota. The compute resource requirements can start at 48 GB or more, while the storage requirement can be as large as 1600 GB or more. The logging stack runs on your worker nodes, which reduces the resources available for your workloads. With these considerations, hosting your own logging stack increases your cluster operating costs.

View File

@@ -6,7 +6,14 @@
[id="cluster-logging-collector-alerts-viewing_{context}"]
= Viewing logging collector alerts
Alerts are shown in the {product-title} web console, on the *Alerts* tab of the Alerting UI. Alerts are in one of the following states:
Alerts are shown in the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console,
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url},
endif::[]
on the *Alerts* tab of the Alerting UI. Alerts are in one of the following states:
* *Firing*. The alert condition is true for the duration of the timeout. Click the *Options* menu at the end of the firing alert to view more information or silence the alert.
* *Pending*. The alert condition is currently true, but the timeout has not been reached.

View File

@@ -6,7 +6,14 @@
[id="cluster-logging-collector-alerts_{context}"]
= About logging collector alerts
The following alerts are generated by the logging collector. You can view these alerts in the {product-title} web console, on the *Alerts* page of the Alerting UI.
The following alerts are generated by the logging collector. You can view these alerts in the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}
endif::[]
on the *Alerts* page of the Alerting UI.
.Fluentd Prometheus alerts
[cols="2,2,2,1",options="header"]

View File

@@ -84,7 +84,14 @@ $ oc create -f <file-name>.yaml
Here, you see an example `ClusterLogForwarder` custom resource (CR) and the log data that it outputs to Amazon CloudWatch.
Suppose that you are running an {product-title} cluster named `mycluster`. The following command returns the cluster's `infrastructureName`, which you will use to compose `aws` commands later on:
Suppose that you are running
ifndef::openshift-rosa[]
an {product-title} cluster
endif::[]
ifdef::openshift-rosa[]
a ROSA cluster
endif::[]
named `mycluster`. The following command returns the cluster's `infrastructureName`, which you will use to compose `aws` commands later on:
[source,terminal]
----
@@ -104,7 +111,6 @@ My life is my message
...
----
You can look up the UUID of the `app` namespace where the `busybox` pod runs:
[source,terminal]
@@ -281,4 +287,4 @@ $ aws --output json logs describe-log-groups | jq .logGroups[].logGroupName
"mycluster-7977k.infrastructure"
----
The `groupBy` field affects the application log group only. It does not affect the `audit` and `infrastructure` log groups.
The `groupBy` field affects the application log group only. It does not affect the `audit` and `infrastructure` log groups.
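As a sketch of where the `groupBy` field sits, a CloudWatch output in the `ClusterLogForwarder` CR might look like the following. The output name, region, and secret name here are illustrative, not required values:

[source,yaml]
----
outputs:
  - name: cw                 # illustrative output name
    type: cloudwatch
    cloudwatch:
      groupBy: logType       # other options: namespaceName, namespaceUUID
      region: us-east-1      # illustrative region
    secret:
      name: cw-secret        # illustrative secret name
----

With `groupBy: logType`, application, infrastructure, and audit logs each get their own log group; the `namespaceName` and `namespaceUUID` options split only the application log group further.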

View File

@@ -0,0 +1,30 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-external.adoc
//
:_content-type: PROCEDURE
[id="cluster-logging-collector-log-forward-secret-cloudwatch_{context}"]
== Creating a secret for AWS CloudWatch with an existing AWS role
If you have an existing role for AWS, you can create a secret for AWS with STS by using the `oc create secret --from-literal` command.
.Procedure
* In the CLI, enter the following to generate a secret for AWS:
+
[source,terminal]
----
$ oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions
----
+
.Example Secret
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
namespace: openshift-logging
  name: cw-sts-secret
stringData:
role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions
----

View File

@@ -5,14 +5,24 @@
[id="cluster-logging-dashboards-access_{context}"]
= Accessing the Elasticsearch and OpenShift Logging dashboards
You can view the *Logging/Elasticsearch Nodes* and *OpenShift Logging* dashboards in the {product-title} web console.
You can view the *Logging/Elasticsearch Nodes* and *OpenShift Logging* dashboards in the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}.
endif::[]
.Procedure
To launch the dashboards:
ifndef::openshift-rosa,openshift-dedicated[]
. In the {product-title} web console, click *Observe* -> *Dashboards*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
. In the {product-title} {hybrid-console}, click *Observe* -> *Dashboards*.
endif::[]
. On the *Dashboards* page, select *Logging/Elasticsearch Nodes* or *OpenShift Logging* from the *Dashboard* menu.
+

View File

@@ -44,7 +44,14 @@ metadata:
labels:
openshift.io/cluster-monitoring: "true" <2>
----
<1> You must specify the `openshift-operators-redhat` namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the `openshift-operators-redhat` namespace and not the `openshift-operators` namespace. The `openshift-operators` namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an {product-title} metric, which would cause conflicts.
<1> You must specify the `openshift-operators-redhat` namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the `openshift-operators-redhat` namespace and not the `openshift-operators` namespace. The `openshift-operators` namespace might contain community Operators, which are untrusted and could publish a metric with the same name as
ifdef::openshift-rosa[]
a ROSA
endif::[]
ifdef::openshift-dedicated[]
an {product-title}
endif::[]
metric, which would cause conflicts.
<2> String. You must specify this label as shown to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
.. Create the namespace:
@@ -132,7 +139,7 @@ metadata:
name: "elasticsearch-operator"
namespace: "openshift-operators-redhat" <1>
spec:
channel: "stable-5.1" <2>
channel: "stable-5.5" <2>
installPlanApproval: "Automatic" <3>
source: "redhat-operators" <4>
sourceNamespace: "openshift-marketplace"
@@ -179,14 +186,14 @@ $ oc get csv --all-namespaces
[source,terminal]
----
NAMESPACE NAME DISPLAY VERSION REPLACES PHASE
default elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
kube-node-lease elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
kube-public elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
kube-system elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
openshift-apiserver-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
openshift-apiserver elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
openshift-authentication-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
openshift-authentication elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded
default elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
kube-node-lease elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
kube-public elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
kube-system elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
openshift-apiserver-operator elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
openshift-apiserver elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
openshift-authentication-operator elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
openshift-authentication elasticsearch-operator.5.5.0-202007012112.p0 OpenShift Elasticsearch Operator 5.5.0-202007012112.p0 Succeeded
...
----
+
@@ -282,10 +289,9 @@ openshift-logging clusterlogging.5.1.0-202
[NOTE]
====
This default OpenShift Logging configuration should support a wide array of environments. Review the topics on tuning and
configuring {logging} components for information on modifications you can make to your OpenShift Logging cluster.
configuring {logging} components for information about modifications you can make to your OpenShift Logging cluster.
====
+
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
@@ -328,7 +334,7 @@ spec:
collection:
logs:
type: "fluentd" <10>
fluentd: {}
fluentd: {}
----
<1> The name must be `instance`.
<2> The OpenShift Logging management state. In some cases, if you change the OpenShift Logging defaults, you must set this to `Unmanaged`.

View File

@@ -6,11 +6,16 @@
[id="cluster-logging-deploy-console_{context}"]
= Installing the {logging-title} using the web console
ifndef::openshift-rosa,openshift-dedicated[]
You can use the {product-title} web console to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
You can install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators by using the {product-title} {cluster-manager-url}.
endif::[]
[NOTE]
====
If you do not want to use the default Elasticsearch log store, you can remove the internal Elasticsearch `logStore` and Kibana `visualization` components from the `ClusterLogging` custom resource (CR). Removing these components is optional but saves resources. For more information, see xref:../logging/config/cluster-logging-collector.adoc#cluster-logging-removing-unused-components-if-no-elasticsearch_cluster-logging-collector[Removing unused components if you do not use the default Elasticsearch log store].
If you do not want to use the default Elasticsearch log store, you can remove the internal Elasticsearch `logStore` and Kibana `visualization` components from the `ClusterLogging` custom resource (CR). Removing these components is optional but saves resources. For more information, see the additional resources for this section.
====
.Prerequisites
@@ -33,11 +38,21 @@ endif::[]
.Procedure
To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the {product-title} web console:
ifndef::openshift-rosa,openshift-dedicated[]
To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator by using the {product-title} web console:
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator by using the {product-title} {cluster-manager-url}:
endif::[]
. Install the OpenShift Elasticsearch Operator:
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *OperatorHub*.
endif::[]
.. Choose *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*.
@@ -45,16 +60,18 @@ To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Op
.. Ensure that *openshift-operators-redhat* is selected under *Installed Namespace*.
+
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators`
namespace might contain Community Operators, which are untrusted and could publish
a metric with the same name as an {product-title} metric, which would cause
conflicts.
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators` namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as
ifdef::openshift-rosa[]
a ROSA
endif::[]
ifdef::openshift-dedicated[]
an {product-title}
endif::[]
metric, which would cause conflicts.
.. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object.
You must select this option to ensure that cluster monitoring
scrapes the `openshift-operators-redhat` namespace.
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
.. Select *stable-5.x* as the *Update Channel*.
@@ -82,9 +99,7 @@ scrapes the `openshift-operators-redhat` namespace.
.. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object.
You must select this option to ensure that cluster monitoring
scrapes the `openshift-logging` namespace.
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-logging` namespace.
.. Select *stable-5.x* as the *Update Channel*.
@@ -102,10 +117,8 @@ scrapes the `openshift-logging` namespace.
+
If the Operator does not appear as installed, to troubleshoot further:
+
* Switch to the *Operators* → *Installed Operators* page and inspect
the *Status* column for any errors or failures.
* Switch to the *Workloads* → *Pods* page and check the logs in any pods in the
`openshift-logging` project that are reporting issues.
* Switch to the *Operators* → *Installed Operators* page and inspect the *Status* column for any errors or failures.
* Switch to the *Workloads* → *Pods* page and check the logs in any pods in the `openshift-logging` project that are reporting issues.
. Create an OpenShift Logging instance:
@@ -220,6 +233,7 @@ The number of primary shards for the index templates is equal to the number of E
You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+
* cluster-logging-operator-cb795f8dc-xkckc
* collector-pb2f8
* elasticsearch-cdm-b3nqzchd-1-5c6797-67kfz
* elasticsearch-cdm-b3nqzchd-2-6657f4-wtprv
* elasticsearch-cdm-b3nqzchd-3-588c65-clg7g
@@ -227,7 +241,6 @@ You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and K
* fluentd-9z7kk
* fluentd-br7r2
* fluentd-fn2sb
* fluentd-pb2f8
* fluentd-zqgqx
* kibana-7fb4fd4cc9-bvt4p
endif::[]

View File

@@ -16,7 +16,9 @@ OpenShift SDN has three modes:
network policy:: This is the default mode. If no policy is defined, it allows all traffic. However, if a user defines a policy, they typically start by denying all traffic and then adding exceptions. This process might break applications that are running in different projects. Therefore, explicitly configure the policy to allow traffic to egress from one logging-related project to the other.
ifdef::openshift-enterprise,openshift-origin[]
multitenant:: This mode enforces network isolation. You must join the two logging-related projects to allow traffic between them.
endif::[]
subnet:: This mode allows all traffic. It does not enforce network isolation. No action is needed.
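In network policy mode, one way to allow traffic between two logging-related projects is a `NetworkPolicy` object like the following. This is a sketch only: the policy name is illustrative, and the `kubernetes.io/metadata.name` label is an assumption that holds on clusters where the API server applies the default namespace labels:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-logging   # illustrative name
  namespace: openshift-operators-redhat
spec:
  podSelector: {}                      # applies to all pods in this project
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-logging
----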

View File

@@ -1,8 +1,7 @@
// Module is included in the following assemblies:
//cluster-logging-loki.adoc
:_content-type: REFERENCE
[id="logging-feature-ref_{context}"]
id="cluster-logging-about-vector"]
[id="cluster-logging-about-vector_{context}"]
= About Vector
Vector is a log collector offered as an alternative to Fluentd for the {logging}.

View File

@@ -65,4 +65,4 @@ This configuration might significantly increase the number of shards on the clus
====
.Additional Resources
link:https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[Kubernetes Annotations]
* link:https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[Kubernetes Annotations]

View File

@@ -3,7 +3,13 @@
:_content-type: PROCEDURE
[id="logging-loki-deploy_{context}"]
= Deploying the LokiStack
ifndef::openshift-rosa,openshift-dedicated[]
You can use the {product-title} web console to deploy the LokiStack.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
You can deploy the LokiStack by using the {product-title} {cluster-manager-url}.
endif::[]
.Prerequisites
@@ -14,7 +20,12 @@ You can use the {product-title} web console to deploy the LokiStack.
. Install the `LokiOperator` Operator:
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *OperatorHub*.
endif::[]
.. Choose *LokiOperator* from the list of available Operators, and click *Install*.
@@ -22,15 +33,17 @@ You can use the {product-title} web console to deploy the LokiStack.
.. Under *Installed Namespace*, select *openshift-operators-redhat*.
+
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators`
namespace might contain Community Operators, which are untrusted and might publish
a metric with the same name as an {product-title} metric, which would cause
conflicts.
You must specify the `openshift-operators-redhat` namespace. The `openshift-operators` namespace might contain Community Operators, which are untrusted and might publish a metric with the same name as
ifndef::openshift-rosa[]
an {product-title} metric, which would cause conflicts.
endif::[]
ifdef::openshift-rosa[]
a {product-title} metric, which would cause conflicts.
endif::[]
.. Select *Enable operator recommended cluster monitoring on this namespace*.
+
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object.
You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
This option sets the `openshift.io/cluster-monitoring: "true"` label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the `openshift-operators-redhat` namespace.
.. Select an *Approval Strategy*.
+
@@ -118,7 +131,12 @@ oc apply -f cr-lokistack.yaml
----
+
. Enable the Red Hat OpenShift Logging Console Plugin:
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *Installed Operators*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *Installed Operators*.
endif::[]
.. Select the *Red Hat OpenShift Logging* Operator.
.. Under *Console plugin*, click *Disabled*.
.. Select *Enable* and then *Save*. This change restarts the `openshift-console` pods.

View File

@@ -10,8 +10,8 @@ In Logging 5.6, Fluentd is deprecated and is planned to be removed in a future r
[id="openshift-logging-5-6-enhancements_{context}"]
== Enhancements
* With this update, Logging is compliant with {product-title}
xref:../security/tls-security-profiles.adoc[cluster-wide cryptographic policies]. (link:https://issues.redhat.com/browse/LOG-895[LOG-895])
* With this update, Logging is compliant with {product-title} cluster-wide cryptographic policies.
(link:https://issues.redhat.com/browse/LOG-895[LOG-895])
* With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. (link:https://issues.redhat.com/browse/LOG-2695[LOG-2695])

View File

@@ -19,7 +19,14 @@ Deleting the `ClusterLogging` CR does not remove the persistent volume claims (P
To remove OpenShift Logging:
. Use the {product-title} web console to remove the `ClusterLogging` CR:
. Use the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}
endif::[]
to remove the `ClusterLogging` CR:
.. Switch to the *Administration* -> *Custom Resource Definitions* page.

View File

@@ -29,7 +29,12 @@ If you update the Operators in the wrong order, Kibana does not update and the K
. Update the OpenShift Elasticsearch Operator:
.. From the web console, click *Operators* -> *Installed Operators*.
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *Installed Operators*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *Installed Operators*.
endif::[]
.. Select the `openshift-operators-redhat` project.
@@ -47,7 +52,12 @@ If you update the Operators in the wrong order, Kibana does not update and the K
. Update the Red Hat OpenShift Logging Operator:
.. From the web console, click *Operators* -> *Installed Operators*.
ifndef::openshift-rosa,openshift-dedicated[]
.. In the {product-title} web console, click *Operators* -> *Installed Operators*.
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
.. In the {hybrid-console}, click *Operators* -> *Installed Operators*.
endif::[]
.. Select the `openshift-logging` project.

View File

@@ -81,4 +81,7 @@ toleration::
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints.
web console::
A user interface (UI) to manage {product-title}.
A user interface (UI) to manage {product-title}.
ifdef::openshift-rosa,openshift-dedicated[]
The web console for {product-title} can be found at link:https://console.redhat.com/openshift[https://console.redhat.com/openshift].
endif::[]

View File

@@ -4,9 +4,23 @@
= Elasticsearch cluster status
[role="_abstract"]
A dashboard in the *Observe* section of the {product-title} web console displays the status of the Elasticsearch cluster.
A dashboard in the *Observe* section of the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}
endif::[]
displays the status of the Elasticsearch cluster.
To get the status of the OpenShift Elasticsearch cluster, visit the dashboard in the *Observe* section of the {product-title} web console at
To get the status of the OpenShift Elasticsearch cluster, visit the dashboard in the *Observe* section of the
ifndef::openshift-rosa,openshift-dedicated[]
{product-title} web console
endif::[]
ifdef::openshift-rosa,openshift-dedicated[]
{cluster-manager-url}
endif::[]
at
`<cluster_url>/monitoring/dashboards/grafana-dashboard-cluster-logging`.
.Elasticsearch status fields

View File

@@ -1,107 +0,0 @@
// Module included in the following assemblies:
//
// logging/rosa-install-logging.adoc
:_content-type: PROCEDURE
[id="rosa-install-logging-addon_{context}"]
= Install the logging add-on service
{product-title} (ROSA) provides logging through the `cluster-logging-operator` add-on. This add-on service offers an optional application log forwarding solution based on AWS CloudWatch. This logging solution can be installed after the ROSA cluster is provisioned.
.Procedure
. Enter the following command:
+
[source,terminal]
----
$ rosa install addon cluster-logging-operator --cluster=<cluster_name> --interactive
----
+
For `<cluster_name>`, enter the name of your cluster.
. When prompted, accept the default `yes` to install the `cluster-logging-operator`.
. When prompted, accept the default `yes` to install the optional Amazon CloudWatch log forwarding add-on or enter `no` to decline the installation of this add-on.
+
[NOTE]
====
It is not necessary to install the AWS CloudWatch service when you install the `cluster-logging-operator`. You can install the AWS CloudWatch service at any time through the {cluster-manager} console from the cluster's *Add-ons* tab.
====
. For the collection of applications, infrastructure, and audit logs, accept the default values or change them as needed:
+
* *Applications logs*: Lets the Operator collect application logs, which includes everything that is _not_ deployed in the openshift-*, kube-*, and default namespaces. Default: `yes`
* *Infrastructure logs*: Lets the Operator collect logs from OpenShift Container Platform, Kubernetes, and some nodes. Default: `yes`
* *Audit logs*: Type `yes` to let the Operator collect node logs related to security audits. By default, Red Hat stores audit logs outside the cluster through a separate mechanism that does not rely on the Cluster Logging Operator. For more information about default audit logging, see the ROSA Service Definition. Default: `no`
. To use the default cluster region for Amazon CloudWatch, leave the `CloudWatch region` value empty.
+
.Example output
[source,terminal]
----
? Are you sure you want to install add-on 'cluster-logging-operator' on cluster '<cluster_name>'? Yes
? Use AWS CloudWatch (optional): Yes
? Collect Applications logs (optional): Yes
? Collect Infrastructure logs (optional): Yes
? Collect Audit logs (optional): No
? CloudWatch region (optional):
I: Add-on 'cluster-logging-operator' is now installing. To check the status run 'rosa list addons --cluster=<cluster_name>'
----
[NOTE]
====
The installation can take approximately 10 minutes to complete.
====
.Verification steps
. To verify the logging installation status, enter the following command:
+
[source,terminal]
----
$ rosa list addons --cluster=<cluster_name>
----
. To verify which pods are deployed by `cluster-logging-operator` and their state of readiness:
.. Log in to the `oc` CLI using `cluster-admin` credentials:
+
[source,terminal]
----
$ oc login https://api.mycluster.abwp.s1.example.org:6443 \
  --username cluster-admin \
  --password <password>
----
.. Enter the following command to get information about the pods for the default project. Alternatively, you can specify a different project.
+
[source,terminal]
----
$ oc get pods -n openshift-logging
----
+
.Example output
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-<pod_ID>        2/2     Running   0          7m1s
fluentd-4mnwp 1/1 Running 0 6m3s
fluentd-6xt25 1/1 Running 0 6m3s
fluentd-fqjhv 1/1 Running 0 6m3s
fluentd-gcvrg 1/1 Running 0 6m3s
fluentd-vpwrt 1/1 Running 0 6m3s
----
. Optional: To get information about the `clusterlogging` instance, enter the following command:
+
[source,terminal]
----
$ oc get clusterlogging -n openshift-logging
----
. Optional: To get information about `clusterlogforwarders` instances, enter the following command:
+
[source,terminal]
----
$ oc get clusterlogforwarders -n openshift-logging
----

View File

@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc
// * logging/sd-accessing-the-service-logs.adoc
:_content-type: PROCEDURE
[id="viewing-the-service-logs-cli_{context}"]

View File

@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc
// * logging/sd-accessing-the-service-logs.adoc
:_content-type: PROCEDURE
[id="viewing-the-service-logs-ocm_{context}"]

View File

@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc
// * rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc
// * logging/sd-accessing-the-service-logs.adoc
:_content-type: PROCEDURE
[id="viewing-the-service-logs_{context}"]

View File

@@ -64,5 +64,5 @@ ifdef::openshift-dedicated[]
* For steps to add cluster notification contacts, see xref:../osd_cluster_admin/osd_logging/osd-accessing-the-service-logs.adoc#adding-cluster-notification-contacts_osd-accessing-the-service-logs[Adding cluster notification contacts].
endif::openshift-dedicated[]
ifdef::openshift-rosa[]
* For steps to add cluster notification contacts, see xref:../rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc#adding-cluster-notification-contacts_rosa-accessing-the-service-logs[Adding cluster notification contacts].
* For steps to add cluster notification contacts, see xref:../logging/sd-accessing-the-service-logs.adoc#adding-cluster-notification-contacts_sd-accessing-the-service-logs[Adding cluster notification contacts].
endif::openshift-rosa[]

View File

@@ -1,32 +0,0 @@
:_content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="rosa-install-logging"]
= Installing logging add-on services
:context: rosa-install-logging
toc::[]
This section describes how to install the logging add-on and Amazon Web Services (AWS) CloudWatch log forwarding add-on services on {product-title} (ROSA).
The AWS CloudWatch log forwarding service on ROSA has the following approximate log throughput rates. Message rates greater than these can result in dropped log messages.
.Approximate log throughput rates
[cols="30,70"]
|===
|Message size (bytes) |Maximum expected rate (messages/second/node)
|512
|1,000
|1,024
|650
|2,048
|450
|===
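The table's rates can be translated into per-node bandwidth to help with capacity planning. A quick back-of-the-envelope calculation, assuming messages arrive at the table's maximum expected rates:

```python
# Approximate per-node log bandwidth implied by the throughput table above.
rates = {512: 1000, 1024: 650, 2048: 450}  # message size (bytes) -> max msgs/sec/node

for size, rate in rates.items():
    mib_per_sec = size * rate / (1024 * 1024)  # bytes/sec converted to MiB/sec
    print(f"{size} B x {rate} msg/s/node ~ {mib_per_sec:.2f} MiB/s per node")
```

In other words, sustained forwarding above roughly 0.5 to 0.9 MiB/s per node (depending on message size) risks dropped log messages.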
include::modules/rosa-install-logging-addon.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_adding-service"]
== Additional resources
* xref:../../adding_service_cluster/adding-service.adoc#adding-service[Adding services to your cluster]

View File

@@ -76,7 +76,6 @@ include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1]
* xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console]
* xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes]
* xref:../rosa_cluster_admin/rosa_monitoring/rosa-configuring-the-monitoring-stack.adoc#rosa-configuring-the-monitoring-stack[Configuring the monitoring stack]
* xref:../rosa_cluster_admin/rosa_logging/rosa-install-logging.adoc#rosa-install-logging[Installing logging add-on services]
[role="_additional-resources"]
[id="additional-resources_{context}"]

View File

@@ -160,7 +160,6 @@ include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1]
* xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console]
* xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes]
* xref:../rosa_cluster_admin/rosa_monitoring/rosa-configuring-the-monitoring-stack.adoc#rosa-configuring-the-monitoring-stack[Configuring the monitoring stack]
* xref:../rosa_cluster_admin/rosa_logging/rosa-install-logging.adoc#rosa-install-logging[Installing logging add-on services]
[role="_additional-resources"]
[id="additional-resources_{context}"]

View File

@@ -22,4 +22,4 @@ include::modules/sd-planning-considerations.adoc[leveloffset=+1]
[id="additional-resources_rosa-limits-scalability"]
== Additional resources
* xref:../rosa_cluster_admin/rosa_logging/rosa-accessing-the-service-logs.adoc#rosa-accessing-the-service-logs[Accessing the service logs for ROSA clusters]
* xref:../logging/sd-accessing-the-service-logs.adoc#sd-accessing-the-service-logs[Accessing the service logs for ROSA clusters]

View File

@@ -340,7 +340,7 @@ ifdef::openshift-dedicated[]
While cluster maintenance and host configuration is performed by the Red Hat Site Reliability Engineering (SRE) team, other ongoing tasks on your {product-title} {product-version} cluster can be performed by {product-title} cluster administrators. As an {product-title} cluster administrator, the documentation helps you:
- *Manage Dedicated Administrators*: Grant or revoke permissions to `dedicated-admin` users.
- *Work with Logging*: Learn about OpenShift Logging and configure the logging add-on services.
- *Work with Logging*: Learn about OpenShift Logging and configure the Cluster Logging Operator.
- *Monitor clusters*: Learn to use the Web UI to access monitoring dashboards.
- *Manage nodes*: Learn to manage nodes, including configuring machine pools and autoscaling.
endif::[]