From aeff31d6657d2575adbcd12317e8911ea98601ff Mon Sep 17 00:00:00 2001 From: Ashleigh Brennan Date: Fri, 10 Nov 2023 09:48:23 -0600 Subject: [PATCH] OBSDOCS-603: Update attributes and additional improvements --- logging/cluster-logging-uninstall.adoc | 2 +- logging/logging-common-terms.adoc | 56 +++++++++++++------------- modules/cluster-logging-uninstall.adoc | 25 +++++------- 3 files changed, 40 insertions(+), 43 deletions(-) diff --git a/logging/cluster-logging-uninstall.adoc b/logging/cluster-logging-uninstall.adoc index a8587e4d32..754923a710 100644 --- a/logging/cluster-logging-uninstall.adoc +++ b/logging/cluster-logging-uninstall.adoc @@ -1,7 +1,7 @@ :_mod-docs-content-type: ASSEMBLY :context: cluster-logging-uninstall [id="cluster-logging-uninstall"] -= Uninstalling OpenShift Logging += Uninstalling Logging include::_attributes/common-attributes.adoc[] toc::[] diff --git a/logging/logging-common-terms.adoc b/logging/logging-common-terms.adoc index 8c561977b8..4be23e219d 100644 --- a/logging/logging-common-terms.adoc +++ b/logging/logging-common-terms.adoc @@ -7,37 +7,37 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] -This glossary defines common terms that are used in the {product-title} Logging content. +This glossary defines common terms that are used in the {logging} documentation. -annotation:: +Annotation:: You can use annotations to attach metadata to objects. -Cluster Logging Operator (CLO):: -The Cluster Logging Operator provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs. +{clo}:: +The {clo} provides a set of APIs to control the collection and forwarding of application, infrastructure, and audit logs. -Custom Resource (CR):: -A CR is an extension of the Kubernetes API. To configure {product-title} Logging and log forwarding, you can customize the `ClusterLogging` and the `ClusterLogForwarder` custom resources. 
+Custom resource (CR):: +A CR is an extension of the Kubernetes API. To configure the {logging} and log forwarding, you can customize the `ClusterLogging` and the `ClusterLogForwarder` custom resources. -event router:: -The event router is a pod that watches {product-title} events. It collects logs by using {product-title} Logging. +Event router:: +The event router is a pod that watches {product-title} events. It collects logs by using the {logging}. Fluentd:: Fluentd is a log collector that resides on each {product-title} node. It gathers application, infrastructure, and audit logs and forwards them to different outputs. -garbage collection:: +Garbage collection:: Garbage collection is the process of cleaning up cluster resources, such as terminated containers and images that are not referenced by any running pods. Elasticsearch:: -Elasticsearch is a distributed search and analytics engine. {product-title} uses ELasticsearch as a default log store for {product-title} Logging. +Elasticsearch is a distributed search and analytics engine. {product-title} uses Elasticsearch as a default log store for the {logging}. -Elasticsearch Operator:: -Elasticsearch operator is used to run Elasticsearch cluster on top of {product-title}. The Elasticsearch Operator provides self-service for the Elasticsearch cluster operations and is used by {product-title} Logging. +{es-op}:: +The {es-op} is used to run an Elasticsearch cluster on {product-title}. The {es-op} provides self-service for the Elasticsearch cluster operations and is used by the {logging}. -indexing:: +Indexing:: Indexing is a data structure technique that is used to quickly locate and access data. Indexing optimizes the performance by minimizing the amount of disk access required when a query is processed. 
JSON logging::
-{product-title} Logging Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either {product-title} Logging-managed Elasticsearch or any other third-party system supported by the Log Forwarding API.
+The Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either the {logging}-managed Elasticsearch or any other third-party system supported by the Log Forwarding API.

Kibana::
Kibana is a browser-based console interface to query, discover, and visualize your Elasticsearch data through histograms, line graphs, and pie charts.
@@ -49,39 +49,39 @@ Labels::
Labels are key-value pairs that you can use to organize and select subsets of objects, such as a pod.

Logging::
-With {product-title} Logging you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them to a default log store, forward them to third party systems, and query and visualize the stored logs in the default log store.
+With the {logging}, you can aggregate application, infrastructure, and audit logs throughout your cluster. You can also store them in a default log store, forward them to third-party systems, and query and visualize the stored logs in the default log store.

-logging collector::
+Logging collector::
A logging collector collects logs from the cluster, formats them, and forwards them to the log store or third party systems.

-log store::
-A log store is used to store aggregated logs. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.
+Log store::
+A log store is used to store aggregated logs. You can use an internal log store or forward logs to external log stores.

-log visualizer::
-Log visualizer is the user interface (UI) component you can use to view information such as logs, graphs, charts, and other metrics. The current implementation is Kibana.
+Log visualizer::
+Log visualizer is the user interface (UI) component you can use to view information such as logs, graphs, charts, and other metrics.

-node::
+Node::
A node is a worker machine in the {product-title} cluster. A node is either a virtual machine (VM) or a physical machine.

Operators::
Operators are the preferred method of packaging, deploying, and managing a Kubernetes application in an {product-title} cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers.

-pod::
-A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node..
+Pod::
+A pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers and runs on a worker node.

Role-based access control (RBAC)::
RBAC is a key security control to ensure that cluster users and workloads have access only to resources required to execute their roles.

-shards::
-Elasticsearch organizes the log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards.
+Shards::
+Elasticsearch organizes log data from Fluentd into datastores, or indices, then subdivides each index into multiple pieces called shards.

-taint::
+Taint::
Taints ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node.

-toleration::
+Toleration::
You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints.

-web console::
+Web console::
A user interface (UI) to manage {product-title}.
ifdef::openshift-rosa,openshift-dedicated[]
The web console for {product-title} can be found at link:https://console.redhat.com/openshift[https://console.redhat.com/openshift].
diff --git a/modules/cluster-logging-uninstall.adoc b/modules/cluster-logging-uninstall.adoc index 6b14789f09..52964b5bcd 100644 --- a/modules/cluster-logging-uninstall.adoc +++ b/modules/cluster-logging-uninstall.adoc @@ -4,21 +4,18 @@ :_mod-docs-content-type: PROCEDURE [id="cluster-logging-uninstall_{context}"] -= Uninstalling the {logging-title} += Uninstalling the {logging} You can stop log aggregation by deleting the `ClusterLogging` custom resource (CR). After deleting the CR, there are other {logging} components that remain, which you can optionally remove. - Deleting the `ClusterLogging` CR does not remove the persistent volume claims (PVCs). To preserve or delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action. .Prerequisites -* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. +* The {clo} and {es-op} are installed. .Procedure -To remove OpenShift Logging: - . Use the ifndef::openshift-rosa,openshift-dedicated[] {product-title} web console @@ -46,15 +43,20 @@ endif::[] .. Click the Options menu {kebab} next to *Elasticsearch* and select *Delete Custom Resource Definition*. -. Optional: Remove the Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator: +. Optional: Remove the {clo} and {es-op}: .. Switch to the *Operators* -> *Installed Operators* page. -.. Click the Options menu {kebab} next to the Red Hat OpenShift Logging Operator and select *Uninstall Operator*. +.. Click the Options menu {kebab} next to the {clo} and select *Uninstall Operator*. -.. Click the Options menu {kebab} next to the OpenShift Elasticsearch Operator and select *Uninstall Operator*. +.. Click the Options menu {kebab} next to the {es-op} and select *Uninstall Operator*. -. Optional: Remove the OpenShift Logging and Elasticsearch projects. +. Optional: Remove the `openshift-logging` and `openshift-operators-redhat` projects. 
++ +[IMPORTANT] +==== +Do not delete the `openshift-operators-redhat` project if other global Operators are installed in this namespace. +==== .. Switch to the *Home* -> *Projects* page. @@ -63,11 +65,6 @@ endif::[] .. Confirm the deletion by typing `openshift-logging` in the dialog box and click *Delete*. .. Click the Options menu {kebab} next to the *openshift-operators-redhat* project and select *Delete Project*. -+ -[IMPORTANT] -==== -Do not delete the `openshift-operators-redhat` project if other global operators are installed in this namespace. -==== .. Confirm the deletion by typing `openshift-operators-redhat` in the dialog box and click *Delete*.