diff --git a/_topic_map.yml b/_topic_map.yml
index 6109893462..f3b7daccec 100644
--- a/_topic_map.yml
+++ b/_topic_map.yml
@@ -1453,6 +1453,8 @@ Topics:
     File: using-pods-in-a-privileged-security-context
   - Name: Securing webhooks with event listeners
     File: securing-webhooks-with-event-listeners
+  - Name: Viewing pipeline logs using the OpenShift Logging Operator
+    File: viewing-pipeline-logs-using-the-openshift-logging-operator
 - Name: GitOps
   Dir: gitops
   Distros: openshift-enterprise
diff --git a/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc b/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc
new file mode 100644
index 0000000000..10952f328f
--- /dev/null
+++ b/cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc
@@ -0,0 +1,31 @@
+[id="viewing-pipeline-logs-using-the-openshift-logging-operator"]
+= Viewing pipeline logs using the OpenShift Logging Operator
+include::modules/common-attributes.adoc[]
+include::modules/pipelines-document-attributes.adoc[]
+:context: viewing-pipeline-logs-using-the-openshift-logging-operator
+
+toc::[]
+
+The logs generated by pipeline runs, task runs, and event listeners are stored in their respective pods. It is useful to review and analyze logs for troubleshooting and audits.
+
+However, retaining the pods indefinitely leads to unnecessary resource consumption and cluttered namespaces.
+
+To eliminate any dependency on the pods for viewing pipeline logs, you can use the OpenShift Elasticsearch Operator and the OpenShift Logging Operator. These Operators help you to view pipeline logs by using the link:https://www.elastic.co/guide/en/kibana/6.8/connect-to-elasticsearch.html[Elasticsearch Kibana] stack, even after you have deleted the pods that contained the logs.
+
+[id="prerequisites_viewing-pipeline-logs-using-the-openshift-logging-operator"]
+== Prerequisites
+
+Before trying to view pipeline logs in a Kibana dashboard, ensure the following:
+
+* The steps are performed by a cluster administrator.
+* Logs for pipeline runs and task runs are available.
+* The OpenShift Elasticsearch Operator and the OpenShift Logging Operator are installed.
+
+include::modules/op-viewing-pipeline-logs-in-kibana.adoc[leveloffset=+1]
+
+[id="additional-resources_viewing-pipeline-logs-using-the-openshift-logging-operator"]
+== Additional resources
+
+* xref:../../logging/cluster-logging-deploying.adoc[Installing OpenShift Logging]
+* xref:../../logging/viewing-resource-logs.adoc[Viewing logs for a resource]
+* xref:../../logging/cluster-logging-visualizer.adoc[Viewing cluster logs by using Kibana]
diff --git a/images/filtered-messages.png b/images/filtered-messages.png
new file mode 100644
index 0000000000..a2849401fe
Binary files /dev/null and b/images/filtered-messages.png differ
diff --git a/images/grid.png b/images/grid.png
new file mode 100644
index 0000000000..57998b4e58
Binary files /dev/null and b/images/grid.png differ
diff --git a/images/not-placetools.png b/images/not-placetools.png
new file mode 100644
index 0000000000..cb70ff8c03
Binary files /dev/null and b/images/not-placetools.png differ
diff --git a/modules/op-viewing-pipeline-logs-in-kibana.adoc b/modules/op-viewing-pipeline-logs-in-kibana.adoc
new file mode 100644
index 0000000000..4854088086
--- /dev/null
+++ b/modules/op-viewing-pipeline-logs-in-kibana.adoc
@@ -0,0 +1,97 @@
+// Module included in the following assemblies:
+// cicd/pipelines/viewing-pipeline-logs-using-the-openshift-logging-operator.adoc
+//
+
+[id="op-viewing-pipeline-logs-in-kibana_{context}"]
+= Viewing pipeline logs in Kibana
+
+To view pipeline logs in the Kibana web console:
+
+.Procedure
+
+. Log in to the {product-title} web console as a cluster administrator.
+
+. In the top right of the menu bar, click the *grid* icon → *Observability* → *Logging*. The Kibana web console is displayed.
+
+. Create an index pattern:
+.. On the left navigation panel of the *Kibana* web console, click *Management*.
+.. Click *Create index pattern*.
+.. Under *Step 1 of 2: Define index pattern* → *Index pattern*, enter a *`pass:[*]`* pattern and click *Next Step*.
+.. Under *Step 2 of 2: Configure settings* → *Time filter field name*, select *@timestamp* from the drop-down menu, and click *Create index pattern*.
+
+. Add a filter:
+.. On the left navigation panel of the *Kibana* web console, click *Discover*.
+.. Click *Add a filter +* → *Edit Query DSL*.
++
+[NOTE]
+====
+* For each of the example filters that follow, edit the query and click *Save*.
+* The filters are applied one after another.
+====
++
+... Filter the containers related to pipelines:
++
+.Example query to filter pipelines containers
+[source,json]
+----
+{
+  "query": {
+    "match": {
+      "kubernetes.flat_labels": {
+        "query": "app_kubernetes_io/managed-by=tekton-pipelines",
+        "type": "phrase"
+      }
+    }
+  }
+}
+----
++
+... Filter all containers other than the `place-tools` container. This filter illustrates the use of the graphical drop-down menus as an alternative to editing the query DSL:
++
+.Example of filtering using the drop-down fields
+image::../../images/not-placetools.png[Not place-tools]
++
+... Filter `pipelinerun` in labels for highlighting:
++
+.Example query to filter `pipelinerun` in labels for highlighting
+[source,json]
+----
+{
+  "query": {
+    "match": {
+      "kubernetes.flat_labels": {
+        "query": "tekton_dev/pipelineRun=",
+        "type": "phrase"
+      }
+    }
+  }
+}
+----
++
+... Filter `pipeline` in labels for highlighting:
++
+.Example query to filter `pipeline` in labels for highlighting
+[source,json]
+----
+{
+  "query": {
+    "match": {
+      "kubernetes.flat_labels": {
+        "query": "tekton_dev/pipeline=",
+        "type": "phrase"
+      }
+    }
+  }
+}
+----
++
+.. From the *Available fields* list, select the following fields:
+* `kubernetes.flat_labels`
+* `message`
++
+Ensure that the selected fields are displayed under the *Selected fields* list.
++
+.. The logs are displayed under the *message* field.
++
+.Filtered messages
+image::../../images/filtered-messages.png[Filtered messages]
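Reviewer note on the three *Edit Query DSL* examples in the module above: they share one match-phrase shape and differ only in the `kubernetes.flat_labels` value, so it is easy to verify them (or generate further label filters) from a single template. The following sketch is illustrative only and is not part of the documented procedure; the helper name `label_phrase_filter` is an assumption, not an OpenShift or Elasticsearch API:

```python
import json

def label_phrase_filter(label_query: str) -> dict:
    """Build the Elasticsearch match-phrase query used by the Kibana
    filters above: match log documents whose kubernetes.flat_labels
    field contains the given label string."""
    return {
        "query": {
            "match": {
                "kubernetes.flat_labels": {
                    "query": label_query,
                    "type": "phrase",
                }
            }
        }
    }

# The three filters from the procedure, produced from one template:
pipelines_filter = label_phrase_filter("app_kubernetes_io/managed-by=tekton-pipelines")
pipelinerun_filter = label_phrase_filter("tekton_dev/pipelineRun=")
pipeline_filter = label_phrase_filter("tekton_dev/pipeline=")

print(json.dumps(pipelines_filter, indent=2))
```

Pasting the pretty-printed output of any of these into the *Edit Query DSL* box should reproduce the corresponding example in the module verbatim.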