mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00
This commit is contained in:
Max Leonov
2023-09-06 08:04:35 +02:00
committed by openshift-cherrypick-robot
parent d4a44c5876
commit 5c49300ede
108 changed files with 3373 additions and 729 deletions

View File

@@ -114,18 +114,20 @@ ifdef::openshift-origin[]
:CNVSubscriptionSpecName: community-kubevirt-hyperconverged
endif::[]
//distributed tracing
:DTProductName: Red Hat OpenShift distributed tracing
:DTShortName: distributed tracing
:DTProductVersion: 2.8
:JaegerName: Red Hat OpenShift distributed tracing platform
:JaegerShortName: distributed tracing platform
:JaegerVersion: 1.42.0
:DTProductName: Red Hat OpenShift distributed tracing platform
:DTShortName: distributed tracing platform
:DTProductVersion: 2.9
:JaegerName: Red Hat OpenShift distributed tracing platform (Jaeger)
:JaegerShortName: distributed tracing platform (Jaeger)
:JaegerVersion: 1.47.0
:OTELName: Red Hat OpenShift distributed tracing data collection
:OTELShortName: distributed tracing data collection
:OTELVersion: 0.74.0
:TempoName: Tempo Operator
:TempoShortName: Tempo
:TempoVersion: 0.1.0
:OTELOperator: Red Hat OpenShift distributed tracing data collection Operator
:OTELVersion: 0.81.0
:TempoName: Red Hat OpenShift distributed tracing platform (Tempo)
:TempoShortName: distributed tracing platform (Tempo)
:TempoOperator: Tempo Operator
:TempoVersion: 2.1.1
//logging
:logging-title: logging subsystem for Red Hat OpenShift
:logging-title-uc: Logging subsystem for Red Hat OpenShift


@@ -3596,25 +3596,68 @@ Dir: distr_tracing
Distros: openshift-enterprise
Topics:
- Name: Distributed tracing release notes
File: distributed-tracing-release-notes
Dir: distr_tracing_rn
Topics:
- Name: "2.9"
File: distr-tracing-rn-2-9
- Name: "2.8"
File: distr-tracing-rn-2-8
- Name: "2.7"
File: distr-tracing-rn-2-7
- Name: "2.6"
File: distr-tracing-rn-2-6
- Name: "2.5"
File: distr-tracing-rn-2-5
- Name: "2.4"
File: distr-tracing-rn-2-4
- Name: "2.3"
File: distr-tracing-rn-2-3
- Name: "2.2"
File: distr-tracing-rn-2-2
- Name: "2.1"
File: distr-tracing-rn-2-1
- Name: "2.0"
File: distr-tracing-rn-2-0
- Name: Distributed tracing architecture
Dir: distr_tracing_arch
Topics:
- Name: Distributed tracing architecture
File: distr-tracing-architecture
- Name: Distributed tracing installation
Dir: distr_tracing_install
- Name: Distributed tracing platform (Jaeger)
Dir: distr_tracing_jaeger
Topics:
- Name: Installing distributed tracing
File: distr-tracing-installing
- Name: Configuring the distributed tracing platform
File: distr-tracing-deploying-jaeger
- Name: Configuring distributed tracing data collection
File: distr-tracing-deploying-otel
- Name: Upgrading distributed tracing
File: distr-tracing-updating
- Name: Removing distributed tracing
File: distr-tracing-removing
- Name: Installation
File: distr-tracing-jaeger-installing
- Name: Configuration
File: distr-tracing-jaeger-configuring
- Name: Updating
File: distr-tracing-jaeger-updating
- Name: Removal
File: distr-tracing-jaeger-removing
- Name: Distributed tracing platform (Tempo)
Dir: distr_tracing_tempo
Topics:
- Name: Installation
File: distr-tracing-tempo-installing
- Name: Configuration
File: distr-tracing-tempo-configuring
- Name: Updating
File: distr-tracing-tempo-updating
- Name: Removal
File: distr-tracing-tempo-removing
- Name: Distributed tracing data collection (OpenTelemetry)
Dir: distr_tracing_otel
Topics:
- Name: Installation
File: distr-tracing-otel-installing
- Name: Configuration
File: distr-tracing-otel-configuring
- Name: Use
File: distr-tracing-otel-using
- Name: Migration
File: distr-tracing-otel-migrating
- Name: Removal
File: distr-tracing-otel-removing
---
Name: Virtualization
Dir: virt


@@ -1 +1 @@
../images/
../../images/


@@ -1 +1 @@
../modules/
../../modules/


@@ -1 +0,0 @@
../images/


@@ -1 +0,0 @@
../modules/


@@ -1,94 +0,0 @@
:_content-type: ASSEMBLY
[id="distr-tracing-deploying"]
= Configuring and deploying distributed tracing
include::_attributes/common-attributes.adoc[]
:context: deploying-distr-tracing-platform
toc::[]
The {JaegerName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {JaegerShortName} resources. You can either install the default configuration or modify the file to better suit your business requirements.
{JaegerName} has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a {JaegerShortName} instance the Operator uses this configuration file to create the objects necessary for the deployment.
.Jaeger custom resource file showing deployment strategy
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: MyConfigFile
spec:
strategy: production <1>
----
<1> The {JaegerName} Operator currently supports the following deployment strategies:
* *allInOne* (Default) - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage.
+
[NOTE]
====
In-memory storage is not persistent, which means that if the {JaegerShortName} instance shuts down, restarts, or is replaced, your trace data is lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the `production` or `streaming` strategy, which uses Elasticsearch as the default storage.
====
* *production* - The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience.
* *streaming* - The streaming strategy is designed to augment the production strategy by providing a streaming capability that sits between the Collector and the Elasticsearch backend storage. This reduces the pressure on the backend storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/using_amq_streams_on_openshift/index[AMQ Streams]/https://kafka.apache.org/documentation/[Kafka]).
+
[NOTE]
====
The streaming strategy requires an additional Red Hat subscription for AMQ Streams.
====
[NOTE]
====
The streaming deployment strategy is currently unsupported on {ibmzProductName}.
====
[NOTE]
====
There are two ways to install and use {DTProductName}: as part of a service mesh or as a standalone component. If you have installed {DTShortName} as part of {SMProductName}, you can perform basic configuration as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane], but for complete control you should configure a Jaeger CR and then xref:../../service_mesh/v2x/ossm-observability.adoc#ossm-config-external-jaeger_observability[reference your distributed tracing configuration file in the ServiceMeshControlPlane].
====
include::modules/distr-tracing-deploy-default.adoc[leveloffset=+1]
include::modules/distr-tracing-deploy-production-es.adoc[leveloffset=+1]
include::modules/distr-tracing-deploy-streaming.adoc[leveloffset=+1]
[id="validating-your-jaeger-deployment"]
== Validating your deployment
include::modules/distr-tracing-accessing-jaeger-console.adoc[leveloffset=+2]
[id="customizing-your-deployment"]
== Customizing your deployment
include::modules/distr-tracing-deployment-best-practices.adoc[leveloffset=+2]
ifdef::openshift-enterprise,openshift-dedicated[]
For information about configuring persistent storage, see xref:../../storage/understanding-persistent-storage.adoc[Understanding persistent storage] and the appropriate configuration topic for your chosen storage option.
endif::[]
include::modules/distr-tracing-config-default.adoc[leveloffset=+2]
include::modules/distr-tracing-config-jaeger-collector.adoc[leveloffset=+2]
//include::modules/distr-tracing-config-otel-collector.adoc[leveloffset=+2]
include::modules/distr-tracing-config-sampling.adoc[leveloffset=+2]
include::modules/distr-tracing-config-storage.adoc[leveloffset=+2]
include::modules/distr-tracing-config-query.adoc[leveloffset=+2]
include::modules/distr-tracing-config-ingester.adoc[leveloffset=+2]
[id="injecting-sidecars"]
== Injecting sidecars
{JaegerName} relies on a proxy sidecar within the application's pod to provide the agent. The {JaegerName} Operator can inject Agent sidecars into Deployment workloads. You can enable automatic sidecar injection or manage it manually.
include::modules/distr-tracing-sidecar-automatic.adoc[leveloffset=+2]
include::modules/distr-tracing-sidecar-manual.adoc[leveloffset=+2]


@@ -1,11 +0,0 @@
:_content-type: ASSEMBLY
[id="distr-tracing-deploying-otel"]
= Configuring and deploying distributed tracing data collection
include::_attributes/common-attributes.adoc[]
:context: deploying-distr-tracing-data-collection
toc::[]
The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELName} resources. You can either install the default configuration or modify the file to better suit your business requirements.
include::modules/distr-tracing-config-otel-collector.adoc[leveloffset=+1]


@@ -1 +0,0 @@
../images/


@@ -1 +0,0 @@
../modules/


@@ -0,0 +1,91 @@
:_content-type: ASSEMBLY
[id="distr-tracing-jaeger-configuring"]
= Configuring and deploying the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-jaeger-configuring
toc::[]
The {JaegerName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {JaegerShortName} resources. You can install the default configuration or modify the file.
If you have installed {DTShortName} as part of {SMProductName}, you can perform basic configuration as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane], but for complete control, you must configure a Jaeger CR and then xref:../../service_mesh/v2x/ossm-observability.adoc#ossm-config-external-jaeger_observability[reference your distributed tracing configuration file in the ServiceMeshControlPlane].
The {JaegerName} has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a {JaegerShortName} instance, the Operator uses this configuration file to create the objects necessary for the deployment.
.Jaeger custom resource file showing deployment strategy
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: MyConfigFile
spec:
strategy: production <1>
----
<1> Deployment strategy.
[id="supported-deployment-strategies"]
== Supported deployment strategies
The {JaegerName} Operator currently supports the following deployment strategies:
`allInOne`:: This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage.
+
[NOTE]
====
In-memory storage is not persistent, which means that if the {JaegerShortName} instance shuts down, restarts, or is replaced, your trace data is lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the `production` or `streaming` strategy, which uses Elasticsearch as the default storage.
====
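+
As a minimal sketch, a custom resource that selects this strategy might look as follows. The instance name `simplest` is illustrative only:
+
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest # illustrative name
spec:
  strategy: allInOne
----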
`production`:: The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience.
`streaming`:: The streaming strategy is designed to augment the production strategy by providing a streaming capability that sits between the Collector and the Elasticsearch backend storage. This reduces the pressure on the backend storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/using_amq_streams_on_openshift/index[AMQ Streams]/https://kafka.apache.org/documentation/[Kafka]).
+
[NOTE]
====
* The streaming strategy requires an additional Red Hat subscription for AMQ Streams.
* The streaming deployment strategy is currently unsupported on {ibmzProductName}.
====
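+
The following is a sketch of a custom resource that selects the streaming strategy. The topic name and the Kafka broker address are assumptions for illustration; use the values that match your AMQ Streams deployment:
+
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-streaming # illustrative name
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans # assumed topic name
          brokers: my-cluster-kafka-brokers.kafka:9092 # assumed AMQ Streams broker service
----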
include::modules/distr-tracing-deploy-default.adoc[leveloffset=+1]
include::modules/distr-tracing-deploy-production-es.adoc[leveloffset=+1]
include::modules/distr-tracing-deploy-streaming.adoc[leveloffset=+1]
[id="validating-your-jaeger-deployment"]
== Validating your deployment
include::modules/distr-tracing-accessing-jaeger-console.adoc[leveloffset=+2]
[id="customizing-your-deployment"]
== Customizing your deployment
include::modules/distr-tracing-deployment-best-practices.adoc[leveloffset=+2]
ifdef::openshift-enterprise,openshift-dedicated[]
For information about configuring persistent storage, see xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage] and the appropriate configuration topic for your chosen storage option.
endif::[]
include::modules/distr-tracing-config-default.adoc[leveloffset=+2]
include::modules/distr-tracing-config-jaeger-collector.adoc[leveloffset=+2]
include::modules/distr-tracing-config-sampling.adoc[leveloffset=+2]
include::modules/distr-tracing-config-storage.adoc[leveloffset=+2]
include::modules/distr-tracing-config-query.adoc[leveloffset=+2]
include::modules/distr-tracing-config-ingester.adoc[leveloffset=+2]
[id="injecting-sidecars"]
== Injecting sidecars
The {JaegerName} relies on a proxy sidecar within the application's pod to provide the Agent. The {JaegerName} Operator can inject Agent sidecars into deployment workloads. You can enable automatic sidecar injection or manage it manually.
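As a hedged sketch, automatic injection is typically enabled by annotating the `Deployment` object; the annotation shown below is the one used by the Jaeger Operator, and the application name is illustrative:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp # illustrative application name
  annotations:
    "sidecar.jaegertracing.io/inject": "true" # requests Agent sidecar injection
----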
include::modules/distr-tracing-sidecar-automatic.adoc[leveloffset=+2]
include::modules/distr-tracing-sidecar-manual.adoc[leveloffset=+2]


@@ -1,8 +1,8 @@
:_content-type: ASSEMBLY
[id="installing-distributed-tracing"]
= Installing distributed tracing
[id="dist-tracing-jaeger-installing"]
= Installing the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: install-distributed-tracing
:context: dist-tracing-jaeger-installing
toc::[]
@@ -12,6 +12,7 @@ You can install {DTProductName} on {product-title} in either of two ways:
* If you do not want to install a service mesh, you can use the {DTProductName} Operators to install {DTShortName} by itself. To install {DTProductName} without a service mesh, use the following instructions.
[id="prerequisites"]
== Prerequisites
Before you can install {DTProductName}, review the installation activities, and ensure that you meet the prerequisites:
@@ -25,7 +26,7 @@ Before you can install {DTProductName}, review the installation activities, and
** xref:../../installing/installing_aws/installing-aws-user-infra.adoc#installing-aws-user-infra[Install {product-title} {product-version} on user-provisioned AWS]
** xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[Install {product-title} {product-version} on bare metal]
** xref:../../installing/installing_vsphere/installing-vsphere.adoc#installing-vsphere[Install {product-title} {product-version} on vSphere]
* Install the version of the OpenShift CLI (`oc`) that matches your {product-title} version and add it to your path.
* Install the version of the `oc` CLI tool that matches your {product-title} version and add it to your path.
* An account with the `cluster-admin` role.
@@ -34,10 +35,3 @@ include::modules/distr-tracing-install-overview.adoc[leveloffset=+1]
include::modules/distr-tracing-install-elasticsearch.adoc[leveloffset=+1]
include::modules/distr-tracing-install-jaeger-operator.adoc[leveloffset=+1]
include::modules/distr-tracing-install-otel-operator.adoc[leveloffset=+1]
////
== Next steps
* xref:../../distr_tracing/distr_tracing_install/distr-tracing-deploying.adoc#deploying-distributed-tracing[Deploy {DTProductName}].
////


@@ -1,8 +1,8 @@
:_content-type: ASSEMBLY
[id="removing-distributed-tracing"]
= Removing distributed tracing
[id="dist-tracing-jaeger-removing"]
= Removing the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: removing-distributed-tracing
:context: dist-tracing-jaeger-removing
toc::[]
@@ -17,15 +17,11 @@ include::modules/distr-tracing-removing-instance.adoc[leveloffset=+1]
include::modules/distr-tracing-removing-instance-cli.adoc[leveloffset=+1]
[id="removing-distributed-tracing-operators"]
== Removing the {DTProductName} Operators
.Procedure
. Follow the instructions for xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster].
. Follow the instructions in xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster] to remove the {JaegerName} Operator.
* Remove the {JaegerName} Operator.
//* Remove the {OTELName} Operator.
* After the {JaegerName} Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator.
. Optional: After the {JaegerName} Operator has been removed, remove the OpenShift Elasticsearch Operator.


@@ -1,24 +1,25 @@
:_content-type: ASSEMBLY
[id="upgrading-distributed-tracing"]
= Upgrading distributed tracing
[id="dist-tracing-jaeger-updating"]
= Updating the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: upgrading-distributed-tracing
:context: dist-tracing-jaeger-updating
toc::[]
Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs by default in {product-title}.
OLM queries for available Operators as well as upgrades for installed Operators.
For more information about how {product-title} handles upgrades, see the xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager] documentation.
During an update, the {DTProductName} Operators upgrade the managed {DTShortName} instances to the version associated with the Operator. Whenever a new version of the {JaegerName} Operator is installed, all the {JaegerShortName} application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running {JaegerShortName} instances and upgrades them to 1.11 as well.
For specific instructions on how to update the OpenShift Elasticsearch Operator, see xref:../../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading_cluster-logging-upgrading[Updating OpenShift Logging].
include::modules/distr-tracing-change-operator-20.adoc[leveloffset=+1]
[IMPORTANT]
====
If you have not already updated your OpenShift Elasticsearch Operator as described in xref:../../logging/cluster-logging-upgrading.adoc[Updating OpenShift Logging] complete that update before updating your {JaegerName} Operator.
If you have not already updated your OpenShift Elasticsearch Operator as described in xref:../../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading_cluster-logging-upgrading[Updating OpenShift Logging], complete that update before updating your {JaegerName} Operator.
====
For instructions on how to update the Operator channel, see xref:../../operators/admin/olm-upgrading-operators.adoc[Updating installed Operators].
[role="_additional-resources"]
[id="additional-resources_dist-tracing-jaeger-updating"]
== Additional resources
* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
* xref:../../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading_cluster-logging-upgrading[Updating OpenShift Logging]


@@ -0,0 +1 @@
../../images/


@@ -0,0 +1 @@
../../modules/


@@ -0,0 +1,18 @@
:_content-type: ASSEMBLY
[id="distr-tracing-otel-configuring"]
= Configuring and deploying the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-otel-configuring
toc::[]
The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELShortName} resources. You can install the default configuration or modify the file.
include::modules/distr-tracing-otel-config-collector.adoc[leveloffset=+1]
include::modules/distr-tracing-otel-config-send-metrics-monitoring-stack.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_deploy-otel"]
== Additional resources
* xref:../../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]


@@ -0,0 +1,28 @@
:_content-type: ASSEMBLY
[id="install-distributed-tracing-otel"]
= Installing the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: install-distributed-tracing-otel
toc::[]
:FeatureName: The {OTELOperator}
include::snippets/technology-preview.adoc[leveloffset=+1]
Installing the {OTELShortName} involves the following steps:
. Installing the {OTELOperator}.
. Creating a namespace for an OpenTelemetry Collector instance.
. Creating an `OpenTelemetryCollector` custom resource to deploy the OpenTelemetry Collector instance.
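The steps above can be sketched with a minimal `OpenTelemetryCollector` custom resource. The instance name, namespace, and the choice of the `otlp` receiver and `logging` exporter are illustrative assumptions, not a prescribed configuration:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel # illustrative name
  namespace: observability # assumed namespace for the Collector instance
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
----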
include::modules/distr-tracing-otel-install-web-console.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_dist-tracing-otel-installing"]
== Additional resources
* xref:../../post_installation_configuration/preparing-for-users.adoc#creating-cluster-admin_post-install-preparing-for-users[Creating a cluster admin]
* link:https://operatorhub.io/[OperatorHub.io]
* xref:../../web_console/web-console.adoc#web-console[Accessing the web console]
* xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-from-operatorhub-using-web-console_olm-adding-operators-to-a-cluster[Installing from OperatorHub using the web console]
* xref:../../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Creating applications from installed Operators]
* xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]


@@ -0,0 +1,17 @@
:_content-type: ASSEMBLY
[id="dist-tracing-otel-migrating"]
= Migrating from the {JaegerShortName} to the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-otel-migrating
toc::[]
If you are already using {JaegerName} for your applications, you can migrate to the {OTELName}, which is based on the link:https://opentelemetry.io/[OpenTelemetry] open-source project.
The {OTELShortName} provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the {OTELShortName} can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications.
Migration from the {JaegerShortName} to the {OTELShortName} requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments.
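Because the OpenTelemetry Collector can ingest the Jaeger protocol, a hypothetical Collector configuration fragment for migration might enable a Jaeger receiver and forward spans over OTLP; the exporter endpoint below is an assumption for illustration:

[source,yaml]
----
receivers:
  jaeger:
    protocols:
      grpc: # accepts spans from existing Jaeger SDKs
exporters:
  otlp:
    endpoint: tempo-distributor.tracing.svc:4317 # assumed OTLP backend endpoint
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [otlp]
----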
include::modules/distr-tracing-otel-migrating-from-jaeger-with-sidecars.adoc[leveloffset=+1]
include::modules/distr-tracing-otel-migrating-from-jaeger-without-sidecars.adoc[leveloffset=+1]


@@ -0,0 +1,24 @@
:_content-type: ASSEMBLY
[id="dist-tracing-otel-removing"]
= Removing the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-otel-removing
toc::[]
The steps for removing the {OTELShortName} from an {product-title} cluster are as follows:
. Shut down all {OTELShortName} pods.
. Remove any OpenTelemetryCollector instances.
. Remove the {OTELOperator}.
include::modules/distr-tracing-otel-remove-web-console.adoc[leveloffset=+1]
include::modules/distr-tracing-otel-remove-cli.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_dist-tracing-otel-removing"]
== Additional resources
* xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster]
* xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]


@@ -0,0 +1,18 @@
:_content-type: ASSEMBLY
[id="distr-tracing-otel-temp"]
= Using the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-otel-temp
toc::[]
include::modules/distr-tracing-otel-forwarding.adoc[leveloffset=+1]
[id="distr-tracing-otel-send-traces-and-metrics-to-otel-collector_{context}"]
== Sending traces and metrics to the OpenTelemetry Collector
Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.
include::modules/distr-tracing-otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc[leveloffset=+2]
include::modules/distr-tracing-otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc[leveloffset=+2]


@@ -0,0 +1 @@
../../images/


@@ -0,0 +1 @@
../../modules/


@@ -0,0 +1 @@
../../_attributes/


@@ -0,0 +1,57 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-0"]
= Release notes for {DTProductName} 2.0
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-0
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-0"]
== Component versions in the {DTProductName} 2.0.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.28.0
|{OTELName}
|OpenTelemetry
|0.33.0
|===
[id="new-features-and-enhancements_distributed-tracing-rn-2-0"]
== New features and enhancements
This release introduces the following new features and enhancements:
* Rebrands Red Hat OpenShift Jaeger as the {DTProductName}.
* Updates {JaegerName} Operator to Jaeger 1.28. Going forward, the {DTProductName} will only support the `stable` Operator channel.
Channels for individual releases are no longer supported.
* Adds support for OpenTelemetry protocol (OTLP) to the Query service.
* Introduces a new distributed tracing icon that appears in the OperatorHub.
* Includes rolling updates to the documentation to support the name change and new features.
[id="technology-preview-features_distributed-tracing-rn-2-0"]
== Technology Preview features
* This release adds the {OTELName} as a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview], which you install using the {OTELName} Operator. {OTELName} is based on the link:https://opentelemetry.io/[OpenTelemetry] APIs and instrumentation. The {OTELName} includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the {DTProductName}. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor-agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling.
[id="bug-fixes_distributed-tracing-rn-2-0"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
//[id="known-issues_distributed-tracing-rn-2-0"]
//== Known issues
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]


@@ -0,0 +1,68 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-1"]
= Release notes for {DTProductName} 2.1
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-1
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-1"]
== Component versions in the {DTProductName} 2.1.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.29.1
|{OTELName}
|OpenTelemetry
|0.41.1
|===
[id="technology-preview-features_distributed-tracing-rn-2-1"]
== Technology Preview features
* This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the `ca_file` moves under `tls` in the custom resource, as shown in the following examples.
+
.CA file configuration for OpenTelemetry version 0.33
+
[source,yaml]
----
spec:
mode: deployment
config: |
exporters:
jaeger:
endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----
+
.CA file configuration for OpenTelemetry version 0.41.1
+
[source,yaml]
----
spec:
mode: deployment
config: |
exporters:
jaeger:
endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
tls:
ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----
[id="bug-fixes_distributed-tracing-rn-2-1"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
//[id="known-issues_distributed-tracing-rn-2-1"]
//== Known issues
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]


@@ -0,0 +1,23 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-2"]
= Release notes for {DTProductName} 2.2
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-2
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="technology-preview-features_distributed-tracing-rn-2-2"]
== Technology Preview features
* The unsupported OpenTelemetry Collector components included in the 2.1 release are removed.
[id="bug-fixes_distributed-tracing-rn-2-2"]
== Bug fixes
This release of the {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]


@@ -0,0 +1,56 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-3"]
= Release notes for {DTProductName} 2.3
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-3
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions-2-3-0_distributed-tracing-rn-2-3"]
== Component versions in the {DTProductName} 2.3.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.30.1
|{OTELName}
|OpenTelemetry
|0.44.0
|===
[id="component-versions-2-3-1_distributed-tracing-rn-2-3"]
== Component versions in the {DTProductName} 2.3.1
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.30.2
|{OTELName}
|OpenTelemetry
|0.44.1-1
|===
[id="new-features-and-enhancements_distributed-tracing-rn-2-3"]
== New features and enhancements
With this release, the {JaegerName} Operator is now installed to the `openshift-distributed-tracing` namespace by default. Before this update, the default installation had been in the `openshift-operators` namespace.
[id="bug-fixes_distributed-tracing-rn-2-3"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
//[id="known-issues_distributed-tracing-rn-2-3"]
//== Known issues
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1,53 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-4"]
= Release notes for {DTProductName} 2.4
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-4
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-4"]
== Component versions in the {DTProductName} 2.4
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.34.1
|{OTELName}
|OpenTelemetry
|0.49
|===
[id="new-features-and-enhancements_distributed-tracing-rn-2-4"]
== New features and enhancements
This release adds support for auto-provisioning certificates by using the Red Hat Elasticsearch Operator.
* Self-provisioning: the {JaegerName} Operator calls the Red Hat Elasticsearch Operator during installation to provision certificates.
+
[IMPORTANT]
====
When upgrading to the {DTProductName} 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing is down and unavailable for that period.
====
[id="technology-preview-features_distributed-tracing-rn-2-4"]
== Technology Preview features
* Creating the Elasticsearch instance and certificates first and then configuring the {JaegerShortName} to use the certificate is a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview] for this release.
[id="bug-fixes_distributed-tracing-rn-2-4"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
//[id="known-issues_distributed-tracing-rn-2-4"]
//== Known issues
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1,45 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-5"]
= Release notes for {DTProductName} 2.5
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-5
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-5"]
== Component versions in the {DTProductName} 2.5
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.36
|{OTELName}
|OpenTelemetry
|0.56
|===
[id="new-features-and-enhancements_distributed-tracing-rn-2-5"]
== New features and enhancements
This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the {JaegerName} Operator.
The Operator now automatically enables the OTLP ports:
* Port 4317 for the OTLP gRPC protocol.
* Port 4318 for the OTLP HTTP protocol.
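For example, a client or another collector that exports to these ports might use endpoint settings similar to the following sketch. The service name and namespace are placeholders for your Jaeger collector service, not prescribed values.

.Example OTLP exporter endpoints (sketch)
[source,yaml]
----
exporters:
  otlp:
    # OTLP gRPC port; replace the service name and <namespace> with your values
    endpoint: jaeger-production-collector-headless.<namespace>.svc:4317
  otlphttp:
    # OTLP HTTP port
    endpoint: http://jaeger-production-collector-headless.<namespace>.svc:4318
----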
This release also adds support for collecting Kubernetes resource attributes to the {OTELName} Operator.
[id="bug-fixes_distributed-tracing-rn-2-5"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1,34 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-6"]
= Release notes for {DTProductName} 2.6
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-6
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-6"]
== Component versions in the {DTProductName} 2.6
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.38
|{OTELName}
|OpenTelemetry
|0.60
|===
[id="bug-fixes_distributed-tracing-rn-2-6"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1,33 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-7"]
= Release notes for {DTProductName} 2.7
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-7
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-7"]
== Component versions in the {DTProductName} 2.7
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.39
|{OTELName}
|OpenTelemetry
|0.63.1
|===
[id="bug-fixes_distributed-tracing-rn-2-7"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1,63 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-8"]
= Release notes for {DTProductName} 2.8
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-8
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-8"]
== Component versions in the {DTProductName} 2.8
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.42
|{OTELName}
|OpenTelemetry
|0.74.0
|{TempoName}
|Tempo
|0.1.0
|===
[id="technology-preview-features_distributed-tracing-rn-2-8"]
== Technology Preview features
This release introduces support for the {TempoName} as a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview] feature for {DTProductName}.
:FeatureName: The {TempoName}
include::snippets/technology-preview.adoc[leveloffset=+1]
The feature uses version 0.1.0 of the {TempoOperator} and version 2.0.1 of the upstream Tempo components.
You can use the {TempoShortName} to replace Jaeger so that you can use S3-compatible storage instead of Elasticsearch.
Most users who use the {TempoShortName} instead of Jaeger will not notice any difference in functionality because the {TempoShortName} supports the same ingestion and query protocols as Jaeger and uses the same user interface.
If you enable this Technology Preview feature, note the following limitations of the current implementation:
* The {TempoShortName} currently does not support disconnected installations. (link:https://issues.redhat.com/browse/TRACING-3145[TRACING-3145])
* When you use the Jaeger user interface (UI) with the {TempoShortName}, the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])
Expanded support for the {TempoOperator} is planned for future releases of the {DTProductName}.
Additional features might include support for TLS authentication, multitenancy, and multiple clusters.
For more information about the {TempoOperator}, see the link:https://tempo-operator.netlify.app[Tempo community documentation].
[id="bug-fixes_distributed-tracing-rn-2-8"]
== Bug fixes
This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
//[id="known-issues_distributed-tracing-rn-2-8"]
//== Known issues
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1,207 @@
:_content-type: ASSEMBLY
[id="distributed-tracing-rn-2-9"]
= Release notes for {DTProductName} 2.9
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-rn-2-9
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[id="component-versions_distributed-tracing-rn-2-9"]
== Component versions in the {DTProductName} 2.9
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.47.0
|{OTELName}
|OpenTelemetry
|0.81.0
|{TempoName}
|Tempo
|2.1.1
|===
[id="jaeger-release-notes_distributed-tracing-rn-2-9"]
== {JaegerName}
[id="new-features-and-enhancements_jaeger-release-notes_distributed-tracing-rn-2-9"]
=== New features and enhancements
* None.
//[id="technology-preview-features_jaeger-release-notes_distributed-tracing-rn-2-9"]
//=== Technology Preview features
//not for 2.9
[id="bug-fixes_jaeger-release-notes_distributed-tracing-rn-2-9"]
=== Bug fixes
* Before this update, the connection was refused due to a missing gRPC port on the `jaeger-query` deployment, which resulted in the `transport: Error while dialing: dial tcp :16685: connect: connection refused` error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. (link:https://issues.redhat.com/browse/TRACING-3322[TRACING-3322])
* Before this update, the wrong port was exposed for `jaeger-production-query`, resulting in a refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. (link:https://issues.redhat.com/browse/TRACING-2968[TRACING-2968])
* Before this update, when deploying {SMProductShortName} on {sno} clusters in disconnected environments, the Jaeger pod frequently went into the `Pending` state. With this update, the issue is fixed. (link:https://issues.redhat.com/browse/TRACING-3312[TRACING-3312])
* Before this update, the Jaeger Operator pod restarted with the default memory value due to the `reason: OOMKilled` error message. With this update, this issue is fixed by removing the resource limits. (link:https://issues.redhat.com/browse/TRACING-3173[TRACING-3173])
[id="known-issues_jaeger-release-notes_distributed-tracing-rn-2-9"]
=== Known issues
* Apache Spark is not supported.
ifndef::openshift-rosa[]
* The streaming deployment through AMQ/Kafka is unsupported on IBM Z and IBM Power Systems.
endif::openshift-rosa[]
[id="tempo-release-notes_distributed-tracing-rn-2-9"]
== {TempoName}
:FeatureName: The {TempoName}
include::snippets/technology-preview.adoc[leveloffset=+1]
[id="new-features-and-enhancements_tempo-release-notes_distributed-tracing-rn-2-9"]
=== New features and enhancements
This release introduces the following enhancements for the {TempoShortName}:
* Support the link:https://operatorframework.io/operator-capabilities/[operator maturity] Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of `TempoStack` instances and the {TempoOperator}.
* Add Ingress and Route configuration for the Gateway.
* Support the `managed` and `unmanaged` states in the `TempoStack` custom resource.
* Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled.
* Expose the Jaeger Query gRPC endpoint on the Query Frontend service.
* Support multitenancy without Gateway authentication and authorization.
//[id="technology-preview-features_tempo-release-notes_distributed-tracing-rn-2-9"]
//=== Technology Preview features
//not for 2.9
[id="bug-fixes_tempo-release-notes_distributed-tracing-rn-2-9"]
=== Bug fixes
* Before this update, the {TempoOperator} was not compatible with disconnected environments. With this update, the {TempoOperator} supports disconnected environments. (link:https://issues.redhat.com/browse/TRACING-3145[TRACING-3145])
* Before this update, the {TempoOperator} with TLS failed to start on {product-title}. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. (link:https://issues.redhat.com/browse/TRACING-3091[TRACING-3091])
* Before this update, the resource limits from the {TempoOperator} caused error messages such as `reason: OOMKilled`. With this update, the resource limits for the {TempoOperator} are removed to avoid such errors. (link:https://issues.redhat.com/browse/TRACING-3204[TRACING-3204])
[id="known-issues_tempo-release-notes_distributed-tracing-rn-2-9"]
=== Known issues
* Currently, the custom TLS CA option is not implemented for connecting to object storage. (link:https://issues.redhat.com/browse/TRACING-3462[TRACING-3462])
* Currently, when used with the {TempoOperator}, the Jaeger UI displays only services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])
* Currently, the {TempoShortName} fails on the IBM Z (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545])
* Currently, the Tempo query frontend service must not use internal mTLS when the Gateway is not deployed. This issue does not affect the Jaeger Query API. (link:https://issues.redhat.com/browse/TRACING-3510[TRACING-3510])
+
.Workaround
+
Disable mTLS as follows:
+
. Open the {TempoOperator} ConfigMap for editing by running the following command:
+
[source,terminal]
----
$ oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator <1>
----
<1> The project where the {TempoOperator} is installed.
. Disable the mTLS in the operator configuration by updating the YAML file:
+
[source,yaml]
----
data:
controller_manager_config.yaml: |
featureGates:
httpEncryption: false
grpcEncryption: false
builtInCertManagement:
enabled: false
----
. Restart the {TempoOperator} pod by running the following command:
+
[source,terminal]
----
$ oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator
----
* The images for running the {TempoOperator} in restricted environments are missing because the {TempoName} CSV lacks references to the operand images. (link:https://issues.redhat.com/browse/TRACING-3523[TRACING-3523])
+
.Workaround
+
Add the {TempoOperator} related images in the mirroring tool to mirror the images to the registry:
+
[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 20
storageConfig:
local:
path: /home/user/images
mirror:
operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
packages:
- name: tempo-product
channels:
- name: stable
additionalImages:
- name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a
- name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23
- name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9
- name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e
----
[id="otel-release-notes_distributed-tracing-rn-2-9"]
== {OTELName}
:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]
[id="new-features-and-enhancements_otel-release-notes_distributed-tracing-rn-2-9"]
=== New features and enhancements
This release introduces the following enhancements for the {OTELShortName}:
* Support OTLP metrics ingestion. The metrics can be forwarded to and stored in the `user-workload-monitoring` stack by using the Prometheus exporter.
* Support the link:https://operatorframework.io/operator-capabilities/[operator maturity] Level IV, Deep Insights, which enables upgrading and monitoring of `OpenTelemetry Collector` instances and the {OTELOperator}.
* Report traces and metrics from remote clusters by using OTLP or HTTP and HTTPS.
* Collect {product-title} resource attributes by using the `resourcedetection` processor.
* Support the `managed` and `unmanaged` states in the `OpenTelemetryCollector` custom resource.
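As an illustration of the metrics ingestion enhancement, the following `OpenTelemetryCollector` sketch receives OTLP metrics and exposes them through the Prometheus exporter. The instance name and exporter port are example values, not prescribed ones.

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel  # example name
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889  # example scrape port
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheus]
----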
//[id="technology-preview-features_otel-release-notes_distributed-tracing-rn-2-9"]
//=== Technology Preview features
//not for 2.9
[id="bug-fixes_otel-release-notes_distributed-tracing-rn-2-9"]
=== Bug fixes
None.
[id="known-issues_otel-release-notes_distributed-tracing-rn-2-9"]
=== Known issues
* Currently, you must manually set link:https://operatorframework.io/operator-capabilities/[operator maturity] to Level IV, Deep Insights. (link:https://issues.redhat.com/browse/TRACING-3431[TRACING-3431])
include::modules/support.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

View File

@@ -0,0 +1 @@
../../images/

View File

@@ -0,0 +1 @@
../../modules/

View File

@@ -0,0 +1 @@
../../snippets/

View File

@@ -0,0 +1 @@
../../_attributes/

View File

@@ -0,0 +1,31 @@
:_content-type: ASSEMBLY
[id="distr-tracing-tempo-configuring"]
= Configuring and deploying the {TempoShortName}
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-tempo-configuring
toc::[]
The {TempoOperator} uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {TempoShortName} resources. You can install the default configuration or modify the file.
[id="customizing-your-tempo-deployment"]
== Customizing your deployment
ifdef::openshift-enterprise,openshift-dedicated[]
For information about configuring the back-end storage, see xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage] and the appropriate configuration topic for your chosen storage option.
endif::[]
include::modules/distr-tracing-tempo-config-default.adoc[leveloffset=+2]
include::modules/distr-tracing-tempo-config-storage.adoc[leveloffset=+2]
include::modules/distr-tracing-tempo-config-query-frontend.adoc[leveloffset=+2]
[id="setting-up-monitoring-for-tempo"]
== Setting up monitoring for the {TempoShortName}
The {TempoOperator} supports monitoring and alerting of each TempoStack component, such as the distributor and the ingester, and exposes upgrade and operational metrics about the Operator itself.
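A `TempoStack` instance can opt in to this monitoring with fields similar to the following sketch. The field names under `observability.metrics` follow the upstream Tempo Operator API and might differ between versions; the instance name is a placeholder.

[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample  # placeholder name
spec:
  observability:
    metrics:
      createServiceMonitors: true   # scrape component metrics
      createPrometheusRules: true   # install alerting rules
----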
include::modules/distr-tracing-tempo-configuring-tempostack-metrics-and-alerts.adoc[leveloffset=+2]
include::modules/distr-tracing-tempo-configuring-tempooperator-metrics-and-alerts.adoc[leveloffset=+2]

View File

@@ -0,0 +1,32 @@
:_content-type: ASSEMBLY
[id="dist-tracing-tempo-installing"]
= Installing the {TempoShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-tempo-installing
toc::[]
:FeatureName: The {TempoOperator}
include::snippets/technology-preview.adoc[leveloffset=+1]
Installing the {TempoShortName} involves the following steps:
. Setting up supported object storage.
. Installing the {TempoOperator}.
. Creating a secret for the object storage credentials.
. Creating a namespace for a TempoStack instance.
. Creating a `TempoStack` custom resource to deploy at least one TempoStack instance.
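The last two steps can be sketched as follows. The secret keys follow the upstream Tempo Operator conventions for S3-compatible storage; all names and the endpoint are placeholders for your environment.

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: tempostack-dev-minio  # placeholder name
stringData:
  endpoint: http://minio.minio.svc:9000  # placeholder S3-compatible endpoint
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
---
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest  # placeholder name
spec:
  storage:
    secret:
      name: tempostack-dev-minio  # must match the secret above
      type: s3
  storageSize: 1Gi
----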
include::modules/distr-tracing-tempo-install-web-console.adoc[leveloffset=+1]
include::modules/distr-tracing-tempo-install-cli.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_dist-tracing-tempo-installing"]
== Additional resources
* xref:../../post_installation_configuration/preparing-for-users.adoc#creating-cluster-admin_post-install-preparing-for-users[Creating a cluster admin]
* link:https://operatorhub.io/[OperatorHub.io]
* xref:../../web_console/web-console.adoc#web-console[Accessing the web console]
* xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-from-operatorhub-using-web-console_olm-adding-operators-to-a-cluster[Installing from OperatorHub using the web console]
* xref:../../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Creating applications from installed Operators]
* xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]

View File

@@ -0,0 +1,24 @@
:_content-type: ASSEMBLY
[id="dist-tracing-tempo-removing"]
= Removing the {TempoName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-tempo-removing
toc::[]
The steps for removing the {TempoName} from an {product-title} cluster are as follows:
. Shut down all {TempoShortName} pods.
. Remove any TempoStack instances.
. Remove the {TempoOperator}.
include::modules/distr-tracing-tempo-remove-web-console.adoc[leveloffset=+1]
include::modules/distr-tracing-tempo-remove-cli.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_dist-tracing-tempo-removing"]
== Additional resources
* xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster]
* xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]

View File

@@ -0,0 +1,16 @@
:_content-type: ASSEMBLY
[id="dist-tracing-tempo-updating"]
= Updating the {TempoShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-tempo-updating
toc::[]
include::modules/distr-tracing-tempo-update-olm.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_dist-tracing-tempo-updating"]
== Additional resources
* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]

View File

@@ -0,0 +1 @@
../../images/

View File

@@ -0,0 +1 @@
../../modules/

View File

@@ -0,0 +1 @@
../../snippets/

View File

@@ -1,21 +0,0 @@
:_content-type: ASSEMBLY
[id="distr-tracing-release-notes"]
= Distributed tracing release notes
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-release-notes
toc::[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
include::modules/support.adoc[leveloffset=+1]
include::modules/distr-tracing-rn-new-features.adoc[leveloffset=+1]
include::modules/distr-tracing-rn-technology-preview.adoc[leveloffset=+1]
include::modules/distr-tracing-rn-known-issues.adoc[leveloffset=+1]
include::modules/distr-tracing-rn-fixed-issues.adoc[leveloffset=+1]

View File

@@ -1,7 +1,6 @@
////
Module included in the following assemblies:
* distr_tracing/distr_tracing_install/distr-tracing-deploying-jaeger.adoc
* distr_tracing/distr_tracing_install/distr-tracing-deploying-otel.adoc
* distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: PROCEDURE
[id="distr-tracing-accessing-jaeger-console_{context}"]
@@ -13,7 +12,7 @@ The installation process creates a route to access the Jaeger console.
If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions.
.Procedure from OpenShift console
.Procedure from the web console
. Log in to the {product-title} web console as a user with cluster-admin rights. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
. Navigate to *Networking* -> *Routes*.
@@ -38,7 +37,7 @@ The *Location* column displays the linked address for each route.
.Procedure from the CLI
. Log in to the {product-title} CLI as a user with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
. Log in to the {product-title} CLI as a user with the `cluster-admin` role by running the following command. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
+
[source,terminal]
----

View File

@@ -25,6 +25,21 @@ This module included in the following assemblies:
** *Jaeger Console* - With the {JaegerName} user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.
* *{TempoName}* - This component is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo project].
** *Gateway* - The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service.
** *Distributor* - The Distributor accepts spans in multiple formats including Jaeger, OpenTelemetry, and Zipkin. It routes spans to Ingesters by hashing the `+traceID+` and using a distributed consistent hash ring.
** *Ingester* - The Ingester batches a trace into blocks, creates Bloom filters and indexes, and then flushes it all to the back end.
** *Query Frontend* - The Query Frontend is responsible for sharding the search space for an incoming query. The search query is then sent to the Queriers. The Query Frontend deployment exposes the Jaeger UI through the Tempo Query sidecar.
** *Querier* - The Querier is responsible for finding the requested trace ID in either the Ingesters or the back-end storage. Depending on parameters, it can query the Ingesters and pull Bloom indexes from the back end to search blocks in object storage.
** *Compactor* - The Compactor streams blocks to and from the back-end storage to reduce the total number of blocks.
* *{OTELName}* - This component is based on the open source link:https://opentelemetry.io/[OpenTelemetry project].
** *OpenTelemetry Collector* - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, and sends them to one or more open-source or commercial back ends. The Collector is the default location to which instrumentation libraries export their telemetry data.

View File

@@ -1,22 +0,0 @@
////
This module included in the following assemblies:
- dist_tracing/dist_tracing_install/dist-tracing-updating.adoc
////
[id="distr-tracing-changing-operator-channel_{context}"]
= Changing the Operator channel for 2.0
{DTProductName} 2.0.0 made the following changes:
* Renamed the Red Hat OpenShift Jaeger Operator to the {JaegerName} Operator.
* Stopped support for individual release channels. Going forward, the {JaegerName} Operator will only support the *stable* Operator channel. Maintenance channels, for example *1.24-stable*, will no longer be supported by future Operators.
As part of the update to version 2.0, you must update your OpenShift Elasticsearch and {JaegerName} Operator subscriptions.
.Prerequisites
* The {product-title} version is 4.6 or later.
* You have updated the OpenShift Elasticsearch Operator.
* You have backed up the Jaeger custom resource file.
* An account with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-default_{context}"]
@@ -8,7 +8,7 @@ This module included in the following assemblies:
The Jaeger custom resource (CR) defines the architecture and settings to be used when creating the {JaegerShortName} resources. You can modify these parameters to customize your {JaegerShortName} implementation to your business needs.
.Jaeger generic YAML example
.Generic YAML example of the Jaeger CR
[source,yaml]
----
apiVersion: jaegertracing.io/v1
@@ -46,7 +46,7 @@ spec:
|Parameter |Description |Values |Default value
|`apiVersion:`
||API version to use when creating the object.
|API version to use when creating the object.
|`jaegertracing.io/v1`
|`jaegertracing.io/v1`

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-ingester_{context}"]

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-jaeger-collector_{context}"]

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-query_{context}"]

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-sampling_{context}"]

View File

@@ -6,18 +6,18 @@ service_mesh/v2x/ossm-reference-jaeger.adoc
[id="distr-tracing-config-security-ossm-cli_{context}"]
= Configuring distributed tracing security for service mesh from the command line
You can modify the Jaeger resource to configure {JaegerShortName} security for use with {SMproductShortName} from the command line using the `oc` utility.
You can modify the Jaeger resource to configure {JaegerShortName} security for use with {SMproductShortName} from the command line by using the {oc-first}.
.Prerequisites
* You have access to the cluster as a user with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
* The {SMProductName} Operator must be installed.
* The `ServiceMeshControlPlane` deployed to the cluster.
* You have access to the OpenShift CLI (oc) that matches your OpenShift Container Platform version.
* You have access to the {oc-first} that matches your {product-title} version.
.Procedure
. Log in to the {product-title} CLI as a user with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
. Log in to the {oc-first} as a user with the `cluster-admin` role by running the following command. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
+
[source,terminal]
----

View File

@@ -4,16 +4,16 @@ service_mesh/v2x/ossm-reference-jaeger.adoc
////
:_content-type: PROCEDURE
[id="distr-tracing-config-security-ossm-web_{context}"]
= Configuring distributed tracing security for service mesh from the OpenShift console
= Configuring distributed tracing security for service mesh from the web console
You can modify the Jaeger resource to configure {JaegerShortName} security for use with {SMproductShortName} in the OpenShift console.
You can modify the Jaeger resource to configure {JaegerShortName} security for use with {SMproductShortName} in the web console.
.Prerequisites
* You have access to the cluster as a user with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
* The {SMProductName} Operator must be installed.
* The `ServiceMeshControlPlane` deployed to the cluster.
* You have access to the OpenShift Container Platform web console.
* You have access to the {product-title} web console.
.Procedure
@@ -29,7 +29,7 @@ You can modify the Jaeger resource to configure {JaegerShortName} security for u
. Click the name of your Jaeger instance.
. On the Jaeger details page, click the `YAML` tab to modify your configuration.
. On the Jaeger details page, click the *YAML* tab to modify your configuration.
. Edit the `Jaeger` custom resource file to add the `htpasswd` configuration as shown in the following example.

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-storage_{context}"]
@@ -618,7 +618,7 @@ spec:
The following example shows a Jaeger CR using an external Elasticsearch cluster with TLS CA certificate mounted from a volume and user/password stored in a secret.
.External Elasticsearch example:
.External Elasticsearch example
[source,yaml]
----
apiVersion: jaegertracing.io/v1

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: PROCEDURE
@@ -30,8 +30,8 @@ In-memory storage is not persistent. If the Jaeger pod shuts down, restarts, or
====
If you are installing as part of Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource, for example `istio-system`.
====
+
.. Navigate to *Home* -> *Projects*.
.. Go to *Home* -> *Projects*.
.. Click *Create Project*.
@@ -63,19 +63,19 @@ Follow this procedure to create an instance of {JaegerShortName} from the comman
* The {JaegerName} Operator has been installed and verified.
* You have reviewed the instructions for how to customize the deployment.
* You have access to the OpenShift CLI (`oc`) that matches your {product-title} version.
* You have access to the {oc-first} that matches your {product-title} version.
* You have access to the cluster as a user with the `cluster-admin` role.
.Procedure
. Log in to the {product-title} CLI as a user with the `cluster-admin` role.
. Log in to the {product-title} CLI as a user with the `cluster-admin` role by running the following command:
+
[source,terminal]
----
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443
----
. Create a new project named `tracing-system`.
. Create a new project named `tracing-system` by running the following command:
+
[source,terminal]
----
@@ -107,7 +107,7 @@ $ oc create -n tracing-system -f jaeger.yaml
$ oc get pods -n tracing-system -w
----
+
After the installation process has completed, you should see output similar to the following example:
After the installation process has completed, the output is similar to the following example:
+
[source,terminal]
----

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: PROCEDURE
@@ -26,7 +26,7 @@ The `production` deployment strategy is intended for production environments tha
====
If you are installing as part of Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource, for example `istio-system`.
====
+
.. Navigate to *Home* -> *Projects*.
.. Click *Create Project*.
@@ -44,7 +44,6 @@ If you are installing as part of Service Mesh, the {DTShortName} resources must
. Under *Jaeger*, click *Create Instance*.
. On the *Create Jaeger* page, replace the default `all-in-one` YAML text with your production YAML configuration, for example:
+
.Example jaeger-production.yaml file with Elasticsearch
[source,yaml]
@@ -70,7 +69,6 @@ spec:
esRollover:
schedule: '*/30 * * * *'
----
+
. Click *Create* to create the {JaegerShortName} instance.
@@ -89,19 +87,19 @@ Follow this procedure to create an instance of {JaegerShortName} from the comman
* The OpenShift Elasticsearch Operator has been installed.
* The {JaegerName} Operator has been installed.
* You have reviewed the instructions for how to customize the deployment.
* You have access to the OpenShift CLI (`oc`) that matches your {product-title} version.
* You have access to the {oc-first} that matches your {product-title} version.
* You have access to the cluster as a user with the `cluster-admin` role.
.Procedure
. Log in to the {product-title} CLI as a user with the `cluster-admin` role.
. Log in to the {oc-first} as a user with the `cluster-admin` role by running the following command:
+
[source,terminal]
----
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443
----
. Create a new project named `tracing-system`.
. Create a new project named `tracing-system` by running the following command:
+
[source,terminal]
----
@@ -124,7 +122,7 @@ $ oc create -n tracing-system -f jaeger-production.yaml
$ oc get pods -n tracing-system -w
----
+
After the installation process has completed, you should see output similar to the following example:
After the installation process has completed, the output is similar to the following example:
+
[source,terminal]
----

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: PROCEDURE
@@ -33,13 +33,11 @@ The streaming deployment strategy is currently unsupported on {ibmzProductName}.
. Log in to the {product-title} web console as a user with the `cluster-admin` role.
. Create a new project, for example `tracing-system`.
+
[NOTE]
====
If you are installing as part of Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource, for example `istio-system`.
====
+
.. Navigate to *Home* -> *Projects*.
@@ -58,7 +56,7 @@ If you are installing as part of Service Mesh, the {DTShortName} resources must
. Under *Jaeger*, click *Create Instance*.
. On the *Create Jaeger* page, replace the default `all-in-one` YAML text with your streaming YAML configuration, for example:
+
.Example jaeger-streaming.yaml file
[source,yaml]
----
@@ -104,19 +102,19 @@ Follow this procedure to create an instance of {JaegerShortName} from the comman
* The AMQ Streams Operator has been installed. If you are using version 1.4.0 or higher, you can use self-provisioning. Otherwise, you must create the Kafka instance.
* The {JaegerName} Operator has been installed.
* You have reviewed the instructions for how to customize the deployment.
* You have access to the OpenShift CLI (`oc`) that matches your {product-title} version.
* You have access to the {oc-first} that matches your {product-title} version.
* You have access to the cluster as a user with the `cluster-admin` role.
.Procedure
. Log in to the {product-title} CLI as a user with the `cluster-admin` role.
. Log in to the {oc-first} as a user with the `cluster-admin` role by running the following command:
+
[source,terminal]
----
$ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443
----
. Create a new project named `tracing-system`.
. Create a new project named `tracing-system` by running the following command:
+
[source,terminal]
----

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: CONCEPT
[id="distr-tracing-deployment-best-practices_{context}"]
@@ -9,7 +9,3 @@ This module included in the following assemblies:
* {DTProductName} instance names must be unique. If you want to have multiple {JaegerName} instances and are using sidecar injected agents, then the {JaegerName} instances should have unique names, and the injection annotation should explicitly specify the {JaegerName} instance name the tracing data should be reported to.
* If you have a multitenant implementation and tenants are separated by namespaces, deploy a {JaegerName} instance to each tenant namespace.
** Agent as a daemonset is not supported for multitenant installations or {product-dedicated}. Agent as a sidecar is the only supported configuration for these use cases.
* If you are installing {DTShortName} as part of {SMProductName}, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource.

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-installing.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-installing.adoc
////
:_content-type: PROCEDURE
@@ -45,7 +45,7 @@ The Elasticsearch installation requires the *openshift-operators-redhat* namespa
====
+
* Accept the default *Automatic* approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select *Manual* updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
. Accept the default *Automatic* approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select *Manual* updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
+
[NOTE]
====

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-installing.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-installing.adoc
////
:_content-type: PROCEDURE
@@ -44,7 +44,6 @@ Do not install Community versions of the Operators. Community Operators are not
====
The *Manual* approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.
====
+
. Click *Install*.

View File

@@ -45,10 +45,9 @@ Do not install Community versions of the Operators. Community Operators are not
====
The *Manual* approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.
====
+
. Click *Install*.
. Navigate to *Operators* -> *Installed Operators*.
. Go to *Operators* -> *Installed Operators*.
. On the *Installed Operators* page, select the `openshift-operators` project. Wait until you see that the {OTELName} Operator shows a status of "Succeeded" before continuing.

View File

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-installing.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-installing.adoc
////
:_content-type: CONCEPT

View File

@@ -0,0 +1,395 @@
////
This module included in the following assemblies:
-distr_tracing_otel/distr-tracing-otel-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-config-otel-collector_{context}"]
= OpenTelemetry Collector configuration options
The OpenTelemetry Collector consists of three components that access telemetry data:
Receivers:: A receiver, which can be push or pull based, is how data gets into the Collector. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources.
Processors:: Optional. Processors run on the data between the time it is received and the time it is exported. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters.
Exporters:: An exporter, which can be push or pull based, is how you send data to one or more back ends or destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.
You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the `spec.config.service` section of the YAML file. As a best practice, only enable the components that you need.
.Example of the OpenTelemetry Collector custom resource file
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: cluster-collector
namespace: tracing-system
spec:
mode: deployment
ports:
- name: promexporter
port: 8889
protocol: TCP
config: |
receivers:
otlp:
protocols:
grpc:
http:
processors:
exporters:
jaeger:
endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
tls:
ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
prometheus:
endpoint: 0.0.0.0:8889
resource_to_telemetry_conversion:
enabled: true # by default resource attributes are dropped
service: <1>
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [jaeger]
metrics:
receivers: [otlp]
processors: []
exporters: [prometheus]
----
<1> If a component is configured but not defined in the `service` section, the component is not enabled.
.Parameters used by the Operator to define the OpenTelemetry Collector
[options="header"]
[cols="l, a, a, a"]
|===
|Parameter |Description |Values |Default
|receivers:
|A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline.
|`otlp`, `jaeger`, `zipkin`
|None
|processors:
|Processors run on the data between the time it is received and the time it is exported. By default, no processors are enabled.
|
|None
|exporters:
|An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings.
|`otlp`, `otlphttp`, `jaeger`, `logging`, `prometheus`
|None
|service:
pipelines:
|Components are enabled by adding them to a pipeline under `service.pipelines`.
|
|
|service:
pipelines:
traces:
receivers:
|You enable receivers for tracing by adding them under `service.pipelines.traces`.
|
|None
|service:
pipelines:
traces:
processors:
|You enable processors for tracing by adding them under `service.pipelines.traces`.
|
|None
|service:
pipelines:
traces:
exporters:
|You enable exporters for tracing by adding them under `service.pipelines.traces`.
|
|None
|service:
pipelines:
metrics:
receivers:
|You enable receivers for metrics by adding them under `service.pipelines.metrics`.
|
|None
|service:
pipelines:
metrics:
processors:
|You enable processors for metrics by adding them under `service.pipelines.metrics`.
|
|None
|service:
pipelines:
metrics:
exporters:
|You enable exporters for metrics by adding them under `service.pipelines.metrics`.
|
|None
|===
[id="otel-collector-components_{context}"]
== OpenTelemetry Collector components
[id="receivers_{context}"]
=== Receivers
[id="otlp-receiver_{context}"]
==== OTLP Receiver
The OTLP receiver ingests data using the OpenTelemetry protocol (OTLP).
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics
.OpenTelemetry Collector custom resource with an enabled OTLP receiver
[source,yaml]
----
config: |
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317 <1>
http:
endpoint: 0.0.0.0:4318 <2>
tls: <3>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
service:
pipelines:
traces:
receivers: [otlp]
metrics:
receivers: [otlp]
----
<1> The OTLP gRPC endpoint. If omitted, the default `+0.0.0.0:4317+` is used.
<2> The OTLP HTTP endpoint. If omitted, the default `+0.0.0.0:4318+` is used.
<3> The TLS server side configuration. Defines paths to TLS certificates. If omitted, TLS is disabled.
[id="jaeger-receiver_{context}"]
==== Jaeger Receiver
The Jaeger receiver ingests data in Jaeger formats.
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces
.OpenTelemetry Collector custom resource with an enabled Jaeger receiver
[source,yaml]
----
config: |
receivers:
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250 <1>
thrift_http:
endpoint: 0.0.0.0:14268 <2>
thrift_compact:
endpoint: 0.0.0.0:6831 <3>
thrift_binary:
endpoint: 0.0.0.0:6832 <4>
tls: <5>
service:
pipelines:
traces:
receivers: [jaeger]
----
<1> The Jaeger gRPC endpoint. If omitted, the default `+0.0.0.0:14250+` is used.
<2> The Jaeger Thrift HTTP endpoint. If omitted, the default `+0.0.0.0:14268+` is used.
<3> The Jaeger Thrift Compact endpoint. If omitted, the default `+0.0.0.0:6831+` is used.
<4> The Jaeger Thrift Binary endpoint. If omitted, the default `+0.0.0.0:6832+` is used.
<5> The TLS server side configuration. See the OTLP receiver configuration section for more details.
[id="zipkin-receiver_{context}"]
==== Zipkin Receiver
The Zipkin receiver ingests data in the Zipkin v1 and v2 formats.
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces
.OpenTelemetry Collector custom resource with enabled Zipkin receiver
[source,yaml]
----
config: |
receivers:
zipkin:
endpoint: 0.0.0.0:9411 <1>
tls: <2>
service:
pipelines:
traces:
receivers: [zipkin]
----
<1> The Zipkin HTTP endpoint. If omitted, the default `+0.0.0.0:9411+` is used.
<2> The TLS server side configuration. See the OTLP receiver configuration section for more details.
[id="processors_{context}"]
=== Processors
[id="resource-detection-processor_{context}"]
==== Resource Detection processor
The Resource Detection processor is designed to identify host resource details in alignment with OpenTelemetry's resource semantic conventions. Using the detected information, it can add or replace the resource values in telemetry data.
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics
.{product-title} permissions required for the Resource Detection processor
[source,yaml]
----
kind: ClusterRole
metadata:
name: otel-collector
rules:
- apiGroups: ["config.openshift.io"]
resources: ["infrastructures", "infrastructures/status"]
verbs: ["get", "watch", "list"]
----
.OpenTelemetry Collector using the Resource Detection processor
[source,yaml]
----
config: |
processors:
resourcedetection:
detectors: [openshift]
override: true
service:
pipelines:
traces:
processors: [resourcedetection]
metrics:
processors: [resourcedetection]
----
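
The `batch` and `memory_limiter` processors that appear in other configuration examples in this document can be enabled in the same way. The following is a minimal sketch; the timeout, interval, and percentage values are illustrative rather than recommendations:

[source,yaml]
----
config: |
  processors:
    batch:
      timeout: 1s
    memory_limiter:
      check_interval: 1s
      limit_percentage: 50
      spike_limit_percentage: 30
  service:
    pipelines:
      traces:
        processors: [memory_limiter, batch]
----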
[id="exporters_{context}"]
=== Exporters
[id="otlp-exporter_{context}"]
==== OTLP exporter
The OTLP gRPC exporter exports data using the OpenTelemetry protocol (OTLP).
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics
.OpenTelemetry Collector custom resource with an enabled OTLP exporter
[source,yaml]
----
config: |
exporters:
otlp:
endpoint: tempo-ingester:4317 <1>
tls: <2>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
headers: <3>
X-Scope-OrgID: "dev"
service:
pipelines:
traces:
exporters: [otlp]
metrics:
exporters: [otlp]
----
<1> The OTLP gRPC endpoint. If the `+https://+` scheme is used, then client transport security is enabled and overrides the `insecure` setting in the `tls`.
<2> The client side TLS configuration. Defines paths to TLS certificates.
<3> Headers are sent for every RPC performed during an established connection.
[id="otlp-http-exporter_{context}"]
==== OTLP HTTP exporter
The OTLP HTTP exporter exports data using the OpenTelemetry protocol (OTLP).
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics
.OpenTelemetry Collector custom resource with an enabled OTLP exporter
[source,yaml]
----
config: |
exporters:
otlphttp:
endpoint: http://tempo-ingester:4318 <1>
tls: <2>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
headers: <3>
X-Scope-OrgID: "dev"
service:
pipelines:
traces:
exporters: [otlphttp]
metrics:
exporters: [otlphttp]
----
<1> The OTLP HTTP endpoint. If the `+https://+` scheme is used, then client transport security is enabled and overrides the `insecure` setting in the `tls`.
<2> The client side TLS configuration. Defines paths to TLS certificates.
<3> Headers are sent in every HTTP request.
[id="jaeger-exporter_{context}"]
==== Jaeger exporter
The Jaeger exporter exports data using the Jaeger proto format through gRPC.
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces
.OpenTelemetry Collector custom resource with enabled Jaeger exporter
[source,yaml]
----
config: |
exporters:
jaeger:
endpoint: jaeger-all-in-one:14250 <1>
tls: <2>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
service:
pipelines:
traces:
exporters: [jaeger]
----
<1> The Jaeger gRPC endpoint.
<2> The client side TLS configuration. Defines paths to TLS certificates.
[id="logging-exporter_{context}"]
==== Logging exporter
The Logging exporter prints data to the standard output.
* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics
.OpenTelemetry Collector custom resource with an enabled Logging exporter
[source,yaml]
----
config: |
exporters:
logging:
verbosity: detailed <1>
service:
pipelines:
traces:
exporters: [logging]
metrics:
exporters: [logging]
----
<1> Verbosity of the logging export: `detailed`, `normal`, or `basic`. When set to `detailed`, pipeline data is verbosely logged. Defaults to `normal`.
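
The `prometheus` exporter listed in the parameter table appears only in the first example in this section. As a standalone sketch, assuming the same endpoint and exposed port as that example:

[source,yaml]
----
config: |
  exporters:
    prometheus:
      endpoint: 0.0.0.0:8889
      resource_to_telemetry_conversion:
        enabled: true # by default resource attributes are dropped
  service:
    pipelines:
      metrics:
        exporters: [prometheus]
----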

View File

@@ -0,0 +1,39 @@
////
This module is included in the following assemblies:
- distr_tracing_install/distributed-tracing-deploying-otel.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-send-metrics-monitoring-stack_{context}"]
= Sending metrics to the monitoring stack
You can configure the monitoring stack to scrape OpenTelemetry Collector metrics endpoints and to remove duplicated labels that the monitoring stack has added during scraping.
.Sample `PodMonitor` custom resource (CR) that configures the monitoring stack to scrape Collector metrics
[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: otel-collector
spec:
selector:
matchLabels:
app.kubernetes.io/name: otel-collector
podMetricsEndpoints:
- port: metrics <1>
- port: promexporter <2>
relabelings:
- action: labeldrop
regex: pod
- action: labeldrop
regex: container
- action: labeldrop
regex: endpoint
metricRelabelings:
- action: labeldrop
regex: instance
- action: labeldrop
regex: job
----
<1> The name of the internal metrics port for the OpenTelemetry Collector. This port name is always `metrics`.
<2> The name of the Prometheus exporter port for the OpenTelemetry Collector. This port name is defined in the `.spec.ports` section of the `OpenTelemetryCollector` CR.
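
For reference, the `promexporter` port name in callout 2 must match a `.spec.ports` entry in the `OpenTelemetryCollector` CR, as in the following fragment; the instance name and port number are examples:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-collector
spec:
  ports:
  - name: promexporter # matched by the PodMonitor podMetricsEndpoints entry
    port: 8889
    protocol: TCP
----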

View File

@@ -0,0 +1,149 @@
// Module included in the following assemblies:
//
// * /distr_tracing/distr_tracing_otel/distr-tracing-otel-using.adoc
:_content-type: PROCEDURE
[id="distr-tracing-otel-forwarding_{context}"]
= Forwarding traces to a TempoStack by using the OpenTelemetry Collector
To forward traces to a TempoStack instance, deploy and configure the OpenTelemetry Collector. This procedure deploys the OpenTelemetry Collector in the deployment mode with the specified receivers, processors, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in _Additional resources_.
.Prerequisites
* The {OTELOperator} is installed.
* The {TempoOperator} is installed.
* A TempoStack is deployed on the cluster.
.Procedure
. Create a service account for the OpenTelemetry Collector.
+
.Example ServiceAccount
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-collector-deployment
----
. Create a cluster role for the service account.
+
.Example ClusterRole
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-collector
rules:
<1>
<2>
- apiGroups: ["", "config.openshift.io"]
resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
verbs: ["get", "watch", "list"]
----
<1> The `k8sattributesprocessor` requires permissions for pods and namespaces resources.
<2> The `resourcedetectionprocessor` requires permissions for infrastructures and status.
. Bind the cluster role to the service account.
+
.Example ClusterRoleBinding
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otel-collector
subjects:
- kind: ServiceAccount
name: otel-collector-deployment
namespace: otel-collector-example
roleRef:
kind: ClusterRole
name: otel-collector
apiGroup: rbac.authorization.k8s.io
----
. Create the YAML file to define the `OpenTelemetryCollector` custom resource (CR).
+
.Example OpenTelemetryCollector
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
spec:
mode: deployment
serviceAccount: otel-collector-deployment
config: |
receivers:
jaeger:
protocols:
grpc:
thrift_binary:
thrift_compact:
thrift_http:
opencensus:
otlp:
protocols:
grpc:
http:
zipkin:
processors:
batch:
k8sattributes:
memory_limiter:
check_interval: 1s
limit_percentage: 50
spike_limit_percentage: 30
resourcedetection:
detectors: [openshift]
exporters:
otlp:
endpoint: "tempo-simplest-distributor:4317" <1>
tls:
insecure: true
service:
pipelines:
traces:
receivers: [jaeger, opencensus, otlp, zipkin] <2>
processors: [memory_limiter, k8sattributes, resourcedetection, batch]
exporters: [otlp]
----
<1> The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, `"tempo-simplest-distributor:4317"` in this example, which is already created.
<2> The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.
[TIP]
====
You can deploy `tracegen` as a test:
[source,yaml]
----
apiVersion: batch/v1
kind: Job
metadata:
name: tracegen
spec:
template:
spec:
containers:
- name: tracegen
image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/tracegen:latest
command:
- "./tracegen"
args:
- -otlp-endpoint=otel-collector:4317
- -otlp-insecure
- -duration=30s
- -workers=1
restartPolicy: Never
backoffLimit: 4
----
====
[role="_additional-resources"]
.Additional resources
* link:https://opentelemetry.io/docs/collector/[OpenTelemetry Collector documentation]
* link:https://github.com/os-observability/redhat-rhosdt-samples[Deployment examples on GitHub]

View File

@@ -0,0 +1,116 @@
////
This module included in the following assemblies:
- distr_tracing_otel/distr-tracing-otel-installing.adoc
////
:_content-type: PROCEDURE
[id="distr-tracing-install-otel-operator_{context}"]
= Installing the {OTELShortName} from the web console
You can install the {OTELShortName} from the *Administrator* view of the web console.
.Prerequisites
* You are logged in to the web console as a cluster administrator with the `cluster-admin` role.
* For {product-dedicated}, you must be logged in using an account with the `dedicated-admin` role.
* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====
.Procedure
. Install the {OTELOperator}:
.. Go to *Operators* -> *OperatorHub* and search for `{OTELOperator}`.
.. Select the *{OTELOperator}* that is *provided by Red Hat* -> *Install* -> *Install* -> *View Operator*.
+
[IMPORTANT]
====
This installs the Operator with the default presets:
* *Update channel* -> *stable*
* *Installation mode* -> *All namespaces on the cluster*
* *Installed Namespace* -> *openshift-operators*
* *Update approval* -> *Automatic*
====
.. In the *Details* tab of the installed Operator page, under *ClusterServiceVersion details*, verify that the installation *Status* is *Succeeded*.
. Create a project of your choice for the *OpenTelemetry Collector* instance that you will create in the next step by going to *Home* -> *Projects* -> *Create Project*.
. Create an *OpenTelemetry Collector* instance.
.. Go to *Operators* -> *Installed Operators*.
.. Select *OpenTelemetry Collector* -> *Create OpenTelemetryCollector* -> *YAML view*.
.. In the *YAML view*, customize the `OpenTelemetryCollector` custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the logging exporter.
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: <project_of_opentelemetry_collector_instance>
spec:
mode: deployment
config: |
receivers:
otlp:
protocols:
grpc:
http:
jaeger:
protocols:
grpc:
thrift_binary:
thrift_compact:
thrift_http:
zipkin:
processors:
batch:
memory_limiter:
check_interval: 1s
limit_percentage: 50
spike_limit_percentage: 30
exporters:
logging:
service:
pipelines:
traces:
receivers: [otlp,jaeger,zipkin]
processors: [memory_limiter,batch]
exporters: [logging]
----
.. Select *Create*.
.Verification
. Verify that the `status.phase` of the OpenTelemetry Collector pod is `Running` and the `conditions` are `type: Ready` by running the following command:
+
[source,terminal]
----
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
----
. Get the OpenTelemetry Collector service by running the following command:
+
[source,terminal]
----
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
----

View File

@@ -0,0 +1,110 @@
// Module included in the following assemblies:
//
// * distr-tracing-otel-migrating.adoc
:_content-type: PROCEDURE
[id="distr-tracing-otel-migrating-from-jaeger-with-sidecars_{context}"]
= Migrating from the {JaegerShortName} to the {OTELShortName} with sidecars
The {OTELShortName} Operator supports sidecar injection into deployment workloads, so you can migrate from a {JaegerShortName} sidecar to a {OTELShortName} sidecar.
.Prerequisites
* The {JaegerName} is used on the cluster.
* The {OTELName} is installed.
.Procedure
. Configure the OpenTelemetry Collector as a sidecar.
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: <otel-collector-namespace>
spec:
mode: sidecar
config: |
receivers:
jaeger:
protocols:
grpc:
thrift_binary:
thrift_compact:
thrift_http:
processors:
batch:
memory_limiter:
check_interval: 1s
limit_percentage: 50
spike_limit_percentage: 30
resourcedetection:
detectors: [openshift]
timeout: 2s
exporters:
otlp:
endpoint: "tempo-<example>-gateway:8090" <1>
tls:
insecure: true
service:
pipelines:
traces:
receivers: [jaeger]
processors: [memory_limiter, resourcedetection, batch]
exporters: [otlp]
----
<1> This endpoint points to the Gateway of a TempoStack instance deployed by using the `<example>` {TempoOperator}.
. Create a service account for running your application.
+
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-collector-sidecar
----
. Create a cluster role for the permissions needed by some processors.
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-sidecar
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["infrastructures", "infrastructures/status"] <1>
  verbs: ["get", "watch", "list"]
----
<1> The `resourcedetectionprocessor` requires permissions for infrastructures and infrastructures/status.
. Create a `ClusterRoleBinding` to set the permissions for the service account.
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-sidecar
subjects:
- kind: ServiceAccount
  name: otel-collector-sidecar
  namespace: otel-collector-example
roleRef:
  kind: ClusterRole
  name: otel-collector-sidecar
  apiGroup: rbac.authorization.k8s.io
----
. Deploy the OpenTelemetry Collector as a sidecar.
. Remove the injected Jaeger Agent from your application by removing the `"sidecar.jaegertracing.io/inject": "true"` annotation from your `Deployment` object.
. Enable automatic injection of the OpenTelemetry sidecar by adding the `sidecar.opentelemetry.io/inject: "true"` annotation to the `.spec.template.metadata.annotations` field of your `Deployment` object.
. Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces.
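The last three steps come together as a single change to the application workload. The following is a minimal sketch only; the `myapp` name and image are hypothetical placeholders:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.opentelemetry.io/inject: "true" # replaces the removed "sidecar.jaegertracing.io/inject": "true" annotation
    spec:
      serviceAccountName: otel-collector-sidecar # the service account created earlier
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest
----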


@@ -0,0 +1,130 @@
// Module included in the following assemblies:
//
// * distr-tracing-otel-migrating.adoc
:_content-type: PROCEDURE
[id="distr-tracing-otel-migrating-from-jaeger-without-sidecars_{context}"]
= Migrating from the {JaegerShortName} to the {OTELShortName} without sidecars
You can migrate from the {JaegerShortName} to the {OTELShortName} without sidecar deployment.
.Prerequisites
* The {JaegerName} is used on the cluster.
* The {OTELName} is installed.
.Procedure
. Configure the OpenTelemetry Collector deployment.
. Create the project where the OpenTelemetry Collector will be deployed.
+
[source,yaml]
----
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
----
. Create a service account for running the OpenTelemetry Collector instance.
+
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: observability
----
. Create a cluster role for setting the required permissions for the processors.
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] <1> <2>
  verbs: ["get", "watch", "list"]
----
<1> Permissions for the `pods` and `namespaces` resources are required for the `k8sattributesprocessor`.
<2> Permissions for `infrastructures` and `infrastructures/status` are required for `resourcedetectionprocessor`.
. Create a `ClusterRoleBinding` object to set the permissions for the service account.
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
----
. Create the OpenTelemetry Collector instance.
+
NOTE: This collector exports traces to a TempoStack instance. You must create the TempoStack instance by using the {TempoOperator} and specify the correct endpoint here.
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
    processors:
      batch:
      k8sattributes:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-example-gateway:8090"
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]
----
. Point your tracing endpoint to the OpenTelemetry Collector.
. If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint.
+
.Example of exporting traces by using the `jaegerexporter` with Golang
[source,golang]
----
exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) <1>
----
<1> The URL points to the OpenTelemetry Collector API endpoint.
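+
If you instrument with the OpenTelemetry SDK, you can switch from the Jaeger exporter to the OTLP exporter instead. The following sketch assumes the `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc` package and a hypothetical in-cluster Collector endpoint; verify the import path against the SDK version you use.
+
.Example of exporting traces by using the OTLP exporter with Golang
[source,golang]
----
// Create an OTLP gRPC exporter that sends spans to the OpenTelemetry Collector.
exp, err := otlptracegrpc.New(ctx,
    otlptracegrpc.WithEndpoint("otel-collector.observability.svc.cluster.local:4317"), // hypothetical Collector service endpoint
    otlptracegrpc.WithInsecure(), // plain gRPC without TLS, matching the insecure exporter settings above
)
----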


@@ -0,0 +1,52 @@
//Module included in the following assemblies:
//
//* distr_tracing_install/dist-tracing-otel-removing.adoc
:_content-type: PROCEDURE
[id="distr-tracing-removing-otel-instance-cli_{context}"]
= Removing a {OTELShortName} instance by using the CLI
You can remove a {OTELShortName} instance on the command line.
.Prerequisites
* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====
.Procedure
. Get the name of the {OTELShortName} instance by running the following command:
+
[source,terminal]
----
$ oc get deployments -n <project_of_opentelemetry_instance>
----
. Remove the {OTELShortName} instance by running the following command:
+
[source,terminal]
----
$ oc delete opentelemetrycollectors <opentelemetry_instance_name> -n <project_of_opentelemetry_instance>
----
. Optional: Remove the {OTELOperator}.
.Verification
* To verify successful removal of the {OTELShortName} instance, run `oc get deployments` again:
+
[source,terminal]
----
$ oc get deployments -n <project_of_opentelemetry_instance>
----


@@ -0,0 +1,23 @@
//Module included in the following assemblies:
//
//* distr_tracing_install/dist-tracing-otel-removing.adoc
:_content-type: PROCEDURE
[id="distr-tracing-removing-otel-instance_{context}"]
= Removing a {OTELShortName} instance by using the web console
You can remove a {OTELShortName} instance in the *Administrator* view of the web console.
.Prerequisites
* You are logged in to the web console as a cluster administrator with the `cluster-admin` role.
* For {product-dedicated}, you must be logged in using an account with the `dedicated-admin` role.
.Procedure
. Go to *Operators* -> *Installed Operators* -> *{OTELOperator}* -> *OpenTelemetryInstrumentation* or *OpenTelemetryCollector*.
. To remove the relevant instance, select {kebab} -> *Delete*, and then confirm the deletion by clicking *Delete* in the dialog.
. Optional: Remove the {OTELOperator}.


@@ -0,0 +1,119 @@
// Module included in the following assemblies:
//
// * /distr_tracing/distr_tracing_otel/distr-tracing-otel-using.adoc
:_content-type: PROCEDURE
[id="distr-tracing-otel-send-traces-and-metrics-to-otel-collector-with-sidecar_{context}"]
= Sending traces and metrics to the OpenTelemetry Collector with sidecar injection
You can set up sending telemetry data to an OpenTelemetryCollector instance with sidecar injection.
The {OTELOperator} allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector.
.Prerequisites
* The {TempoName} is installed and a TempoStack instance is deployed.
* You have access to the cluster through the web console or the {oc-first}:
** You are logged in to the web console as a cluster administrator with the `cluster-admin` role.
** An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
** For {product-dedicated}, you must have an account with the `dedicated-admin` role.
.Procedure
. Create a project for the OpenTelemetry Collector.
+
[source,yaml]
----
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
----
. Create a service account.
+
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-sidecar
  namespace: observability
----
. Grant permissions to the service account for the `k8sattributes` and `resourcedetection` processors.
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-sidecar
  namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
----
. Deploy the OpenTelemetry Collector as a sidecar.
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  serviceAccount: otel-collector-sidecar
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
        timeout: 2s
    exporters:
      otlp:
        endpoint: "tempo-<example>-gateway:8090" <1>
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, resourcedetection, batch]
          exporters: [otlp]
----
<1> This endpoint points to the Gateway of the TempoStack instance named `<example>`, which is deployed by using the {TempoOperator}.
. Create your deployment using the `otel-collector-sidecar` service account.
. Add the `sidecar.opentelemetry.io/inject: "true"` annotation to your `Deployment` object. This injects all the needed environment variables to send data from your workloads to the `OpenTelemetryCollector` instance.
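The two preceding steps can be sketched as follows. This is a minimal sketch only; the `myapp` name and image are hypothetical, and the deployment must run in the same namespace as the `otel-collector-sidecar` service account:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: observability
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.opentelemetry.io/inject: "true" # triggers automatic sidecar injection
    spec:
      serviceAccountName: otel-collector-sidecar # the service account created earlier
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest
----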


@@ -0,0 +1,155 @@
// Module included in the following assemblies:
//
// * /distr_tracing/distr_tracing_otel/distr-tracing-otel-using.adoc
:_content-type: PROCEDURE
[id="distr-tracing-otel-send-traces-and-metrics-to-otel-collector-without-sidecar_{context}"]
= Sending traces and metrics to the OpenTelemetry Collector without sidecar injection
You can set up sending telemetry data to an OpenTelemetryCollector instance without sidecar injection, which involves manually setting several environment variables.
.Prerequisites
* The {TempoName} is installed and a TempoStack instance is deployed.
* You have access to the cluster through the web console or the {oc-first}:
** You are logged in to the web console as a cluster administrator with the `cluster-admin` role.
** An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
** For {product-dedicated}, you must have an account with the `dedicated-admin` role.
.Procedure
. Create a project for the OpenTelemetry Collector.
+
[source,yaml]
----
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: observability
----
. Create a service account.
+
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
  namespace: observability
----
. Grant permissions to the service account for the `k8sattributes` and `resourcedetection` processors.
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: observability
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
----
. Deploy the OpenTelemetryCollector instance.
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      opencensus:
      otlp:
        protocols:
          grpc:
          http:
      zipkin:
    processors:
      batch:
      k8sattributes:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-<example>-distributor:4317" <1>
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger, opencensus, otlp, zipkin]
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]
----
<1> This endpoint points to the distributor of the TempoStack instance named `<example>`, which is deployed by using the {TempoOperator}. The distributor listens for OTLP gRPC traffic on port 4317.
. Set the following environment variables in the container with your instrumented application:
+
[options="header"]
[cols="l, a, a"]
|===
|Name |Description |Default value
|OTEL_SERVICE_NAME
|Sets the value of the `service.name` resource attribute.
|`""`
|OTEL_EXPORTER_OTLP_ENDPOINT
|Base endpoint URL for any signal type with an optionally specified port number.
|`\https://localhost:4317`
|OTEL_EXPORTER_OTLP_CERTIFICATE
|Path to the certificate file for the TLS credentials of the gRPC client.
|No default
|OTEL_TRACES_SAMPLER
|Sampler to be used for traces.
|`parentbased_always_on`
|OTEL_EXPORTER_OTLP_PROTOCOL
|Transport protocol for the OTLP exporter.
|`grpc`
|OTEL_EXPORTER_OTLP_TIMEOUT
|Maximum time the OTLP exporter will wait for each batch export.
|`10s`
|OTEL_EXPORTER_OTLP_INSECURE
|Disables client transport security for gRPC requests. An endpoint with the `https` scheme overrides this setting.
|`false`
|===
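For example, the environment variables can be set directly on the container of your instrumented application. The following is a minimal sketch only; the `myapp` container name and the service DNS name are hypothetical and depend on your deployment:

[source,yaml]
----
    spec:
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest
        env:
        - name: OTEL_SERVICE_NAME
          value: "myapp" # sets the service.name resource attribute
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://otel-collector.observability.svc.cluster.local:4317" # hypothetical OpenTelemetry Collector OTLP gRPC endpoint
        - name: OTEL_EXPORTER_OTLP_INSECURE
          value: "true" # disable TLS for this in-cluster connection
----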


@@ -1,19 +1,27 @@
////
This module included in the following assemblies:
-service_mesh/v2x/ossm-architecture.adoc
- distributed-tracing-release-notes.adoc
-distr_tracing_arch/distr-tracing-architecture.adoc
-serverless/serverless-tracing.adoc
////
// Module included in the following assemblies:
//
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-0.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-1.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-2.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-3.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-4.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-5.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-6.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-7.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-8.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-9.adoc
// * distr_tracing_arch/distr-tracing-architecture.adoc
// * service_mesh/v2x/ossm-architecture.adoc
// * serverless/serverless-tracing.adoc
:_content-type: CONCEPT
[id="distr-tracing-product-overview_{context}"]
= Distributed tracing overview
As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture.
You can use {DTShortName} for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.
You can use the {DTProductName} for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.
With {DTShortName} you can perform the following functions:
With the {DTShortName}, you can perform the following functions:
* Monitor distributed transactions
@@ -21,8 +29,10 @@ With {DTShortName} you can perform the following functions:
* Perform root cause analysis
{DTProductName} consists of two main components:
The {DTShortName} consists of three components:
* *{JaegerName}* - This component is based on the open source link:https://www.jaegertracing.io/[Jaeger project].
* *{JaegerName}*, which is based on the open source link:https://www.jaegertracing.io/[Jaeger project].
* *{OTELName}* - This component is based on the open source link:https://opentelemetry.io/[OpenTelemetry project].
* *{TempoName}*, which is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo project].
* *{OTELName}*, which is based on the open source link:https://opentelemetry.io/[OpenTelemetry project].


@@ -4,16 +4,36 @@ This module included in the following assemblies:
////
[id="distr-tracing-removing-instance-cli_{context}"]
= Removing a {JaegerName} instance from the CLI
= Removing a {JaegerShortName} instance by using the CLI
. Log in to the {product-title} CLI.
You can remove a {JaegerShortName} instance on the command line.
.Prerequisites
* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====
.Procedure
. Log in with the {oc-first} by running the following command:
+
[source,terminal]
----
$ oc login --username=<NAMEOFUSER>
----
+
. To display the {JaegerShortName} instances run the command:
. To display the {JaegerShortName} instances, run the following command:
+
[source,terminal]
----
@@ -34,7 +54,7 @@ The names of Operators have the suffix `-operator`. The following example shows
$ oc get deployments -n openshift-operators
----
+
You should see output similar to the following:
You will see output similar to the following:
+
[source,terminal]
----
@@ -77,7 +97,7 @@ For example:
$ oc get deployments -n openshift-operators
----
+
You should see generated output that is similar to the following example:
You will see generated output that is similar to the following example:
+
[source,terminal]
----


@@ -5,13 +5,19 @@ This module included in the following assemblies:
:_content-type: PROCEDURE
[id="distr-tracing-removing-instance_{context}"]
= Removing a {JaegerName} instance using the web console
= Removing a {JaegerShortName} instance by using the web console
[NOTE]
You can remove a {JaegerShortName} instance in the *Administrator* view of the web console.
[WARNING]
====
When deleting an instance that uses the in-memory storage, all data is permanently lost. Data stored in a persistent storage such as Elasticsearch is not be deleted when a {JaegerName} instance is removed.
When deleting an instance that uses in-memory storage, all data is irretrievably lost. Data stored in persistent storage such as Elasticsearch is not deleted when a {JaegerName} instance is removed.
====
.Prerequisites
* You are logged in to the web console as a cluster administrator with the `cluster-admin` role.
.Procedure
. Log in to the {product-title} web console.


@@ -1,57 +0,0 @@
////
Module included in the following assemblies:
* distributed-tracing-release-notes.adoc
* service_mesh/v2x/servicemesh-release-notes.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-rn-fixed-issues_{context}"]
= {DTProductName} fixed issues
////
Provide the following info for each issue if possible:
Consequence - What user action or situation would make this problem appear (If you have the foo option enabled and did x)? What did the customer experience as a result of the issue? What was the symptom?
Cause - Why did this happen?
Fix - What did we change to fix the problem?
Result - How has the behavior changed as a result? Try to avoid “It is fixed” or “The issue is resolved” or “The error no longer presents”.
////
* link:https://issues.redhat.com/browse/OSSM-1910[OSSM-1910]
Because of an issue introduced in version 2.6, TLS connections could not be established with {product-title} {SMProductShortName}.
This update resolves the issue by changing the service port names to match conventions used by {product-title} {SMProductShortName} and Istio.
* link:https://issues.redhat.com/browse/OBSDA-208[OBSDA-208]
Before this update, the default 200m CPU and 256Mi memory resource limits could cause {OTELShortName} to restart continuously on large clusters.
This update resolves the issue by removing these resource limits.
* link:https://issues.redhat.com/browse/OBSDA-222[OBSDA-222]
Before this update, spans could be dropped in the {product-title} {JaegerShortName}.
To help prevent this issue from occurring, this release updates version dependencies.
* link:https://issues.redhat.com/browse/TRACING-2337[TRACING-2337]
Jaeger is logging a repetitive warning message in the Jaeger logs similar to the following:
+
[source,terminal]
----
{"level":"warn","ts":1642438880.918793,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams received bogus greeting from client: \\\"\\\\x16\\\\x03\\\\x01\\\\x02\\\\x00\\\\x01\\\\x00\\\\x01\\\\xfc\\\\x03\\\\x03vw\\\\x1a\\\\xc9T\\\\xe7\\\\xdaCj\\\\xb7\\\\x8dK\\\\xa6\\\"\"","system":"grpc","grpc_log":true}
----
+
This issue was resolved by exposing only the HTTP(S) port of the query service, and not the gRPC port.
* link:https://issues.redhat.com/browse/TRACING-2009[TRACING-2009] The Jaeger Operator has been updated to include support for the Strimzi Kafka Operator 0.23.0.
* link:https://issues.redhat.com/browse/TRACING-1907[TRACING-1907] The Jaeger agent sidecar injection was failing due to missing config maps in the application namespace. The config maps were getting automatically deleted due to an incorrect `OwnerReference` field setting and as a result, the application pods were not moving past the "ContainerCreating" stage. The incorrect settings have been removed.
* link:https://issues.redhat.com/browse/TRACING-1725[TRACING-1725] Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using same name but within different namespaces. See also link:https://bugzilla.redhat.com/show_bug.cgi?id=1918920[BZ-1918920].
* link:https://issues.jboss.org/browse/TRACING-1631[TRACING-1631] Multiple Jaeger production instances, using same name but within different namespaces, causing Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters.
* link:https://issues.redhat.com/browse/TRACING-1300[TRACING-1300] Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector.
* link:https://issues.redhat.com/browse/TRACING-1208[TRACING-1208] Authentication "500 Internal Error" when accessing Jaeger UI. When trying to authenticate to the UI using OAuth, I get a 500 error because oauth-proxy sidecar doesn't trust the custom CA bundle defined at installation time with the `additionalTrustBundle`.
* link:https://issues.redhat.com/browse/TRACING-1166[TRACING-1166] It is not currently possible to use the Jaeger streaming strategy within a disconnected environment. When a Kafka cluster is being provisioned, it results in a error: `Failed to pull image registry.redhat.io/amq7/amq-streams-kafka-24-rhel7@sha256:f9ceca004f1b7dccb3b82d9a8027961f9fe4104e0ed69752c0bdd8078b4a1076`.
* link:https://issues.redhat.com/browse/TRACING-809[TRACING-809] Jaeger Ingester is incompatible with Kafka 2.3. When there are two or more instances of the Jaeger Ingester and enough traffic it will continuously generate rebalancing messages in the logs. This is due to a regression in Kafka 2.3 that was fixed in Kafka 2.3.1. For more information, see https://github.com/jaegertracing/jaeger/issues/1819[Jaegertracing-1819].
* link:https://bugzilla.redhat.com/show_bug.cgi?id=1918920[BZ-1918920]/link:https://issues.redhat.com/browse/LOG-1619[LOG-1619] The Elasticsearch pods does not get restarted automatically after an update.
+
Workaround: Restart the pods manually.


@@ -1,35 +0,0 @@
////
Module included in the following assemblies:
* service_mesh/v2x/servicemesh-release-notes.adoc
* distributed-tracing--release-notes.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-rn-known-issues_{context}"]
= {DTProductName} known issues
////
Consequence - What user action or situation would make this problem appear (Selecting the Foo option with the Bar version 1.3 plugin enabled results in an error message)? What did the customer experience as a result of the issue? What was the symptom?
Cause (if it has been identified) - Why did this happen?
Workaround (If there is one)- What can you do to avoid or negate the effects of this issue in the meantime? Sometimes if there is no workaround it is worthwhile telling readers to contact support for advice. Never promise future fixes.
Result - If the workaround does not completely address the problem.
////
These limitations exist in {DTProductName}:
* Apache Spark is not supported.
ifndef::openshift-rosa[]
* The streaming deployment via AMQ/Kafka is unsupported on IBM Z and IBM Power Systems.
endif::openshift-rosa[]
These are the known issues for {DTProductName}:
* link:https://issues.redhat.com/browse/OBSDA-220[OBSDA-220] In some cases, if you try to pull an image using {OTELShortName}, the image pull fails and a `Failed to pull image` error message appears.
There is no workaround for this issue.
* link:https://issues.redhat.com/browse/TRACING-2057[TRACING-2057] The Kafka API has been updated to `v1beta2` to support the Strimzi Kafka Operator 0.23.0. However, this API version is not supported by AMQ Streams 1.6.3. If you have the following environment, your Jaeger services will not be upgraded, and you cannot create new Jaeger services or modify existing Jaeger services:
** Jaeger Operator channel: *1.17.x stable* or *1.20.x stable*
** AMQ Streams Operator channel: *amq-streams-1.6.x*
+
To resolve this issue, switch the subscription channel for your AMQ Streams Operator to either *amq-streams-1.7.x* or *stable*.


@@ -1,236 +0,0 @@
////
Module included in the following assemblies:
- distributed-tracing-release-notes.adoc
////
////
Feature Describe the new functionality available to the customer. For enhancements, try to describe as specifically as possible where the customer will see changes.
Reason If known, include why has the enhancement been implemented (use case, performance, technology, etc.). For example, showcases integration of X with Y, demonstrates Z API feature, includes latest framework bug fixes.
Result If changed, describe the current user experience.
////
:_content-type: REFERENCE
[id="distr-tracing-rn-new-features_{context}"]
= New features and enhancements
This release adds improvements related to the following components and concepts.
== New features and enhancements {DTProductName} 2.8
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.8
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.42
|{OTELName}
|OpenTelemetry
|0.74.0
|{TempoName}
|{TempoShortName}
|0.1.0
|===
== New features and enhancements {DTProductName} 2.7
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.7
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.39
|{OTELName}
|OpenTelemetry
|0.63.1
|===
== New features and enhancements {DTProductName} 2.6
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.6
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.38
|{OTELName}
|OpenTelemetry
|0.60
|===
== New features and enhancements {DTProductName} 2.5
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the {JaegerName} Operator. The Operator now automatically enables the OTLP ports:
* Port 4317 is used for OTLP gRPC protocol.
* Port 4318 is used for OTLP HTTP protocol.
This release also adds support for collecting Kubernetes resource attributes to the {OTELName} Operator.
=== Component versions supported in {DTProductName} version 2.5
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.36
|{OTELName}
|OpenTelemetry
|0.56
|===
== New features and enhancements {DTProductName} 2.4
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator.
* Self-provisioning, which means using the {JaegerName} Operator to call the Red Hat Elasticsearch Operator during installation. Self provisioning is fully supported with this release.
* Creating the Elasticsearch instance and certificates first and then configuring the {JaegerShortName} to use the certificate is a Technology Preview for this release.
[NOTE]
====
When upgrading to {DTProductName} 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing will be down and unavailable for that period.
====
=== Component versions supported in {DTProductName} version 2.4
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.34.1
|{OTELName}
|OpenTelemetry
|0.49
|===
== New features and enhancements {DTProductName} 2.3.1
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.3.1
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.30.2
|{OTELName}
|OpenTelemetry
|0.44.1-1
|===
== New features and enhancements {DTProductName} 2.3.0
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
With this release, the {JaegerName} Operator is now installed to the `openshift-distributed-tracing` namespace by default. Before this update, the default installation had been in the `openshift-operators` namespace.
=== Component versions supported in {DTProductName} version 2.3.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.30.1
|{OTELName}
|OpenTelemetry
|0.44.0
|===
== New features and enhancements {DTProductName} 2.2.0
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.2.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.30.0
|{OTELName}
|OpenTelemetry
|0.42.0
|===
== New features and enhancements {DTProductName} 2.1.0
This release of {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.1.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.29.1
|{OTELName}
|OpenTelemetry
|0.41.1
|===
== New features and enhancements {DTProductName} 2.0.0
This release marks the rebranding of Red Hat OpenShift Jaeger to {DTProductName}. This release consists of the following changes, additions, and improvements:
* {DTProductName} now consists of the following two main components:
** *{JaegerName}* - This component is based on the open source link:https://www.jaegertracing.io/[Jaeger project].
** *{OTELName}* - This component is based on the open source link:https://opentelemetry.io/[OpenTelemetry project].
* Updates {JaegerName} Operator to Jaeger 1.28. Going forward, {DTProductName} will only support the `stable` Operator channel. Channels for individual releases are no longer supported.
* Introduces a new {OTELName} Operator based on OpenTelemetry 0.33. Note that this Operator is a Technology Preview feature.
* Adds support for OpenTelemetry protocol (OTLP) to the Query service.
* Introduces a new distributed tracing icon that appears in the OpenShift OperatorHub.
* Includes rolling updates to the documentation to support the name change and new features.
This release also addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
=== Component versions supported in {DTProductName} version 2.0.0
[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.28.0
|{OTELName}
|OpenTelemetry
|0.33.0
|===

@@ -1,86 +0,0 @@
////
Module included in the following assemblies:
- rhbjaeger-release-notes.adoc
////
:_content-type: CONCEPT
[id="distr-tracing-rn-technology-preview_{context}"]
= {DTProductName} Technology Preview
////
Provide the following info for each issue if possible:
Description - Describe the new functionality available to the customer. For enhancements, try to describe as specifically as possible where the customer will see changes. Avoid the word “supports” as in [product] now supports [feature] to avoid customer confusion with full support. Say, for example, “available as a Technology Preview.”
Package - A brief description of what the customer has to install or enable to use the Technology Preview feature. (e.g., available in quickstart.zip on customer portal, JDF website, container on registry, enable option, etc.)
////
[IMPORTANT]
====
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
====
== {DTProductName} 2.8.0 Technology Preview
This release introduces support for {TempoName} as a Technology Preview feature for {DTProductName}.
The feature uses version 0.1.0 of {TempoName} and version 2.0.1 of the upstream {TempoShortName} components.
You can use {TempoName} to replace Jaeger so that you can use S3-compatible storage instead of Elasticsearch.
Most users who use {TempoName} instead of Jaeger will not notice any difference in functionality because {TempoShortName} supports the same ingestion and query protocols as Jaeger and uses the same user interface.
If you enable this Technology Preview feature, note the following limitations of the current implementation:
* {TempoName} currently does not support disconnected installations. (link:https://issues.redhat.com/browse/TRACING-3145[TRACING-3145])
* When you use the Jaeger user interface (UI) with {TempoName}, the Jaeger UI lists only services that have sent traces within the last 15 minutes.
Traces for services that have not sent traces within the last 15 minutes are still stored even though they are not visible in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])
Expanded support for the Tempo Operator is planned for future releases of {DTProductName}.
Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters.
For more information about the Tempo Operator, see the documentation for the link:https://grafana.com/docs/tempo/latest/setup/operator/[Community Tempo Operator].
== {DTProductName} 2.4.0 Technology Preview
This release also adds support for auto-provisioning certificates using the Red Hat Elasticsearch Operator.
* Self-provisioning, which means using the {JaegerName} Operator to call the Red Hat Elasticsearch Operator during installation. Self-provisioning is fully supported with this release.
* Creating the Elasticsearch instance and certificates first and then configuring the {JaegerShortName} to use the certificate is a Technology Preview for this release.
== {DTProductName} 2.2.0 Technology Preview
Unsupported OpenTelemetry Collector components included in the 2.1 release have been removed.
== {DTProductName} 2.1.0 Technology Preview
This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. In the new version, the `ca_file` moves under `tls` in the custom resource, as shown in the following examples.
.CA file configuration for OpenTelemetry version 0.33
[source,yaml]
----
spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----
.CA file configuration for OpenTelemetry version 0.41.1
[source,yaml]
----
spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----
== {DTProductName} 2.0.0 Technology Preview
This release includes the addition of the {OTELName}, which you install using the {OTELName} Operator. {OTELName} is based on the link:https://opentelemetry.io/[OpenTelemetry] APIs and instrumentation.
{OTELName} includes the OpenTelemetry Operator and Collector. The Collector can be used to receive traces in either the OpenTelemetry or Jaeger protocol and send the trace data to {DTProductName}. Other capabilities of the Collector are not supported at this time.
The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling.

@@ -1,13 +1,13 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="dist-tracing-sidecar-automatic_{context}"]
= Automatically injecting sidecars
The {JaegerName} Operator can inject Jaeger Agent sidecars into deployment workloads. To enable automatic injection of sidecars, add the `sidecar.jaegertracing.io/inject` annotation set to either the string `true` or to the {JaegerShortName} instance name that is returned by running `$ oc get jaegers`.
When you specify `true`, there must be only a single {JaegerShortName} instance for the same namespace as the deployment. Otherwise, the Operator is unable to determine which {JaegerShortName} instance to use. A specific {JaegerShortName} instance name on a deployment has a higher precedence than `true` applied on its namespace.
The following snippet shows a simple application that will inject a sidecar, with the agent pointing to the single {JaegerShortName} instance available in the same namespace:
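In this illustrative example, the deployment name, labels, and image are placeholder values:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  annotations:
    "sidecar.jaegertracing.io/inject": "true" <1>
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: acme/myapp:myversion
----
<1> Set the annotation to `"true"`, or to the name of a specific {JaegerShortName} instance.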

@@ -1,6 +1,6 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying-jaeger.adoc
- distr_tracing_jaeger/distr-tracing-jaeger-configuring.adoc
////
:_content-type: REFERENCE
[id="distr-tracing-sidecar-manual_{context}"]

@@ -0,0 +1,149 @@
// Module included in the following assemblies:
//
// * distr_tracing_tempo/distr-tracing-tempo-configuring.adoc
:_content-type: REFERENCE
[id="distr-tracing-tempo-config-default_{context}"]
= Distributed tracing default configuration options
The Tempo custom resource (CR) defines the architecture and settings to be used when creating the {TempoShortName} resources. You can modify these parameters to customize your {TempoShortName} implementation to your business needs.
.Example of a generic Tempo YAML file
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: name
spec:
  storage: {}
  resources: {}
  storageSize: 200M
  replicationFactor: 1
  retention: {}
  template:
    distributor: {}
    ingester: {}
    compactor: {}
    querier: {}
    queryFrontend: {}
    gateway: {}
----
.Tempo parameters
[options="header"]
|===
|Parameter |Description |Values |Default value
|`apiVersion:`
|API version to use when creating the object.
|`tempo.grafana.com/v1alpha1`
|`tempo.grafana.com/v1alpha1`
|`kind:`
|Defines the kind of Kubernetes object to create.
|`TempoStack`
|
|`metadata:`
|Data that uniquely identifies the object, including a `name` string, `UID`, and optional `namespace`.
|
|{product-title} automatically generates the `UID` and completes the `namespace` with the name of the project where the object is created.
|`name:`
|Name for the object.
|Name of your TempoStack instance.
|`tempo-all-in-one-inmemory`
|`spec:`
|Specification for the object to be created.
|Contains all of the configuration parameters for your TempoStack instance. When a common definition for all Tempo components is required, it is defined under the `spec` node. When the definition relates to an individual component, it is placed under the `spec/template/<component>` node.
|N/A
|`resources:`
|Resources assigned to the TempoStack.
|
|
|`storageSize:`
|Storage size for ingester PVCs.
|
|
|`replicationFactor:`
|Configuration for the replication factor.
|
|
|`retention:`
|Configuration options for retention of traces.
|
|
|`storage:`
|Configuration options that define the storage. All storage-related options must be placed under `storage` and not under the `allInOne` or other component options.
|
|
|`template.distributor:`
|Configuration options for the Tempo `distributor`.
|
|
|`template.ingester:`
|Configuration options for the Tempo `ingester`.
|
|
|`template.compactor:`
|Configuration options for the Tempo `compactor`.
|
|
|`template.querier:`
|Configuration options for the Tempo `querier`.
|
|
|`template.queryFrontend:`
|Configuration options for the Tempo `query-frontend`.
|
|
|`template.gateway:`
|Configuration options for the Tempo `gateway`.
|
|
|===
.Minimum required configuration
The following is the required minimum for creating a {TempoShortName} deployment with the default settings:
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  storage: <1>
    secret:
      name: minio
      type: s3
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          type: route
----
<1> This section specifies the deployed object storage back end, which requires a created secret with credentials for access to the object storage.
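For example, for S3-compatible storage such as MinIO, the secret might look like the following, where the endpoint and credential values are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: minio
stringData:
  endpoint: http://minio:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <password>
type: Opaque
----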

@@ -0,0 +1,67 @@
// Module included in the following assemblies:
//
// * distr_tracing_tempo/distr-tracing-tempo-configuring.adoc
:_content-type: REFERENCE
[id="distr-tracing-tempo-config-query_{context}"]
= Query configuration options
Query is a service that retrieves traces from storage and hosts the user interface to display them.
.Parameters used by the {TempoOperator} to define Query
[options="header"]
[cols="l, a, a, a"]
|===
|Parameter |Description |Values |Default value
|spec:
 query:
  replicas:
|Specifies the number of Query replicas to create.
|Positive integer
|
|===
.Configuration parameters passed to Query
[options="header"]
[cols="l, a, a, a"]
|===
|Parameter |Description |Values |Default value
|spec:
 query:
  options: {}
|Configuration options that define the Query service.
|
|
|options:
 log-level:
|Logging level for Query.
|`debug`, `info`, `warn`, `error`, `fatal`, `panic`
|
|options:
 query:
  base-path:
|You can set the base path for all tempo-query HTTP routes to a non-root value: for example, `/tempo` will cause all UI URLs to start with `/tempo`. This can be useful when running `tempo-query` behind a reverse proxy.
|`/<path>`
|
|===
.Sample Query configuration
[source,yaml]
----
apiVersion: tempotracing.io/v1
kind: "Tempo"
metadata:
  name: "my-tempo"
spec:
  strategy: allInOne
  allInOne:
    options:
      log-level: debug
      query:
        base-path: /tempo
----

@@ -0,0 +1,39 @@
// Module included in the following assemblies:
//
// * distr_tracing_tempo/distr-tracing-tempo-configuring.adoc
:_content-type: REFERENCE
[id="distr-tracing-tempo-config-storage_{context}"]
= The {TempoShortName} storage configuration
You can configure object storage for the {TempoShortName} in the `TempoStack` custom resource under `spec.storage`. You can choose from among several storage providers that are supported.
.General storage parameters used by the {TempoOperator} to define distributed tracing storage
[options="header"]
[cols="l, a, a, a"]
|===
|Parameter |Description |Values |Default value
|spec:
 storage:
  secret:
   type:
|Type of storage to use for the deployment.
|`memory`. Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments because the data does not persist when the pod is shut down.
|`memory`
|storage:
 secret:
  name:
|Name of the secret that contains the credentials for the set object storage type.
|
|N/A
|storage:
 tls:
  caName:
|CA is the name of a `ConfigMap` object containing a CA certificate.
|
|
|===
include::snippets/distr-tracing-tempo-required-secret-parameters.adoc[]
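For example, the following fragment combines the parameters from the preceding table; the secret and config map names are placeholders:

[source,yaml]
----
spec:
  storage:
    secret:
      name: <secret_name>
      type: s3
    tls:
      caName: <ca_configmap_name>
----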

@@ -0,0 +1,23 @@
// Module included in the following assemblies:
//
// * distr-tracing-tempo-configuring.adoc
:_content-type: PROCEDURE
[id="configuring-tempooperator-metrics-and-alerts_{context}"]
= Configuring {TempoOperator} metrics and alerts
When installing the {TempoOperator} from the web console, you can select the *Enable Operator recommended cluster monitoring on this Namespace* checkbox, which enables creating metrics and alerts of the {TempoOperator}.
If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the {TempoOperator}.
.Procedure
* Add the `openshift.io/cluster-monitoring: "true"` label in the project where the {TempoOperator} is installed, which is `openshift-tempo-operator` by default.
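+
For example, with the default installation namespace, you can add the label by running the following command:
+
[source,terminal]
----
$ oc label namespace openshift-tempo-operator openshift.io/cluster-monitoring="true"
----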
.Verification
You can use the *Administrator* view of the web console to verify successful configuration:
. Go to *Observe* -> *Targets*, filter for *Source: Platform*, and search for `tempo-operator`, which must have the *Up* status.
. To verify that alerts are set up correctly, go to *Observe* -> *Alerting* -> *Alerting rules*, filter for *Source: Platform*, and locate the *Alert rules* for the *{TempoOperator}*.

@@ -0,0 +1,51 @@
// Module included in the following assemblies:
//
// * distr-tracing-tempo-configuring.adoc
:_content-type: PROCEDURE
[id="configuring-tempostack-metrics-and-alerts_{context}"]
= Configuring TempoStack metrics and alerts
You can enable metrics and alerts of TempoStack instances.
.Prerequisites
* Monitoring for user-defined projects is enabled in the cluster. See xref:../../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects].
.Procedure
. To enable metrics of a TempoStack instance, set the `spec.observability.metrics.createServiceMonitors` field to `true`:
+
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: <name>
spec:
  observability:
    metrics:
      createServiceMonitors: true
----
. To enable alerts for a TempoStack instance, set the `spec.observability.metrics.createPrometheusRules` field to `true`:
+
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: <name>
spec:
  observability:
    metrics:
      createPrometheusRules: true
----
.Verification
You can use the *Administrator* view of the web console to verify successful configuration:
. Go to *Observe* -> *Targets*, filter for *Source: User*, and check that *ServiceMonitors* in the format `tempo-<instance_name>-<component>` have the *Up* status.
. To verify that alerts are set up correctly, go to *Observe* -> *Alerting* -> *Alerting rules*, filter for *Source: User*, and check that the *Alert rules* for the TempoStack instance components are available.

@@ -0,0 +1,236 @@
// Module included in the following assemblies:
//
//* distr_tracing_tempo/distr-tracing-tempo-installing.adoc
:_content-type: PROCEDURE
[id="distr-tracing-tempo-install-cli_{context}"]
= Installing the {TempoShortName} by using the CLI
You can install the {TempoShortName} from the command line.
.Prerequisites
* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====
* You are using a supported provider of object storage: link:https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation[Red Hat OpenShift Data Foundation], link:https://min.io/[MinIO], link:https://aws.amazon.com/s3/[Amazon S3], link:https://azure.microsoft.com/en-us/products/storage/blobs/[Azure Blob Storage], link:https://cloud.google.com/storage/[Google Cloud Storage].
.Procedure
. Install the {TempoOperator}:
.. Create a project for the {TempoOperator} by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-tempo-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-tempo-operator
EOF
----
.. Create an operator group by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-tempo-operator
  namespace: openshift-tempo-operator
spec:
  upgradeStrategy: Default
EOF
----
.. Create a subscription by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: tempo-product
  namespace: openshift-tempo-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: tempo-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
----
.. Check the operator status by running the following command:
+
[source,terminal]
----
$ oc get csv -n openshift-tempo-operator
----
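+
If the installation succeeded, the output includes a cluster service version in the `Succeeded` phase, similar to the following example, in which the name and version are illustrative:
+
[source,terminal]
----
NAME                     DISPLAY          VERSION   PHASE
tempo-operator.v0.81.0   Tempo Operator   0.81.0    Succeeded
----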
. Create a secret for your object storage bucket by running one of the following commands:
** To create a secret from a YAML file:
+
[source,terminal]
----
$ oc apply -f <secret_file>.yaml
----
** To create a secret from standard input:
+
[source,terminal]
----
$ oc apply -f - << EOF
<object_storage_secret>
EOF
----
+
--
include::snippets/distr-tracing-tempo-required-secret-parameters.adoc[]
--
+
--
include::snippets/distr-tracing-tempo-secret-example.adoc[]
--
. Create a project of your choice for the *TempoStack* instance that you will create in the next step:
** To create the project without metadata:
+
[source,terminal]
----
$ oc new-project <project_of_tempostack_instance>
----
** To create a project from standard input with metadata:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_tempostack_instance>
EOF
----
. Create a *TempoStack* instance in the project that you created for the *TempoStack* instance in the previous step.
+
NOTE: You can create multiple *TempoStack* instances in separate projects on the same cluster.
+
.. Customize the `TempoStack` custom resource (CR):
+
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample
  namespace: <project_of_tempostack_instance>
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: <secret-name> <1>
      type: <secret-provider> <2>
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          route:
            termination: edge
          type: route
----
<1> The value of the `name` in the `metadata` of the secret.
<2> The accepted values are `azure` for Azure Blob Storage; `gcs` for Google Cloud Storage; and `s3` for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation.
+
.TempoStack CR for AWS S3 and MinIO storage
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
  namespace: <project_of_tempostack_instance>
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: minio-test
      type: s3
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          route:
            termination: edge
          type: route
----
The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI.
.. Apply the customized CR by running the following command.
+
[source,terminal]
----
$ oc apply -f - << EOF
<TempoStack_custom_resource>
EOF
----
.Verification
. Verify that the `status` of all TempoStack `components` is `Running` and the `conditions` are `type: Ready` by running the following command:
+
[source,terminal]
----
$ oc get tempostacks.tempo.grafana.com simplest -o yaml
----
. Verify that all the TempoStack component pods are running by running the following command:
+
[source,terminal]
----
$ oc get pods
----
. Access the Tempo console:
.. Query the route details by running the following command:
+
[source,terminal]
----
$ export TEMPO_URL=$(oc get route -n <control_plane_namespace> tempo -o jsonpath='{.spec.host}')
----
.. Open `\https://<route_from_previous_step>` in a web browser.
.. Log in using your cluster administrator credentials for the web console.
+
NOTE: The Tempo console initially shows no trace data following the Tempo console installation.
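Applications must be configured to send traces to the TempoStack instance before any data appears in the console. As an illustration, assuming the default service naming scheme `tempo-<instance_name>-distributor`, an OpenTelemetry Collector OTLP exporter could target the distributor service as follows:

[source,yaml]
----
exporters:
  otlp:
    endpoint: tempo-simplest-distributor.<project_of_tempostack_instance>.svc.cluster.local:4317
    tls:
      insecure: true
----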

@@ -0,0 +1,135 @@
// Module included in the following assemblies:
//
//* distr_tracing_tempo/distr-tracing-tempo-installing.adoc
:_content-type: PROCEDURE
[id="distr-tracing-tempo-install-web-console_{context}"]
= Installing the {TempoShortName} from the web console
You can install the {TempoShortName} from the *Administrator* view of the web console.
.Prerequisites
* You are logged in to the {product-title} web console as a cluster administrator with the `cluster-admin` role.
* For {product-dedicated}, you must be logged in using an account with the `dedicated-admin` role.
* You are using a supported provider of object storage: link:https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation[Red Hat OpenShift Data Foundation], link:https://min.io/[MinIO], link:https://aws.amazon.com/s3/[Amazon S3], link:https://azure.microsoft.com/en-us/products/storage/blobs/[Azure Blob Storage], link:https://cloud.google.com/storage/[Google Cloud Storage].
.Procedure
. Install the {TempoOperator}:
.. Go to *Operators* -> *OperatorHub* and search for `{TempoOperator}`.
.. Select the *{TempoOperator}*, which is listed as *OpenShift Operator for Tempo*, and then click *Install* -> *Install* -> *View Operator*.
+
[IMPORTANT]
====
This installs the Operator with the default presets:
* *Update channel* -> *stable*
* *Installation mode* -> *All namespaces on the cluster*
* *Installed Namespace* -> *openshift-tempo-operator*
* *Update approval* -> *Automatic*
====
.. In the *Details* tab of the page of the installed Operator, under *ClusterServiceVersion details*, verify that the installation *Status* is *Succeeded*.
. Create a secret for your object storage bucket: go to *Workloads* -> *Secrets* -> *Create* -> *From YAML*.
+
--
include::snippets/distr-tracing-tempo-required-secret-parameters.adoc[]
--
+
--
include::snippets/distr-tracing-tempo-secret-example.adoc[]
--
. Create a project of your choice for the *TempoStack* instance that you will create in the next step: go to *Home* -> *Projects* -> *Create Project*.
. Create a *TempoStack* instance.
+
NOTE: You can create multiple *TempoStack* instances in separate projects on the same cluster.
.. Go to *Operators* -> *Installed Operators*.
.. Select *TempoStack* -> *Create TempoStack* -> *YAML view*.
.. In the *YAML view*, customize the `TempoStack` custom resource (CR):
+
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: sample
  namespace: <project_of_tempostack_instance>
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: <secret-name> <1>
      type: <secret-provider> <2>
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          route:
            termination: edge
          type: route
----
<1> The value of the `name` in the `metadata` of the secret.
<2> The accepted values are `azure` for Azure Blob Storage; `gcs` for Google Cloud Storage; and `s3` for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation.
+
.Example of a TempoStack CR for AWS S3 and MinIO storage
[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
  namespace: <project_of_tempostack_instance>
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: minio-test
      type: s3
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  template:
    queryFrontend:
      jaegerQuery:
        enabled: true
        ingress:
          route:
            termination: edge
          type: route
----
The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI.
.. Select *Create*.
.Verification
. Use the *Project:* dropdown list to select the project of the *TempoStack* instance.
. Go to *Operators* -> *Installed Operators* to verify that the *Status* of the *TempoStack* instance is *Condition: Ready*.
. Go to *Workloads* -> *Pods* to verify that all the component pods of the *TempoStack* instance are running.
. Access the Tempo console:
.. Go to *Networking* -> *Routes* and kbd:[Ctrl+F] to search for `tempo`.
.. In the *Location* column, open the URL to access the Tempo console.
.. Select *Log In With OpenShift* to use your cluster administrator credentials for the web console.
+
NOTE: The Tempo console initially shows no trace data following the Tempo console installation.

@@ -0,0 +1,52 @@
//Module included in the following assemblies:
//
//* distr_tracing_install/dist-tracing-tempo-removing.adoc
:_content-type: PROCEDURE
[id="distr-tracing-removing-tempo-instance-cli_{context}"]
= Removing a TempoStack instance by using the CLI
You can remove a TempoStack instance on the command line.
.Prerequisites
* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====
.Procedure
. Get the name of the TempoStack instance by running the following command:
+
[source,terminal]
----
$ oc get deployments -n <project_of_tempostack_instance>
----
. Remove the TempoStack instance by running the following command:
+
[source,terminal]
----
$ oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>
----
. Optional: Remove the {TempoOperator}.
.Verification
. Run the following command to verify that the TempoStack instance is not found in the output, which indicates its successful removal:
+
[source,terminal]
----
$ oc get deployments -n <project_of_tempostack_instance>
----

@@ -0,0 +1,23 @@
//Module included in the following assemblies:
//
//* distr_tracing_install/dist-tracing-tempo-removing.adoc
:_content-type: PROCEDURE
[id="distr-tracing-removing-tempo-instance_{context}"]
= Removing a TempoStack instance by using the web console
You can remove a TempoStack instance in the *Administrator* view of the web console.
.Prerequisites
* You are logged in to the {product-title} web console as a cluster administrator with the `cluster-admin` role.
* For {product-dedicated}, you must be logged in using an account with the `dedicated-admin` role.
.Procedure
. Go to *Operators* -> *Installed Operators* -> *{TempoOperator}* -> *TempoStack*.
. To remove the TempoStack instance, select {kebab} -> *Delete TempoStack* -> *Delete*.
. Optional: Remove the {TempoOperator}.

@@ -0,0 +1,13 @@
//Module included in the following assemblies:
//
//* distr_tracing_install/dist-tracing-tempo-updating.adoc
:_content-type: CONCEPT
[id="distr-tracing-tempo-update-olm_{context}"]
= Automatic updates of the {TempoShortName}
For version upgrades, the {TempoOperator} uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.
The OLM runs in {product-title} by default. The OLM queries for available Operators as well as upgrades for installed Operators.
When the {TempoOperator} is upgraded to the new version, it scans for running TempoStack instances that it manages and upgrades them to the version corresponding to the Operator's new version.
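For example, you can verify the update channel and the currently installed version by running the following command:

[source,terminal]
----
$ oc get subscription tempo-product -n openshift-tempo-operator -o jsonpath='{.spec.channel} {.status.installedCSV}'
----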

@@ -10,43 +10,43 @@ When updating from Elasticsearch 5 to 6, you must delete your {JaegerShortName}
.Procedure if {JaegerShortName} is installed as part of {SMProductName}
. Determine the name of your Jaeger custom resource file by running the following command. In this example, `istio-system` is the control plane namespace.
+
[source,terminal]
----
$ oc get jaeger -n <istio-system>
----
+
You will see output similar to the following:
+
[source,terminal]
----
NAME STATUS VERSION STRATEGY STORAGE AGE
jaeger Running 1.24.1 production elasticsearch d21h
----
+
. Copy the generated custom resource file into a temporary directory by running the following command:
+
[source,terminal]
----
$ oc get jaeger jaeger -oyaml -n <istio-system> > /tmp/jaeger-cr.yaml
----
+
. Delete the {JaegerShortName} instance by running the following command:
+
[source,terminal]
----
$ oc delete jaeger jaeger -n <istio-system>
----
+
. Recreate the {JaegerShortName} instance from your copy of the custom resource file by running the following command:
+
[source,terminal]
----
$ oc create -f /tmp/jaeger-cr.yaml -n <istio-system>
----
+
. Delete the copy of the generated custom resource file by running the following command:
+
[source,terminal]
----
@@ -58,7 +58,7 @@ $ rm /tmp/jaeger-cr.yaml
Before you begin, create a copy of your Jaeger custom resource file.
. Run the following command to delete the Jaeger custom resource file, which deletes the {JaegerShortName} instance:
+
[source,terminal]
----
@@ -71,18 +71,18 @@ For example:
----
$ oc delete -f jaeger-prod-elasticsearch.yaml
----
+
. Recreate your {JaegerShortName} instance from the backup copy of your custom resource file by running the following command:
+
[source,terminal]
----
$ oc create -f <jaeger-cr-file>
----
+
. Validate that your pods have restarted by running the following command:
+
[source,terminal]
----
$ oc get pods -n <tracing-system> -w
----
+

@@ -1,4 +1,17 @@
:_module-type: CONCEPT
// Module included in the following assemblies:
//
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-0.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-1.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-2.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-3.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-4.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-5.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-6.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-7.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-8.adoc
// * distr_tracing/distr_tracing_rn/distr-tracing-rn-2-9.adoc
:_content-type: CONCEPT
[id="making-open-source-more-inclusive_{context}"]
= Making open source more inclusive
