Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00

OBSDOCS-694
Move the 'distr_tracing' and 'otel' dirs into the 'observability' dir. Update xrefs in the 'distr_tracing' and 'otel' dirs, in the 'welcome/index.adoc' file, and in the 'observability/index.adoc' file.

Committed by: openshift-cherrypick-robot
Parent: 62d6ddf29c
Commit: 262d7f726b
observability/distr_tracing/_attributes (symbolic link)
@@ -0,0 +1 @@
../_attributes/

observability/distr_tracing/distr_tracing_arch/_attributes (symbolic link)
@@ -0,0 +1 @@
../../_attributes/
@@ -0,0 +1,31 @@
:_mod-docs-content-type: ASSEMBLY
[id="distr-tracing-architecture"]
= Distributed tracing architecture
include::_attributes/common-attributes.adoc[]
:context: distributed-tracing-architecture

toc::[]

Every time a user takes an action in an application, the architecture executes a request that might require dozens of different services to participate in producing a response.
{DTProductName} lets you perform distributed tracing, which records the path of a request through the various microservices that make up an application.

_Distributed tracing_ is a technique that ties together information about different units of work, usually executed in different processes or hosts, so that you can understand the whole chain of events in a distributed transaction.
Developers can use distributed tracing to visualize call flows in large microservice architectures.
It is valuable for understanding serialization, parallelism, and sources of latency.

{DTProductName} records the execution of individual requests across the whole stack of microservices and presents them as traces. A _trace_ is a data/execution path through the system. An end-to-end trace consists of one or more spans.

A _span_ represents a logical unit of work in {DTProductName} that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans can be nested and ordered to model causal relationships.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

include::modules/distr-tracing-features.adoc[leveloffset=+1]

include::modules/distr-tracing-architecture.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_distributed-tracing-architecture"]
== Additional resources

// xreffing to the installation page until further notice because OTEL content is currently planned for internal restructuring across pages that is likely to result in renamed page files
* xref:../../otel/otel-installing.adoc#install-otel[{OTELName}]
observability/distr_tracing/distr_tracing_arch/images (symbolic link)
@@ -0,0 +1 @@
../../images/

observability/distr_tracing/distr_tracing_arch/modules (symbolic link)
@@ -0,0 +1 @@
../../modules/

observability/distr_tracing/distr_tracing_arch/snippets (symbolic link)
@@ -0,0 +1 @@
../../snippets/

observability/distr_tracing/distr_tracing_jaeger/_attributes (symbolic link)
@@ -0,0 +1 @@
../../_attributes/
@@ -0,0 +1,94 @@
:_mod-docs-content-type: ASSEMBLY
[id="distr-tracing-jaeger-configuring"]
= Configuring and deploying the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-jaeger-configuring

toc::[]

:FeatureName: The {JaegerName}
include::modules/deprecated-feature.adoc[]

The {JaegerName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {JaegerShortName} resources. You can install the default configuration or modify the file.

If you have installed {DTShortName} as part of {SMProductName}, you can perform basic configuration as part of the xref:../../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane], but for complete control, you must configure a Jaeger CR and then xref:../../../service_mesh/v2x/ossm-observability.adoc#ossm-config-external-jaeger_observability[reference your distributed tracing configuration file in the ServiceMeshControlPlane].

The {JaegerName} has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a {JaegerShortName} instance, the Operator uses this configuration file to create the objects necessary for the deployment.

.Jaeger custom resource file showing deployment strategy
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: MyConfigFile
spec:
  strategy: production <1>
----
<1> Deployment strategy.
[id="supported-deployment-strategies"]
== Supported deployment strategies

The {JaegerName} Operator currently supports the following deployment strategies:

`allInOne`:: This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage.
+
[NOTE]
====
In-memory storage is not persistent, which means that if the {JaegerShortName} instance shuts down, restarts, or is replaced, your trace data is lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the `production` or `streaming` strategy, which uses Elasticsearch as the default storage.
====

`production`:: This strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience.

`streaming`:: This strategy is designed to augment the `production` strategy by providing a streaming capability that sits between the Collector and the Elasticsearch backend storage. This reduces the pressure on the backend storage under high-load situations and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/using_amq_streams_on_openshift/index[AMQ Streams] / https://kafka.apache.org/documentation/[Kafka]).
+
[NOTE]
====
* The streaming strategy requires an additional Red Hat subscription for AMQ Streams.

* The streaming deployment strategy is currently unsupported on {ibm-z-name}.
====
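The `production` strategy is typically paired with an Elasticsearch storage configuration in the Jaeger custom resource. The following is a minimal sketch, not a definitive configuration: the instance name and node count are hypothetical, and the exact storage fields can vary by Operator version.

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production # hypothetical instance name
spec:
  strategy: production
  storage:
    type: elasticsearch # default storage for the production strategy
    elasticsearch:
      nodeCount: 3 # hypothetical sizing
      redundancyPolicy: SingleRedundancy
----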
include::modules/distr-tracing-deploy-default.adoc[leveloffset=+1]

include::modules/distr-tracing-deploy-production-es.adoc[leveloffset=+1]

include::modules/distr-tracing-deploy-streaming.adoc[leveloffset=+1]

[id="validating-your-jaeger-deployment"]
== Validating your deployment

include::modules/distr-tracing-accessing-jaeger-console.adoc[leveloffset=+2]

[id="customizing-your-deployment"]
== Customizing your deployment

include::modules/distr-tracing-deployment-best-practices.adoc[leveloffset=+2]

ifdef::openshift-enterprise,openshift-dedicated[]
For information about configuring persistent storage, see xref:../../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage] and the appropriate configuration topic for your chosen storage option.
endif::[]

include::modules/distr-tracing-config-default.adoc[leveloffset=+2]

include::modules/distr-tracing-config-jaeger-collector.adoc[leveloffset=+2]

include::modules/distr-tracing-config-sampling.adoc[leveloffset=+2]

include::modules/distr-tracing-config-storage.adoc[leveloffset=+2]

include::modules/distr-tracing-config-query.adoc[leveloffset=+2]

include::modules/distr-tracing-config-ingester.adoc[leveloffset=+2]

[id="injecting-sidecars"]
== Injecting sidecars

The {JaegerName} relies on a proxy sidecar within the application's pod to provide the Agent. The {JaegerName} Operator can inject Agent sidecars into deployment workloads. You can enable automatic sidecar injection or manage it manually.

include::modules/distr-tracing-sidecar-automatic.adoc[leveloffset=+2]

include::modules/distr-tracing-sidecar-manual.adoc[leveloffset=+2]
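With automatic injection, the Operator is typically asked to add the Agent by annotating the workload. The following sketch assumes the `sidecar.jaegertracing.io/inject` annotation used by the Jaeger Operator; the Deployment name and image are hypothetical.

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp # hypothetical workload name
  annotations:
    sidecar.jaegertracing.io/inject: "true" # request Agent sidecar injection
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: quay.io/example/myapp:latest # hypothetical image
----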
@@ -0,0 +1,40 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-jaeger-installing"]
= Installing the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-jaeger-installing

toc::[]

:FeatureName: The {JaegerName}
include::modules/deprecated-feature.adoc[]

You can install {DTProductName} on {product-title} in either of two ways:

* You can install {DTProductName} as part of {SMProductName}. Distributed tracing is included by default in the Service Mesh installation. To install {DTProductName} as part of a service mesh, follow the xref:../../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Red Hat Service Mesh Installation] instructions. You must install {DTProductName} in the same namespace as your service mesh, that is, the `ServiceMeshControlPlane` and the {DTProductName} resources must be in the same namespace.

* If you do not want to install a service mesh, you can use the {DTProductName} Operators to install {DTShortName} by itself. To install {DTProductName} without a service mesh, use the following instructions.

[id="prerequisites"]
== Prerequisites

Before you can install {DTProductName}, review the installation activities and ensure that you meet the following prerequisites:

* You have an active {product-title} subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.

* You have reviewed the xref:../../../architecture/architecture-installation.adoc#installation-overview_architecture-installation[{product-title} {product-version} overview].
* You have installed {product-title} {product-version}:

** xref:../../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[Install {product-title} {product-version} on AWS]
** xref:../../../installing/installing_aws/installing-aws-user-infra.adoc#installing-aws-user-infra[Install {product-title} {product-version} on user-provisioned AWS]
** xref:../../../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[Install {product-title} {product-version} on bare metal]
** xref:../../../installing/installing_vsphere/upi/installing-vsphere.adoc#installing-vsphere[Install {product-title} {product-version} on vSphere]
* You have installed the version of the `oc` CLI tool that matches your {product-title} version and added it to your path.

* You have an account with the `cluster-admin` role.

include::modules/distr-tracing-install-overview.adoc[leveloffset=+1]

include::modules/distr-tracing-install-elasticsearch.adoc[leveloffset=+1]

include::modules/distr-tracing-install-jaeger-operator.adoc[leveloffset=+1]
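After working through the installation modules above, you can sketch a quick check that the Operators are healthy. This assumes the Operators were installed for all namespaces into `openshift-operators`, which depends on the installation mode you chose:

[source,terminal]
----
$ oc get csv -n openshift-operators # list installed Operators and their phases
$ oc get pods -n openshift-operators # confirm the Operator pods are running
----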
@@ -0,0 +1,30 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-jaeger-removing"]
= Removing the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-jaeger-removing

toc::[]

:FeatureName: The {JaegerName}
include::modules/deprecated-feature.adoc[]

The steps for removing {DTProductName} from an {product-title} cluster are as follows:

. Shut down any {DTProductName} pods.
. Remove any {DTProductName} instances.
. Remove the {JaegerName} Operator.
. Remove the {OTELName} Operator.

include::modules/distr-tracing-removing-instance.adoc[leveloffset=+1]

include::modules/distr-tracing-removing-instance-cli.adoc[leveloffset=+1]

[id="removing-distributed-tracing-operators"]
== Removing the {DTProductName} Operators

.Procedure

. Follow the instructions in xref:../../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster] to remove the {JaegerName} Operator.

. Optional: After the {JaegerName} Operator has been removed, remove the OpenShift Elasticsearch Operator.
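The removal steps above can be sketched with the `oc` CLI; because Jaeger instances are custom resources, deleting the resource also removes the pods that the Operator created for it. The instance name and namespace below are hypothetical:

[source,terminal]
----
$ oc get jaegers --all-namespaces # find existing instances
$ oc delete jaeger jaeger-production -n tracing-namespace # hypothetical name and namespace
----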
@@ -0,0 +1,28 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-jaeger-updating"]
= Updating the distributed tracing platform Jaeger
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-jaeger-updating

toc::[]

:FeatureName: The {JaegerName}
include::modules/deprecated-feature.adoc[]

Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in {product-title}.
OLM queries for available Operators as well as upgrades for installed Operators.

During an update, the {DTProductName} Operators upgrade the managed {DTShortName} instances to the version associated with the Operator. Whenever a new version of the {JaegerName} Operator is installed, all the {JaegerShortName} application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running {JaegerShortName} instances and upgrades them to 1.11 as well.

[IMPORTANT]
====
If you have not already updated your OpenShift Elasticsearch Operator as described in xref:../../../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading_cluster-logging-upgrading[Updating OpenShift Logging], complete that update before updating your {JaegerName} Operator.
====

[role="_additional-resources"]
[id="additional-resources_dist-tracing-jaeger-updating"]
== Additional resources

* xref:../../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources]
* xref:../../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
* xref:../../../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading_cluster-logging-upgrading[Updating OpenShift Logging]
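Because OLM drives the upgrade, you can sketch a check of which channel and version your Subscription tracks before updating. The Subscription name `jaeger-product` and the namespace are assumptions that depend on how the Operator was installed:

[source,terminal]
----
$ oc get subscription jaeger-product -n openshift-operators -o yaml # inspect the channel and installed CSV
----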
observability/distr_tracing/distr_tracing_jaeger/images (symbolic link)
@@ -0,0 +1 @@
../../images/

observability/distr_tracing/distr_tracing_jaeger/modules (symbolic link)
@@ -0,0 +1 @@
../../modules/

observability/distr_tracing/distr_tracing_jaeger/snippets (symbolic link)
@@ -0,0 +1 @@
../../snippets/

observability/distr_tracing/distr_tracing_rn/_attributes (symbolic link)
@@ -0,0 +1 @@
../../_attributes/
@@ -0,0 +1,119 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="distributed-tracing-rn-3-1"]
= Release notes for {DTProductName} 3.1
:context: distributed-tracing-rn-3-1

toc::[]

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

[id="distributed-tracing-rn_3-1_component-versions"]
== Component versions in the {DTProductName} 3.1

[options="header"]
|===
|Operator |Component |Version

|{TempoName}
|Tempo
|2.3.1

//for each new release, update the release number in the xref path on the following line
|xref:../../otel/otel_rn/otel-rn-3.1.adoc#otel-rn-3-1[{OTELName}]
|OpenTelemetry
|0.93.0

|{JaegerName} (deprecated)
|Jaeger
|1.53.0
|===
// Tempo section
[id="distributed-tracing-rn_3-1_tempo-release-notes"]
== {TempoName}

////
[id="technology-preview-features_jaeger-release-notes_distributed-tracing-rn-3-1"]
=== Technology Preview features

This update introduces the following Technology Preview feature for the {TempoShortName}:

* Monolithic custom resource.

:FeatureName: The monolithic custom resource
include::snippets/technology-preview.adoc[leveloffset=+1]
////

[id="distributed-tracing-rn_3-1_tempo-release-notes_new-features-and-enhancements"]
=== New features and enhancements

This update introduces the following enhancements for the {TempoShortName}:

* Support for cluster-wide proxy environments.
* Support for TraceQL to the Gateway component.

[id="distributed-tracing-rn_3-1_tempo-release-notes_bug-fixes"]
=== Bug fixes

This update introduces the following bug fixes for the {TempoShortName}:

* Before this update, when a TempoStack instance was created with the `monitorTab` enabled in {product-title} 4.15, the required `tempo-redmetrics-cluster-monitoring-view` ClusterRoleBinding was not created. This update resolves the issue by fixing the Operator RBAC for the monitor tab when the Operator is deployed in an arbitrary namespace. (link:https://issues.redhat.com/browse/TRACING-3786[TRACING-3786])
* Before this update, when a TempoStack instance was created on an {product-title} cluster with only an IPv6 networking stack, the compactor and ingestor pods ran in the `CrashLoopBackOff` state, resulting in multiple errors. This update provides support for IPv6 clusters. (link:https://issues.redhat.com/browse/TRACING-3226[TRACING-3226])

[id="distributed-tracing-rn_3-1_tempo-release-notes_known-issues"]
=== Known issues

//There is currently a known issue:
There are currently known issues:

* Currently, when used with the {TempoOperator}, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])
* Currently, the {TempoShortName} fails on the {ibm-z-title} (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545])
// Jaeger section
[id="distributed-tracing-rn_3-1_jaeger-release-notes"]
== {JaegerName}

[id="distributed-tracing-rn_3-1_jaeger-release-notes_support-for-elasticsearch-operator"]
=== Support for {es-op}

{JaegerName} 3.1 is supported for use with the {es-op} 5.6, 5.7, and 5.8.

[id="distributed-tracing-rn_3-1_jaeger-release-notes_deprecated-functionality"]
=== Deprecated functionality

In the {DTProductName} 3.1, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements.

In the {DTProductName} 3.1, Tempo provided by the {TempoOperator} and the OpenTelemetry Collector provided by the {OTELName} are the preferred Operators for distributed tracing collection and storage. All users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack that will be enhanced going forward.

////
[id="distributed-tracing-rn_3-1_jaeger-release-notes_new-features-and-enhancements"]
=== New features and enhancements
This update introduces the following enhancements for the {JaegerShortName}:
* Support for ...
////

[id="distributed-tracing-rn_3-1_jaeger-release-notes_bug-fixes"]
=== Bug fixes

This update introduces the following bug fix for the {JaegerShortName}:

* Before this update, the connection target URL for the `jaeger-agent` container in the `jaeger-query` pod was overwritten with another namespace URL in {product-title} 4.13. This was caused by a bug in the sidecar injection code in the `jaeger-operator`, causing nondeterministic `jaeger-agent` injection. With this update, the Operator prioritizes the Jaeger instance from the same namespace as the target deployment. (link:https://issues.redhat.com/browse/TRACING-3722[TRACING-3722])

[id="distributed-tracing-rn_3-1_jaeger-release-notes_known-issues"]
=== Known issues

//There is currently a known issue:
There are currently known issues:

* Currently, Apache Spark is not supported.

ifndef::openshift-rosa[]
* Currently, the streaming deployment via AMQ/Kafka is not supported on the {ibm-z-title} and {ibm-power-title} architectures.
endif::openshift-rosa[]

include::modules/support.adoc[leveloffset=+1]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
@@ -0,0 +1,793 @@
:_mod-docs-content-type: ASSEMBLY
[id="distr-tracing-rn-past-releases"]
= Release notes for past releases of {DTProductName}
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-rn-past-releases

toc::[]

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

[id="distr-tracing-rn_3.0"]
== Release notes for {DTProductName} 3.0

[id="distributed-tracing-rn_3-0_component-versions"]
=== Component versions in the {DTProductName} 3.0

[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.51.0

|{TempoName}
|Tempo
|2.3.0
|===
// Jaeger section
[id="distributed-tracing-rn_3-0_jaeger-release-notes"]
=== {JaegerName}

[id="distributed-tracing-rn_3-0_jaeger-release-notes_deprecated-functionality"]
==== Deprecated functionality

In the {DTProductName} 3.0, Jaeger and support for Elasticsearch are deprecated, and both are planned to be removed in a future release. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements.

In the {DTProductName} 3.0, Tempo provided by the {TempoOperator} and the OpenTelemetry Collector provided by the {OTELName} are the preferred Operators for distributed tracing collection and storage. All users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack that will be enhanced going forward.

[id="distributed-tracing-rn_3-0_jaeger-release-notes_new-features-and-enhancements"]
==== New features and enhancements

This update introduces the following enhancements for the {JaegerShortName}:

* Support for the ARM architecture.
* Support for cluster-wide proxy environments.

[id="distributed-tracing-rn_3-0_jaeger-release-notes_bug-fixes"]
==== Bug fixes

This update introduces the following bug fix for the {JaegerShortName}:

* Before this update, the {JaegerName} Operator used images other than those specified in `relatedImages`. This caused the *ImagePullBackOff* error in disconnected network environments when launching the `jaeger` pod, because the `oc adm catalog mirror` command mirrors only the images specified in `relatedImages`. This update provides support for disconnected environments when using the `oc adm catalog mirror` CLI command. (link:https://issues.redhat.com/browse/TRACING-3546[TRACING-3546])

[id="distributed-tracing-rn_3-0_jaeger-release-notes_known-issues"]
==== Known issues

There is currently a known issue:
//There are currently known issues:

* Currently, Apache Spark is not supported.

ifndef::openshift-rosa[]
* Currently, the streaming deployment via AMQ/Kafka is not supported on the {ibm-z-title} and {ibm-power-title} architectures.
endif::openshift-rosa[]
// Tempo section
[id="distributed-tracing-rn_3-0_tempo-release-notes"]
=== {TempoName}

[id="distributed-tracing-rn_3-0_tempo-release-notes_new-features-and-enhancements"]
==== New features and enhancements

This update introduces the following enhancements for the {TempoShortName}:

* Support for the ARM architecture.
* Support for span request count, duration, and error count (RED) metrics. The metrics can be visualized in the Jaeger console deployed as part of Tempo or in the web console in the *Observe* menu.

[id="distributed-tracing-rn_3-0_tempo-release-notes_bug-fixes"]
==== Bug fixes

This update introduces the following bug fixes for the {TempoShortName}:

* Before this update, the `TempoStack` CRD was not accepting a custom CA certificate despite the option to choose CA certificates. This update fixes support for the custom TLS CA option for connecting to object storage. (link:https://issues.redhat.com/browse/TRACING-3462[TRACING-3462])
* Before this update, when mirroring the {DTProductName} Operator images to a mirror registry for use in a disconnected cluster, the related Operator images for `tempo`, `tempo-gateway`, `opa-openshift`, and `tempo-query` were not mirrored. This update fixes support for disconnected environments when using the `oc adm catalog mirror` CLI command. (link:https://issues.redhat.com/browse/TRACING-3523[TRACING-3523])
* Before this update, the query frontend service of the {DTProductName} was using internal mTLS when the Gateway was not deployed. This caused endpoint failure errors. This update fixes mTLS when the Gateway is not deployed. (link:https://issues.redhat.com/browse/TRACING-3510[TRACING-3510])

[id="distributed-tracing-rn_3-0_tempo-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Currently, when used with the {TempoOperator}, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])
* Currently, the {TempoShortName} fails on the {ibm-z-title} (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545])
[id="distr-tracing-rn_2-9-2"]
== Release notes for {DTProductName} 2.9.2

[id="distr-tracing-rn_2-9-2_component-versions"]
=== Component versions in the {DTProductName} 2.9.2

[options="header"]
|===
|Operator |Component |Version
|{JaegerName}
|Jaeger
|1.47.0

|{TempoName}
|Tempo
|2.1.1
|===

=== CVEs

This release fixes link:https://bugzilla.redhat.com/show_bug.cgi?id=2246470[CVE-2023-46234].

[id="distr-tracing-rn_2-9-2_jaeger-release-notes"]
=== {JaegerName}

[id="distr-tracing-rn_2-9-2_jaeger-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Apache Spark is not supported.

ifndef::openshift-rosa[]
* The streaming deployment via AMQ/Kafka is unsupported on the {ibm-z-title} and {ibm-power-title} architectures.
endif::openshift-rosa[]
[id="distr-tracing-rn_2-9-2_tempo-release-notes"]
=== {TempoName}

:FeatureName: The {TempoName}
include::snippets/technology-preview.adoc[leveloffset=+1]

[id="distr-tracing-rn_2-9-2_tempo-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Currently, the custom TLS CA option is not implemented for connecting to object storage. (link:https://issues.redhat.com/browse/TRACING-3462[TRACING-3462])

* Currently, when used with the {TempoOperator}, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])

* Currently, the {TempoShortName} fails on the {ibm-z-title} (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545])

* Currently, the Tempo query frontend service must not use internal mTLS when the Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. (link:https://issues.redhat.com/browse/TRACING-3510[TRACING-3510])
+
.Workaround
+
Disable mTLS as follows:
+
. Open the {TempoOperator} ConfigMap for editing by running the following command:
+
[source,terminal]
----
$ oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator <1>
----
<1> The project where the {TempoOperator} is installed.

. Disable mTLS in the Operator configuration by updating the YAML file:
+
[source,yaml]
----
data:
  controller_manager_config.yaml: |
    featureGates:
      httpEncryption: false
      grpcEncryption: false
      builtInCertManagement:
        enabled: false
----

. Restart the {TempoOperator} pod by running the following command:
+
[source,terminal]
----
$ oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator
----
* Missing images for running the {TempoOperator} in restricted environments. The {TempoName} CSV is missing references to the operand images. (link:https://issues.redhat.com/browse/TRACING-3523[TRACING-3523])
|
||||
+
|
||||
.Workaround
|
||||
+
|
||||
Add the {TempoOperator} related images in the mirroring tool to mirror the images to the registry:
|
||||
+
|
||||
[source,yaml]
|
||||
----
|
||||
kind: ImageSetConfiguration
|
||||
apiVersion: mirror.openshift.io/v1alpha2
|
||||
archiveSize: 20
|
||||
storageConfig:
|
||||
local:
|
||||
path: /home/user/images
|
||||
mirror:
|
||||
operators:
|
||||
- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
|
||||
packages:
|
||||
- name: tempo-product
|
||||
channels:
|
||||
- name: stable
|
||||
additionalImages:
|
||||
- name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a
|
||||
- name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23
|
||||
- name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9
|
||||
- name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e
|
||||
----
|
||||
|
||||
[id="distr-tracing-rn_2-9-1"]
== Release notes for {DTProductName} 2.9.1

[id="distr-tracing-rn_2-9-1_component-versions"]
=== Component versions in the {DTProductName} 2.9.1

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.47.0

|{TempoName}
|Tempo
|2.1.1
|===

=== CVEs

This release fixes link:https://access.redhat.com/security/cve/cve-2023-44487[CVE-2023-44487].

[id="distr-tracing-rn_2-9-1_jaeger-release-notes"]
=== {JaegerName}

[id="distr-tracing-rn_2-9-1_jaeger-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Apache Spark is not supported.
ifndef::openshift-rosa[]

* The streaming deployment via AMQ/Kafka is unsupported on the {ibm-z-title} and {ibm-power-title} architectures.
endif::openshift-rosa[]
[id="distr-tracing-rn_2-9-1_tempo-release-notes"]
=== {TempoName}

:FeatureName: The {TempoName}
include::snippets/technology-preview.adoc[leveloffset=+1]

[id="distr-tracing-rn_2-9-1_tempo-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Currently, the custom TLS CA option is not implemented for connecting to object storage. (link:https://issues.redhat.com/browse/TRACING-3462[TRACING-3462])

* Currently, when used with the {TempoOperator}, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])

* Currently, the {TempoShortName} fails on the {ibm-z-title} (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545])

* Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. (link:https://issues.redhat.com/browse/TRACING-3510[TRACING-3510])
+
.Workaround
+
Disable mTLS as follows:
+
. Open the {TempoOperator} ConfigMap for editing by running the following command:
+
[source,terminal]
----
$ oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator <1>
----
<1> The project where the {TempoOperator} is installed.

. Disable the mTLS in the Operator configuration by updating the YAML file:
+
[source,yaml]
----
data:
  controller_manager_config.yaml: |
    featureGates:
      httpEncryption: false
      grpcEncryption: false
      builtInCertManagement:
        enabled: false
----

. Restart the {TempoOperator} pod by running the following command:
+
[source,terminal]
----
$ oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator
----

* Missing images for running the {TempoOperator} in restricted environments. The {TempoName} CSV is missing references to the operand images. (link:https://issues.redhat.com/browse/TRACING-3523[TRACING-3523])
+
.Workaround
+
Add the {TempoOperator} related images in the mirroring tool to mirror the images to the registry:
+
[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 20
storageConfig:
  local:
    path: /home/user/images
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
    packages:
    - name: tempo-product
      channels:
      - name: stable
  additionalImages:
  - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a
  - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23
  - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9
  - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e
----
[id="distr-tracing-rn_2-9"]
== Release notes for {DTProductName} 2.9

[id="distr-tracing-rn_2-9_component-versions"]
=== Component versions in the {DTProductName} 2.9

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.47.0

|{TempoName}
|Tempo
|2.1.1
|===

[id="distr-tracing-rn_2-9_jaeger-release-notes"]
=== {JaegerName}

////
[id="distr-tracing-rn_2-9_jaeger-release-notes_new-features-and-enhancements"]
==== New features and enhancements
* None.
////

////
[id="technology-preview-features_jaeger-release-notes_distributed-tracing-rn-2-9"]
==== Technology Preview features
None.
////

[id="distr-tracing-rn_2-9_jaeger-release-notes_bug-fixes"]
==== Bug fixes

* Before this update, the connection was refused due to a missing gRPC port on the `jaeger-query` deployment. This issue resulted in the `transport: Error while dialing: dial tcp :16685: connect: connection refused` error message. With this update, the Jaeger Query gRPC port (16685) is successfully exposed on the Jaeger Query service. (link:https://issues.redhat.com/browse/TRACING-3322[TRACING-3322])

* Before this update, the wrong port was exposed for `jaeger-production-query`, resulting in a refused connection. With this update, the issue is fixed by exposing the Jaeger Query gRPC port (16685) on the Jaeger Query deployment. (link:https://issues.redhat.com/browse/TRACING-2968[TRACING-2968])

* Before this update, when deploying {SMProductShortName} on {sno} clusters in disconnected environments, the Jaeger pod frequently went into the `Pending` state. With this update, the issue is fixed. (link:https://issues.redhat.com/browse/TRACING-3312[TRACING-3312])

* Before this update, the Jaeger Operator pod restarted with the default memory value due to the `reason: OOMKilled` error message. With this update, this issue is fixed by removing the resource limits. (link:https://issues.redhat.com/browse/TRACING-3173[TRACING-3173])

[id="distr-tracing-rn_2-9_jaeger-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Apache Spark is not supported.
ifndef::openshift-rosa[]

* The streaming deployment via AMQ/Kafka is unsupported on the {ibm-z-title} and {ibm-power-title} architectures.
endif::openshift-rosa[]
[id="distr-tracing-rn_2-9_tempo-release-notes"]
=== {TempoName}

:FeatureName: The {TempoName}
include::snippets/technology-preview.adoc[leveloffset=+1]

[id="distr-tracing-rn_2-9_tempo-release-notes_new-features-and-enhancements"]
==== New features and enhancements

This release introduces the following enhancements for the {TempoShortName}:

* Support the link:https://operatorframework.io/operator-capabilities/[operator maturity] Level IV, Deep Insights, which enables upgrading, monitoring, and alerting of the TempoStack instances and the {TempoOperator}.

* Add Ingress and Route configuration for the Gateway.

* Support the `managed` and `unmanaged` states in the `TempoStack` custom resource.

* Expose the following additional ingestion protocols in the Distributor service: Jaeger Thrift binary, Jaeger Thrift compact, Jaeger gRPC, and Zipkin. When the Gateway is enabled, only the OpenTelemetry protocol (OTLP) gRPC is enabled.

* Expose the Jaeger Query gRPC endpoint on the Query Frontend service.

* Support multitenancy without Gateway authentication and authorization.

////
[id="distributed-tracing-rn_2-9_tempo-release-notes_technology-preview-features"]
=== Technology Preview features
None.
////

[id="distr-tracing-rn_2-9_tempo-release-notes_bug-fixes"]
==== Bug fixes

* Before this update, the {TempoOperator} was not compatible with disconnected environments. With this update, the {TempoOperator} supports disconnected environments. (link:https://issues.redhat.com/browse/TRACING-3145[TRACING-3145])

* Before this update, the {TempoOperator} with TLS failed to start on {product-title}. With this update, the mTLS communication is enabled between Tempo components, the Operand starts successfully, and the Jaeger UI is accessible. (link:https://issues.redhat.com/browse/TRACING-3091[TRACING-3091])

* Before this update, the resource limits from the {TempoOperator} caused error messages such as `reason: OOMKilled`. With this update, the resource limits for the {TempoOperator} are removed to avoid such errors. (link:https://issues.redhat.com/browse/TRACING-3204[TRACING-3204])
[id="distr-tracing-rn_2-9_tempo-release-notes_known-issues"]
==== Known issues

//There is currently a known issue:
There are currently known issues:

* Currently, the custom TLS CA option is not implemented for connecting to object storage. (link:https://issues.redhat.com/browse/TRACING-3462[TRACING-3462])

* Currently, when used with the {TempoOperator}, the Jaeger UI only displays services that have sent traces in the last 15 minutes. For services that did not send traces in the last 15 minutes, traces are still stored but not displayed in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])

* Currently, the {TempoShortName} fails on the {ibm-z-title} (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545])

* Currently, the Tempo query frontend service must not use internal mTLS when Gateway is not deployed. This issue does not affect the Jaeger Query API. The workaround is to disable mTLS. (link:https://issues.redhat.com/browse/TRACING-3510[TRACING-3510])
+
.Workaround
+
Disable mTLS as follows:
+
. Open the {TempoOperator} ConfigMap for editing by running the following command:
+
[source,terminal]
----
$ oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator <1>
----
<1> The project where the {TempoOperator} is installed.

. Disable the mTLS in the Operator configuration by updating the YAML file:
+
[source,yaml]
----
data:
  controller_manager_config.yaml: |
    featureGates:
      httpEncryption: false
      grpcEncryption: false
      builtInCertManagement:
        enabled: false
----

. Restart the {TempoOperator} pod by running the following command:
+
[source,terminal]
----
$ oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator
----

* Missing images for running the {TempoOperator} in restricted environments. The {TempoName} CSV is missing references to the operand images. (link:https://issues.redhat.com/browse/TRACING-3523[TRACING-3523])
+
.Workaround
+
Add the {TempoOperator} related images in the mirroring tool to mirror the images to the registry:
+
[source,yaml]
----
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 20
storageConfig:
  local:
    path: /home/user/images
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13
    packages:
    - name: tempo-product
      channels:
      - name: stable
  additionalImages:
  - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a
  - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23
  - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9
  - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e
----
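+
After saving the configuration, you might run the mirroring with the `oc-mirror` plugin, for example as follows. The configuration file name and the target registry host are placeholders for this sketch:
+
[source,terminal]
----
$ oc mirror --config=imageset-config.yaml docker://registry.example.com:5000
----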
[id="distr-tracing-rn_2-8"]
== Release notes for {DTProductName} 2.8

[id="distr-tracing-rn_2-8_component-versions"]
=== Component versions in the {DTProductName} 2.8

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.42

|{TempoName}
|Tempo
|0.1.0
|===

[id="distr-tracing-rn_2-8_technology-preview-features"]
=== Technology Preview features

This release introduces support for the {TempoName} as a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview] feature for {DTProductName}.

:FeatureName: The {TempoName}
include::snippets/technology-preview.adoc[leveloffset=+1]

The feature uses version 0.1.0 of the {TempoName} and version 2.0.1 of the upstream {TempoShortName} components.

You can use the {TempoShortName} to replace Jaeger so that you can use S3-compatible storage instead of Elasticsearch.
Most users who use the {TempoShortName} instead of Jaeger will not notice any difference in functionality because the {TempoShortName} supports the same ingestion and query protocols as Jaeger and uses the same user interface.

If you enable this Technology Preview feature, note the following limitations of the current implementation:

* The {TempoShortName} currently does not support disconnected installations. (link:https://issues.redhat.com/browse/TRACING-3145[TRACING-3145])

* When you use the Jaeger user interface (UI) with the {TempoShortName}, the Jaeger UI lists only services that have sent traces within the last 15 minutes. For services that have not sent traces within the last 15 minutes, those traces are still stored even though they are not visible in the Jaeger UI. (link:https://issues.redhat.com/browse/TRACING-3139[TRACING-3139])

Expanded support for the {TempoOperator} is planned for future releases of the {DTProductName}.
Possible additional features might include support for TLS authentication, multitenancy, and multiple clusters.
For more information about the {TempoOperator}, see the link:https://tempo-operator.netlify.app[Tempo community documentation].
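The S3-compatible storage is configured through a secret. The following sketch shows the general shape of such a secret; the secret name, endpoint, bucket, and credential values are all placeholders that you must replace with values for your storage:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: tempo-object-storage
stringData:
  endpoint: https://s3.example.com
  bucket: tempo-traces
  access_key_id: <access_key_id>
  access_key_secret: <access_key_secret>
type: Opaque
----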
[id="distr-tracing-rn_2-8_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

[id="distr-tracing-rn_2-7"]
== Release notes for {DTProductName} 2.7

[id="distr-tracing-rn_2-7_component-versions"]
=== Component versions in the {DTProductName} 2.7

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.39
|===

[id="distr-tracing-rn_2-7_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="distr-tracing-rn_2-6"]
== Release notes for {DTProductName} 2.6

[id="distr-tracing-rn_2-6_component-versions"]
=== Component versions in the {DTProductName} 2.6

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.38
|===

[id="distr-tracing-rn_2-6_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="distr-tracing-rn_2-5"]
== Release notes for {DTProductName} 2.5

[id="distr-tracing-rn_2-5_component-versions"]
=== Component versions in the {DTProductName} 2.5

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.36
|===

[id="distr-tracing-rn_2-5_new-features-and-enhancements"]
=== New features and enhancements

This release introduces support for ingesting OpenTelemetry protocol (OTLP) to the {JaegerName} Operator.
The Operator now automatically enables the OTLP ports:

* Port 4317 for the OTLP gRPC protocol.
* Port 4318 for the OTLP HTTP protocol.
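As an illustration only, with these ports enabled you can post OTLP/HTTP trace data to the standard `/v1/traces` path on port 4318. The collector host name in the following command is a placeholder:

[source,terminal]
----
$ curl -X POST http://jaeger-collector.example.com:4318/v1/traces \
    -H "Content-Type: application/json" \
    -d '{"resourceSpans": []}'
----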
This release also adds support for collecting Kubernetes resource attributes to the {OTELName} Operator.

[id="distr-tracing-rn_2-5_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="distr-tracing-rn_2-4"]
== Release notes for {DTProductName} 2.4

[id="distr-tracing-rn_2-4_component-versions"]
=== Component versions in the {DTProductName} 2.4

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.34.1
|===

[id="distr-tracing-rn_2-4_new-features-and-enhancements"]
=== New features and enhancements

This release adds support for auto-provisioning certificates using the {es-op}.

Certificates are self-provisioned when the {JaegerName} Operator calls the {es-op} during installation.

[IMPORTANT]
====
When upgrading to the {DTProductName} 2.4, the Operator recreates the Elasticsearch instance, which might take five to ten minutes. Distributed tracing is unavailable during that period.
====

[id="distr-tracing-rn_2-4_technology-preview-features"]
=== Technology Preview features

Creating the Elasticsearch instance and certificates first and then configuring the {JaegerShortName} to use the certificate is a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview] for this release.

[id="distr-tracing-rn_2-4_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="distr-tracing-rn_2-3"]
== Release notes for {DTProductName} 2.3

[id="distr-tracing-rn_2-3_component-versions_2-3-1"]
=== Component versions in the {DTProductName} 2.3.1

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.30.2
|===

[id="distr-tracing-rn_2-3_component-versions_2-3-0"]
=== Component versions in the {DTProductName} 2.3.0

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.30.1
|===

[id="distr-tracing-rn_2-3_new-features-and-enhancements"]
=== New features and enhancements

With this release, the {JaegerName} Operator is now installed to the `openshift-distributed-tracing` namespace by default. Before this update, the default installation was in the `openshift-operators` namespace.

[id="distr-tracing-rn_2-3_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

[id="distr-tracing-rn_2-2"]
== Release notes for {DTProductName} 2.2

[id="distr-tracing-rn_2-2_technology-preview-features"]
=== Technology Preview features

The unsupported OpenTelemetry Collector components included in the 2.1 release are removed.

[id="distr-tracing-rn_2-2_bug-fixes"]
=== Bug fixes

This release of the {DTProductName} addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="distr-tracing-rn_2-1"]
== Release notes for {DTProductName} 2.1

[id="distr-tracing-rn_2-1_component-versions"]
=== Component versions in the {DTProductName} 2.1

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.29.1
|===

[id="distr-tracing-rn_2-1_technology-preview-features"]
=== Technology Preview features

* This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the `ca_file` moves under `tls` in the custom resource, as shown in the following examples.
+
.CA file configuration for OpenTelemetry version 0.33
+
[source,yaml]
----
spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----
+
.CA file configuration for OpenTelemetry version 0.41.1
+
[source,yaml]
----
spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----

[id="distr-tracing-rn_2-1_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="distr-tracing-rn_2-0"]
== Release notes for {DTProductName} 2.0

[id="distr-tracing-rn_2-0_component-versions"]
=== Component versions in the {DTProductName} 2.0

[options="header"]
|===
|Operator |Component |Version

|{JaegerName}
|Jaeger
|1.28.0
|===

[id="distr-tracing-rn_2-0_new-features-and-enhancements"]
=== New features and enhancements

This release introduces the following new features and enhancements:

* Rebrands Red Hat OpenShift Jaeger as the {DTProductName}.

* Updates the {JaegerName} Operator to Jaeger 1.28. Going forward, the {DTProductName} will only support the `stable` Operator channel.
Channels for individual releases are no longer supported.

* Adds support for the OpenTelemetry protocol (OTLP) to the Query service.

* Introduces a new distributed tracing icon that appears in the OperatorHub.

* Includes rolling updates to the documentation to support the name change and new features.

[id="distr-tracing-rn_2-0_technology-preview-features"]
=== Technology Preview features

This release adds the {OTELName} as a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview], which you install using the {OTELName} Operator. {OTELName} is based on the link:https://opentelemetry.io/[OpenTelemetry] APIs and instrumentation. The {OTELName} includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the {DTProductName}. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor-agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling.
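As an illustrative sketch only, a minimal Collector that receives OTLP traces and forwards them to a Jaeger collector might look as follows. The instance name and the Jaeger endpoint are placeholders, and the exact configuration syntax depends on the Collector version:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [jaeger]
----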
[id="distr-tracing-rn_2-0_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

include::modules/support.adoc[leveloffset=+1]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
1
observability/distr_tracing/distr_tracing_rn/images
Symbolic link
@@ -0,0 +1 @@
../../images/

1
observability/distr_tracing/distr_tracing_rn/modules
Symbolic link
@@ -0,0 +1 @@
../../modules/

1
observability/distr_tracing/distr_tracing_rn/snippets
Symbolic link
@@ -0,0 +1 @@
../../snippets/

1
observability/distr_tracing/distr_tracing_tempo/_attributes
Symbolic link
@@ -0,0 +1 @@
../../_attributes/
@@ -0,0 +1,45 @@
:_mod-docs-content-type: ASSEMBLY
[id="distr-tracing-tempo-configuring"]
= Configuring and deploying the {TempoShortName}
include::_attributes/common-attributes.adoc[]
:context: distr-tracing-tempo-configuring

toc::[]

The {TempoOperator} uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {TempoShortName} resources. You can install the default configuration or modify the file.

[id="customizing-your-tempo-deployment"]
== Customizing your deployment

ifdef::openshift-enterprise,openshift-dedicated[]
For information about configuring the back-end storage, see xref:../../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage] and the appropriate configuration topic for your chosen storage option.
endif::[]

include::modules/distr-tracing-tempo-config-default.adoc[leveloffset=+2]

include::modules/distr-tracing-tempo-config-storage.adoc[leveloffset=+2]

include::modules/distr-tracing-tempo-config-query-frontend.adoc[leveloffset=+2]

[role="_additional-resources"]
[id="additional-resources_distr-tracing-tempo-configuring-query-frontend"]
==== Additional resources
* xref:../../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-about_nodes-scheduler-taints-tolerations[Understanding taints and tolerations]

include::modules/distr-tracing-tempo-config-spanmetrics.adoc[leveloffset=+2]

include::modules/distr-tracing-tempo-config-multitenancy.adoc[leveloffset=+2]

[id="setting-up-monitoring-for-tempo"]
== Setting up monitoring for the {TempoShortName}

The {TempoOperator} supports monitoring and alerting of each TempoStack component, such as the distributor and ingester, and exposes upgrade and operational metrics about the Operator itself.
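As a sketch of what the following module configures, TempoStack metrics and alerting rules can be enabled through the `observability` section of the `TempoStack` custom resource. The field names here reflect the upstream Tempo Operator API, and the instance name is a placeholder:

[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
spec:
  observability:
    metrics:
      createServiceMonitors: true
      createPrometheusRules: true
----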
include::modules/distr-tracing-tempo-configuring-tempostack-metrics-and-alerts.adoc[leveloffset=+2]

[role="_additional-resources"]
[id="additional-resources_distr-tracing-tempo-configuring-tempostack-metrics-and-alerts"]
==== Additional resources
* xref:../../../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]

include::modules/distr-tracing-tempo-configuring-tempooperator-metrics-and-alerts.adoc[leveloffset=+2]
@@ -0,0 +1,31 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-tempo-installing"]
= Installing the {TempoShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-tempo-installing

toc::[]

Installing the {TempoShortName} involves the following steps:

. Setting up supported object storage.
. Installing the {TempoOperator}.
. Creating a secret for the object storage credentials.
. Creating a namespace for a TempoStack instance.
. Creating a `TempoStack` custom resource to deploy at least one TempoStack instance.
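The final step can be sketched as the following minimal `TempoStack` custom resource. The instance name, namespace, secret name, and storage size are placeholders that must match your environment:

[source,yaml]
----
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
  namespace: tempo-ns
spec:
  storageSize: 1Gi
  storage:
    secret:
      name: object-storage-secret
      type: s3
----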
include::modules/distr-tracing-tempo-storage-ref.adoc[leveloffset=+1]

include::modules/distr-tracing-tempo-install-web-console.adoc[leveloffset=+1]

include::modules/distr-tracing-tempo-install-cli.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_dist-tracing-tempo-installing"]
== Additional resources
* xref:../../../post_installation_configuration/preparing-for-users.adoc#creating-cluster-admin_post-install-preparing-for-users[Creating a cluster admin]
* link:https://operatorhub.io/[OperatorHub.io]
* xref:../../../web_console/web-console.adoc#web-console[Accessing the web console]
* xref:../../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-from-operatorhub-using-web-console_olm-adding-operators-to-a-cluster[Installing from OperatorHub using the web console]
* xref:../../../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Creating applications from installed Operators]
* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
@@ -0,0 +1,24 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-tempo-removing"]
= Removing the {TempoName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-tempo-removing

toc::[]

The steps for removing the {TempoName} from an {product-title} cluster are as follows:

. Shut down all {TempoShortName} pods.
. Remove any TempoStack instances.
. Remove the {TempoOperator}.
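For example, assuming a TempoStack instance named `simplest` in the `tempo-ns` namespace (both placeholder names), an instance can be removed from the command line by deleting the custom resource by its fully qualified name:

[source,terminal]
----
$ oc delete tempostacks.tempo.grafana.com simplest -n tempo-ns
----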
include::modules/distr-tracing-tempo-remove-web-console.adoc[leveloffset=+1]

include::modules/distr-tracing-tempo-remove-cli.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_dist-tracing-tempo-removing"]
== Additional resources

* xref:../../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster]
* xref:../../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
@@ -0,0 +1,19 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-tempo-updating"]
= Updating the {TempoShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-tempo-updating

toc::[]

For version upgrades, the {TempoOperator} uses the Operator Lifecycle Manager (OLM), which controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.

OLM runs by default in {product-title}. OLM queries for available Operators as well as upgrades for installed Operators.

When the {TempoOperator} is upgraded to a new version, it scans for running TempoStack instances that it manages and upgrades them to the version that corresponds to the Operator's new version.

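As a sketch of how OLM drives these upgrades, the update channel and approval strategy for an Operator are declared in its `Subscription` object. The package name and namespace below are illustrative assumptions, not required values:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: tempo-product # assumed package name
  namespace: openshift-tempo-operator # assumed namespace
spec:
  channel: stable # OLM follows this channel for upgrades
  installPlanApproval: Automatic # upgrades apply without manual approval
  name: tempo-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----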
[role="_additional-resources"]
[id="additional-resources_dist-tracing-tempo-updating"]
== Additional resources

* xref:../../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources]
* xref:../../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
1
observability/distr_tracing/distr_tracing_tempo/images
Symbolic link
@@ -0,0 +1 @@
../../images/
1
observability/distr_tracing/distr_tracing_tempo/modules
Symbolic link
@@ -0,0 +1 @@
../../modules/
1
observability/distr_tracing/distr_tracing_tempo/snippets
Symbolic link
@@ -0,0 +1 @@
../../snippets/
1
observability/distr_tracing/images
Symbolic link
@@ -0,0 +1 @@
../images
1
observability/distr_tracing/modules
Symbolic link
@@ -0,0 +1 @@
../modules/
1
observability/distr_tracing/snippets
Symbolic link
@@ -0,0 +1 @@
../snippets/
@@ -38,14 +38,14 @@ For more information, see xref:../logging/cluster-logging.adoc#cluster-logging[A
== Distributed tracing
Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use it for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications.

For more information, see xref:../distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distributed-tracing-architecture[Distributed tracing architecture].
For more information, see xref:distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distributed-tracing-architecture[Distributed tracing architecture].
//after the file is added to the observability directory, update xref path to ../observability/distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc#distributed-tracing-architecture[Distributed tracing architecture].

[id="otel-release-notes-index"]
== {OTELName}
Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open-source back ends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate.

For more information, see xref:../otel/otel_rn/otel-rn-3.1.adoc[{OTELName}].
For more information, see xref:otel/otel_rn/otel-rn-3.1.adoc[{OTELName}].
//possibly outdated comment: after the file is added to the observability directory, update xref path to ../observability/otel/otel-release-notes.adoc#otel-release-notes[{OTELName} release notes].

[id="network-observability-overview-index"]

1
observability/otel/_attributes
Symbolic link
@@ -0,0 +1 @@
../_attributes/
1
observability/otel/images
Symbolic link
@@ -0,0 +1 @@
../images/
1
observability/otel/modules
Symbolic link
@@ -0,0 +1 @@
../modules/
263
observability/otel/otel-config-multicluster.adoc
Normal file
@@ -0,0 +1,263 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-gathering-observability-data-from-multiple-clusters"]
= Gathering the observability data from multiple clusters
include::_attributes/common-attributes.adoc[]
:context: otel-gathering-observability-data-from-multiple-clusters

For a multicluster configuration, you can create one OpenTelemetry Collector instance in each of the remote clusters and then forward all the telemetry data to one central OpenTelemetry Collector instance.

.Prerequisites

* The {OTELOperator} is installed.
* The {TempoOperator} is installed.
* A TempoStack instance is deployed on the cluster.
* The following certificates are mounted: an issuer, a self-signed certificate, a CA issuer, and client and server certificates. To create any of these certificates, see step 1.

.Procedure

. Mount the following certificates in the OpenTelemetry Collector instance, skipping any certificates that are already mounted.

.. An Issuer to generate the certificates by using the {cert-manager-operator}.
+
[source,yaml]
----
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
----

.. A self-signed certificate.
+
[source,yaml]
----
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ca
spec:
  isCA: true
  commonName: ca
  subject:
    organizations:
      - Organization # <your_organization_name>
    organizationalUnits:
      - Widgets
  secretName: ca-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
----

.. A CA issuer.
+
[source,yaml]
----
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-ca-issuer
spec:
  ca:
    secretName: ca-secret
----

.. The client and server certificates.
+
[source,yaml]
----
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: server
spec:
  secretName: server-tls
  isCA: false
  usages:
    - server auth
    - client auth
  dnsNames:
    - "otel.observability.svc.cluster.local" # <1>
  issuerRef:
    name: test-ca-issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: client
spec:
  secretName: client-tls
  isCA: false
  usages:
    - server auth
    - client auth
  dnsNames:
    - "otel.observability.svc.cluster.local" # <2>
  issuerRef:
    name: test-ca-issuer
----
<1> List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance.
<2> List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance.

. Create a service account for the OpenTelemetry Collector instance.
+
.Example ServiceAccount
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
----

. Create a cluster role for the service account.
+
.Example ClusterRole
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
# <1>
# <2>
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
----
<1> The `k8sattributesprocessor` requires permissions for pods and namespaces resources.
<2> The `resourcedetectionprocessor` requires permissions for infrastructures and status.

. Bind the cluster role to the service account.
+
.Example ClusterRoleBinding
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: otel-collector-<example>
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
----

. Create the YAML file to define the `OpenTelemetryCollector` custom resource (CR) in the edge clusters.
+
.Example `OpenTelemetryCollector` custom resource for the edge clusters
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: otel-collector-<example>
spec:
  mode: daemonset
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      opencensus:
      otlp:
        protocols:
          grpc:
          http:
      zipkin:
    processors:
      batch:
      k8sattributes:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlphttp:
        endpoint: https://observability-cluster.com:443 # <1>
        tls:
          insecure: false
          cert_file: /certs/server.crt
          key_file: /certs/server.key
          ca_file: /certs/ca.crt
    service:
      pipelines:
        traces:
          receivers: [jaeger, opencensus, otlp, zipkin]
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlphttp]
  volumes:
    - name: otel-certs
      secret:
        name: otel-certs
  volumeMounts:
    - name: otel-certs
      mountPath: /certs
----
<1> The Collector exporter is configured to export OTLP over HTTP and points to the OpenTelemetry Collector in the central cluster.

. Create the YAML file to define the `OpenTelemetryCollector` custom resource (CR) in the central cluster.
+
.Example `OpenTelemetryCollector` custom resource for the central cluster
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otlp-receiver
  namespace: observability
spec:
  mode: "deployment"
  ingress:
    type: route
    route:
      termination: "passthrough"
  config: |
    receivers:
      otlp:
        protocols:
          http:
            tls: # <1>
              cert_file: /certs/server.crt
              key_file: /certs/server.key
              client_ca_file: /certs/ca.crt
    exporters:
      logging:
      otlp:
        endpoint: "tempo-<simplest>-distributor:4317" # <2>
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [otlp]
  volumes:
    - name: otel-certs
      secret:
        name: otel-certs
  volumeMounts:
    - name: otel-certs
      mountPath: /certs
----
<1> The Collector receiver requires the certificates listed in the first step.
<2> The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, `"tempo-<simplest>-distributor:4317"` in this example, which must already exist.
@@ -0,0 +1,65 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-configuration-for-sending-metrics-to-the-monitoring-stack"]
= Configuration for sending metrics to the monitoring stack
include::_attributes/common-attributes.adoc[]
:context: otel-configuration-for-sending-metrics-to-the-monitoring-stack

The OpenTelemetry Collector custom resource (CR) can be configured to create a Prometheus `ServiceMonitor` CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters.

.Example of the OpenTelemetry Collector custom resource with the Prometheus exporter
[source,yaml]
----
spec:
  mode: deployment
  observability:
    metrics:
      enableMetrics: true # <1>
  config: |
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889
        resource_to_telemetry_conversion:
          enabled: true # by default resource attributes are dropped
    service:
      telemetry:
        metrics:
          address: ":8888"
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheus]
----
<1> Configures the Operator to create the Prometheus `ServiceMonitor` CR to scrape the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints. The metrics are stored in the OpenShift monitoring stack.

Alternatively, a manually created Prometheus `PodMonitor` CR can provide finer control, for example to remove duplicated labels that are added during Prometheus scraping.

.Example of the `PodMonitor` custom resource that configures the monitoring stack to scrape the Collector metrics
[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: <cr_name>-collector # <1>
  podMetricsEndpoints:
  - port: metrics # <2>
  - port: promexporter # <3>
    relabelings:
    - action: labeldrop
      regex: pod
    - action: labeldrop
      regex: container
    - action: labeldrop
      regex: endpoint
    metricRelabelings:
    - action: labeldrop
      regex: instance
    - action: labeldrop
      regex: job
----
<1> The name of the OpenTelemetry Collector custom resource.
<2> The name of the internal metrics port for the OpenTelemetry Collector. This port name is always `metrics`.
<3> The name of the Prometheus exporter port for the OpenTelemetry Collector.
@@ -0,0 +1,14 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-configuration-of-instrumentation"]
= Configuration of the instrumentation
include::_attributes/common-attributes.adoc[]
:context: otel-configuration-of-instrumentation

toc::[]

:FeatureName: OpenTelemetry instrumentation injection
include::snippets/technology-preview.adoc[leveloffset=+1]

The {OTELName} Operator uses a custom resource definition (CRD) file that defines the configuration of the instrumentation.

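As an illustration, a minimal `Instrumentation` custom resource might look like the following sketch. The name, endpoint, and sampling ratio are assumptions for this example, not defaults:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation # assumed name
spec:
  exporter:
    endpoint: http://otel-collector:4317 # assumed in-cluster Collector endpoint
  sampler:
    type: parentbased_traceidratio
    argument: "0.25" # sample roughly a quarter of traces
----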
include::modules/otel-config-instrumentation.adoc[leveloffset=+1]
13
observability/otel/otel-configuration-of-otel-collector.adoc
Normal file
@@ -0,0 +1,13 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-configuration-of-otel-collector"]
= Configuration of the OpenTelemetry Collector
include::_attributes/common-attributes.adoc[]
:context: otel-configuration-of-otel-collector

toc::[]

The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELShortName} resources. You can install the default configuration or modify the file.

include::modules/otel-collector-config-options.adoc[leveloffset=+1]
include::modules/otel-collector-components.adoc[leveloffset=+1]
include::modules/otel-config-target-allocator.adoc[leveloffset=+1]
40
observability/otel/otel-configuring-otelcol-metrics.adoc
Normal file
@@ -0,0 +1,40 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-configuring-metrics"]
= Configuring the OpenTelemetry Collector metrics
include::_attributes/common-attributes.adoc[]
:context: otel-configuring-metrics

//[id="setting-up-monitoring-for-otel"]
//== Setting up monitoring for the {OTELShortName}
//The {OTELOperator} supports monitoring and alerting of each OpenTelemetry Collector instance and exposes upgrade and operational metrics about the Operator itself.

You can enable metrics and alerts of OpenTelemetry Collector instances.

.Prerequisites

* Monitoring for user-defined projects is enabled in the cluster.

.Procedure

* To enable metrics of an OpenTelemetry Collector instance, set the `spec.observability.metrics.enableMetrics` field to `true`:
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: <name>
spec:
  observability:
    metrics:
      enableMetrics: true
----

.Verification

You can use the *Administrator* view of the web console to verify successful configuration:

* Go to *Observe* -> *Targets*, filter by *Source: User*, and check that the *ServiceMonitors* in the `opentelemetry-collector-<instance_name>` format have the *Up* status.

.Additional resources
* xref:../../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
147
observability/otel/otel-forwarding.adoc
Normal file
@@ -0,0 +1,147 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-forwarding-traces"]
= Forwarding traces to a TempoStack
include::_attributes/common-attributes.adoc[]
:context: otel-forwarding-traces

To configure forwarding of traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in _Additional resources_.

.Prerequisites

* The {OTELOperator} is installed.
* The {TempoOperator} is installed.
* A TempoStack instance is deployed on the cluster.

.Procedure

. Create a service account for the OpenTelemetry Collector.
+
.Example ServiceAccount
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: otel-collector-deployment
----

. Create a cluster role for the service account.
+
.Example ClusterRole
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
# <1>
# <2>
- apiGroups: ["", "config.openshift.io"]
  resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"]
  verbs: ["get", "watch", "list"]
----
<1> The `k8sattributesprocessor` requires permissions for pods and namespaces resources.
<2> The `resourcedetectionprocessor` requires permissions for infrastructures and status.

. Bind the cluster role to the service account.
+
.Example ClusterRoleBinding
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
- kind: ServiceAccount
  name: otel-collector-deployment
  namespace: otel-collector-example
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
----

. Create the YAML file to define the `OpenTelemetryCollector` custom resource (CR).
+
.Example OpenTelemetryCollector
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: deployment
  serviceAccount: otel-collector-deployment
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      opencensus:
      otlp:
        protocols:
          grpc:
          http:
      zipkin:
    processors:
      batch:
      k8sattributes:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
      resourcedetection:
        detectors: [openshift]
    exporters:
      otlp:
        endpoint: "tempo-simplest-distributor:4317" # <1>
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger, opencensus, otlp, zipkin] # <2>
          processors: [memory_limiter, k8sattributes, resourcedetection, batch]
          exporters: [otlp]
----
<1> The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, `"tempo-simplest-distributor:4317"` in this example, which is already created.
<2> The Collector is configured with receivers for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol.

[TIP]
====
You can deploy `tracegen` as a test:
[source,yaml]
----
apiVersion: batch/v1
kind: Job
metadata:
  name: tracegen
spec:
  template:
    spec:
      containers:
        - name: tracegen
          image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/tracegen:latest
          command:
            - "./tracegen"
          args:
            - -otlp-endpoint=otel-collector:4317
            - -otlp-insecure
            - -duration=30s
            - -workers=1
      restartPolicy: Never
  backoffLimit: 4
----
====

[role="_additional-resources"]
.Additional resources

* link:https://opentelemetry.io/docs/collector/[OpenTelemetry Collector documentation]
* link:https://github.com/os-observability/redhat-rhosdt-samples[Deployment examples on GitHub]
27
observability/otel/otel-installing.adoc
Normal file
@@ -0,0 +1,27 @@
:_mod-docs-content-type: ASSEMBLY
[id="install-otel"]
= Installing the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: install-otel

toc::[]

Installing the {OTELShortName} involves the following steps:

. Installing the {OTELOperator}.
. Creating a namespace for an OpenTelemetry Collector instance.
. Creating an `OpenTelemetryCollector` custom resource to deploy the OpenTelemetry Collector instance.

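The namespace and `OpenTelemetryCollector` steps above can be sketched minimally as follows. This is an illustrative sketch: the instance name and namespace are assumptions, and the `logging` exporter only prints received telemetry:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel # assumed name
  namespace: observability # assumed namespace created in the previous step
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]
----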
include::modules/otel-install-web-console.adoc[leveloffset=+1]

include::modules/otel-install-cli.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_otel-installing"]
== Additional resources
* xref:../../post_installation_configuration/preparing-for-users.adoc#creating-cluster-admin_post-install-preparing-for-users[Creating a cluster admin]
* link:https://operatorhub.io/[OperatorHub.io]
* xref:../../web_console/web-console.adoc#web-console[Accessing the web console]
* xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-from-operatorhub-using-web-console_olm-adding-operators-to-a-cluster[Installing from OperatorHub using the web console]
* xref:../../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[Creating applications from installed Operators]
* xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
20
observability/otel/otel-migrating.adoc
Normal file
@@ -0,0 +1,20 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-otel-migrating"]
= Migrating from the {JaegerShortName} to the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-otel-migrating

toc::[]

:FeatureName: The {JaegerName}
include::modules/deprecated-feature.adoc[]

If you are already using the {JaegerName} for your applications, you can migrate to the {OTELName}, which is based on the link:https://opentelemetry.io/[OpenTelemetry] open-source project.

The {OTELShortName} provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the {OTELShortName} can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications.

Migration from the {JaegerShortName} to the {OTELShortName} requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate deployments with or without sidecars.

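Because the Collector can ingest the Jaeger protocol, a migration can be sketched as a Collector that receives traffic from the existing Jaeger SDKs and exports OTLP. The resource name and Tempo endpoint below are assumptions for this sketch:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-jaeger-bridge # assumed name
spec:
  mode: deployment
  config: |
    receivers:
      jaeger: # accepts traffic from unmodified Jaeger SDKs
        protocols:
          grpc:
          thrift_http:
    exporters:
      otlp:
        endpoint: tempo-simplest-distributor:4317 # assumed Tempo endpoint
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          exporters: [otlp]
----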
include::modules/otel-migrating-from-jaeger-with-sidecars.adoc[leveloffset=+1]

include::modules/otel-migrating-from-jaeger-without-sidecars.adoc[leveloffset=+1]
24
observability/otel/otel-removing.adoc
Normal file
@@ -0,0 +1,24 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-otel-removing"]
= Removing the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-otel-removing

toc::[]

The steps for removing the {OTELShortName} from an {product-title} cluster are as follows:

. Shut down all {OTELShortName} pods.
. Remove any OpenTelemetryCollector instances.
. Remove the {OTELOperator}.

include::modules/otel-remove-web-console.adoc[leveloffset=+1]

include::modules/otel-remove-cli.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_dist-tracing-otel-removing"]
== Additional resources

* xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster]
* xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI]
@@ -0,0 +1,15 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-sending-traces-and-metrics-to-otel-collector"]
= Sending traces and metrics to the OpenTelemetry Collector
include::_attributes/common-attributes.adoc[]
:context: otel-sending-traces-and-metrics-to-otel-collector

toc::[]

You can set up and use the {OTELShortName} to send traces to the OpenTelemetry Collector or the TempoStack.

Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection.

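With sidecar injection, a sidecar-mode Collector is paired with a pod annotation that opts the workload in. This is a sketch under stated assumptions: the Collector name, pod name, and container image are placeholders:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-sidecar # assumed name
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
---
# The application pod opts in to injection with an annotation:
apiVersion: v1
kind: Pod
metadata:
  name: my-app # assumed application pod
  annotations:
    sidecar.opentelemetry.io/inject: "true"
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest # assumed image
----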
include::modules/otel-send-traces-and-metrics-to-otel-collector-with-sidecar.adoc[leveloffset=+1]

include::modules/otel-send-traces-and-metrics-to-otel-collector-without-sidecar.adoc[leveloffset=+1]
13
observability/otel/otel-troubleshooting.adoc
Normal file
@@ -0,0 +1,13 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-troubleshoot"]
= Troubleshooting the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: otel-troubleshoot

toc::[]

The OpenTelemetry Collector offers multiple ways to measure its health and to investigate data ingestion issues.

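For example, a common first step is to raise the verbosity of the Collector's own logs through its telemetry settings. The following fragment is a sketch of that setting inside the `OpenTelemetryCollector` CR:

[source,yaml]
----
  config: |
    service:
      telemetry:
        logs:
          level: debug # default is info; debug surfaces ingestion errors
----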
include::modules/otel-troubleshoot-collector-logs.adoc[leveloffset=+1]
include::modules/otel-troubleshoot-metrics.adoc[leveloffset=+1]
include::modules/otel-troubleshoot-logging-exporter-stdout.adoc[leveloffset=+1]
20
observability/otel/otel-updating.adoc
Normal file
@@ -0,0 +1,20 @@
:_mod-docs-content-type: ASSEMBLY
[id="dist-tracing-otel-updating"]
= Updating the {OTELShortName}
include::_attributes/common-attributes.adoc[]
:context: dist-tracing-otel-updating

toc::[]

For version upgrades, the {OTELOperator} uses the Operator Lifecycle Manager (OLM), which controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster.

OLM runs by default in {product-title}. OLM queries for available Operators as well as upgrades for installed Operators.

When the {OTELOperator} is upgraded to a new version, it scans for running OpenTelemetry Collector instances that it manages and upgrades them to the version that corresponds to the Operator's new version.

[role="_additional-resources"]
[id="additional-resources_dist-tracing-otel-updating"]
== Additional resources

* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources]
* xref:../../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[Updating installed Operators]
1
observability/otel/otel_rn/_attributes
Symbolic link
@@ -0,0 +1 @@
../../_attributes/
1
observability/otel/otel_rn/images
Symbolic link
@@ -0,0 +1 @@
../../images/
1
observability/otel/otel_rn/modules
Symbolic link
@@ -0,0 +1 @@
../../modules/
43
observability/otel/otel_rn/otel-rn-3.1.adoc
Normal file
43
observability/otel/otel_rn/otel-rn-3.1.adoc
Normal file
@@ -0,0 +1,43 @@
|
||||
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="otel-rn-3-1"]
= Release notes for {OTELName} 3.1
:context: otel-rn-3-1

toc::[]

include::modules/otel-product-overview.adoc[leveloffset=+1]

[id="otel-rn_3-1_new-features-and-enhancements"]
== New features and enhancements

//plural: `enhancements:`
This update introduces the following enhancements:

* {OTELName} 3.1 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.93.0.

* Support for the target allocator in the OpenTelemetry Collector. The target allocator is an optional component of the OpenTelemetry Operator that shards Prometheus receiver scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator provides integration with the Prometheus `PodMonitor` and `ServiceMonitor` custom resources.
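As an illustrative sketch (the collector name and the `debug` exporter are placeholders; field names follow the upstream OpenTelemetry Operator API), the target allocator is enabled on the `OpenTelemetryCollector` custom resource:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel                   # illustrative name
spec:
  mode: statefulset            # the target allocator requires the statefulset mode
  targetAllocator:
    enabled: true
    prometheusCR:
      enabled: true            # discover targets from PodMonitor and ServiceMonitor resources
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs: []   # populated by the target allocator at run time
    exporters:
      debug: {}
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]
----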
////
[id="otel-rn_3-1_removal-notice"]
=== Removal notice
*
////

////
[id="otel-rn_3-1_bug-fixes"]
=== Bug fixes
This update introduces the following bug fixes:
* Fixed support for ...
////

////
[id="otel-rn_3-1_known-issues"]
=== Known issues
//There is currently a known issue:
//There are currently known issues:
////

include::modules/support.adoc[leveloffset=+1]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
398
observability/otel/otel_rn/otel-rn-past-releases.adoc
Normal file
@@ -0,0 +1,398 @@
:_mod-docs-content-type: ASSEMBLY
[id="otel-rn-past-releases"]
= Release notes for past releases of {OTELName}
include::_attributes/common-attributes.adoc[]
:context: otel-rn-past-releases

toc::[]

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

[id="otel-rn_3-0"]
== Release notes for {OTELName} 3.0

{OTELName} 3.0 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.89.0.

[id="otel-rn_3-0_new-features-and-enhancements"]
=== New features and enhancements

This update introduces the following enhancements:

* The *OpenShift distributed tracing data collection Operator* is renamed to the *{OTELOperator}*.
* Support for the ARM architecture.
* Support for the Prometheus receiver for metrics collection.
* Support for the Kafka receiver and exporter for sending traces and metrics to Kafka.
* Support for cluster-wide proxy environments.
* The {OTELOperator} creates the Prometheus `ServiceMonitor` custom resource if the Prometheus exporter is enabled.
* The Operator enables the `Instrumentation` custom resource, which allows injecting upstream OpenTelemetry auto-instrumentation libraries.
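A minimal sketch of such an `Instrumentation` resource (the resource name and Collector endpoint are placeholders; field names follow the upstream OpenTelemetry Operator API) looks like this:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation     # illustrative name
spec:
  exporter:
    endpoint: http://otel-collector:4317  # placeholder Collector OTLP endpoint
----

Workloads then opt in to injection with a pod annotation such as `instrumentation.opentelemetry.io/inject-java: "true"`, where the language suffix selects the injected auto-instrumentation library.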
[id="otel-rn_3-0_removal-notice"]
=== Removal notice

In {OTELName} 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter.
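For example, a collector configuration fragment along these lines (the service endpoint is a placeholder) sends data to the Jaeger collector's OTLP gRPC port with the OTLP exporter instead of the removed Jaeger exporter:

[source,yaml]
----
exporters:
  otlp:
    endpoint: jaeger-collector-headless.tracing-system.svc:4317  # placeholder; OTLP gRPC port of the Jaeger collector
    tls:
      ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----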
[id="otel-rn_3-0_bug-fixes"]
=== Bug fixes

This update introduces the following bug fixes:

* Fixed support for disconnected environments when using the `oc adm catalog mirror` CLI command.

[id="otel-rn_3-0_known-issues"]
=== Known issues

There is currently a known issue:
//There are currently known issues:

* Currently, the cluster monitoring of the {OTELOperator} is disabled due to a bug (link:https://issues.redhat.com/browse/TRACING-3761[TRACING-3761]). The bug prevents the cluster monitoring from scraping metrics from the {OTELOperator} because the `openshift.io/cluster-monitoring=true` label, which is required for the cluster monitoring and service monitor object, is missing.
+
.Workaround
+
You can enable the cluster monitoring as follows:
+
. Add the following label in the Operator namespace: `oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true`
+
. Create a service monitor, role, and role binding:
+
[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: opentelemetry-operator-controller-manager-metrics-service
  namespace: openshift-opentelemetry-operator
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    path: /metrics
    port: https
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-operator
      control-plane: controller-manager
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: otel-operator-prometheus
  namespace: openshift-opentelemetry-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: otel-operator-prometheus
  namespace: openshift-opentelemetry-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: otel-operator-prometheus
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: openshift-monitoring
----
[id="otel-rn_2-9-2"]
== Release notes for {OTELName} 2.9.2

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.9.2 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.81.0.

[id="otel-rn_2-9-2_cves"]
=== CVEs

* This release fixes link:https://bugzilla.redhat.com/show_bug.cgi?id=2246470[CVE-2023-46234].

[id="otel-rn_2-9-2_known-issues"]
=== Known issues

There is currently a known issue:
//There are currently known issues:

* Currently, you must manually set link:https://operatorframework.io/operator-capabilities/[operator maturity] to Level IV, Deep Insights. (link:https://issues.redhat.com/browse/TRACING-3431[TRACING-3431])

[id="otel-rn_2-9-1"]
== Release notes for {OTELName} 2.9.1

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.9.1 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.81.0.

[id="otel-rn_2-9-1_cves"]
=== CVEs

* This release fixes link:https://access.redhat.com/security/cve/cve-2023-44487[CVE-2023-44487].

[id="otel-rn_2-9-1_known-issues"]
=== Known issues

There is currently a known issue:
//There are currently known issues:

* Currently, you must manually set link:https://operatorframework.io/operator-capabilities/[operator maturity] to Level IV, Deep Insights. (link:https://issues.redhat.com/browse/TRACING-3431[TRACING-3431])

[id="otel-rn_2-9"]
== Release notes for {OTELName} 2.9

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.9 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.81.0.

[id="otel-rn_2-9_new-features-and-enhancements"]
=== New features and enhancements

This release introduces the following enhancements for the {OTELShortName}:

* Support for OTLP metrics ingestion. The metrics can be forwarded and stored in `user-workload-monitoring` via the Prometheus exporter.

* Support for the link:https://operatorframework.io/operator-capabilities/[operator maturity] Level IV, Deep Insights, which enables upgrading and monitoring of `OpenTelemetry Collector` instances and the {OTELOperator}.

* Report traces and metrics from remote clusters using OTLP or HTTP and HTTPS.

* Collect {product-title} resource attributes via the `resourcedetection` processor.

* Support for the `managed` and `unmanaged` states in the `OpenTelemetryCollector` custom resource.
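As an illustrative sketch of these capabilities (the collector name and exporter endpoint are placeholders), a collector can ingest OTLP metrics and expose them through the Prometheus exporter while its management state stays `managed`:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-metrics            # illustrative name
spec:
  managementState: managed      # the Operator reconciles this instance; `unmanaged` pauses reconciliation
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    exporters:
      prometheus:
        endpoint: 0.0.0.0:8889  # metrics endpoint for Prometheus scraping
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          exporters: [prometheus]
----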
////
[id="otel-rn_2-9_technology-preview-features"]
==== Technology Preview features
None.
////

////
[id="otel-rn_2-9_bug-fixes"]
==== Bug fixes
None.
////

[id="otel-rn_2-9_known-issues"]
=== Known issues

There is currently a known issue:
//There are currently known issues:

* Currently, you must manually set link:https://operatorframework.io/operator-capabilities/[operator maturity] to Level IV, Deep Insights. (link:https://issues.redhat.com/browse/TRACING-3431[TRACING-3431])
[id="otel-rn_2-8"]
== Release notes for {OTELName} 2.8

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.8 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.74.0.

[id="otel-rn_2-8_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

////
[id="otel-rn_2-8_known-issues"]
== Known issues
None.
////

[id="otel-rn_2-7"]
== Release notes for {OTELName} 2.7

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.7 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.63.1.

[id="otel-rn_2-7_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

[id="otel-rn_2-6"]
== Release notes for {OTELName} 2.6

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.6 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.60.

[id="otel-rn_2-6_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

[id="otel-rn_2-5"]
== Release notes for {OTELName} 2.5

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.5 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.56.

[id="otel-rn_2-5_new-features-and-enhancements"]
=== New features and enhancements

This update introduces the following enhancement:

* Support for collecting Kubernetes resource attributes in the {OTELName} Operator.

[id="otel-rn_2-5_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.
[id="otel-rn_2-4"]
== Release notes for {OTELName} 2.4

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.4 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.49.

[id="otel-rn_2-4_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

////
[id="otel-rn_2-4_known-issues"]
=== Known issues
None.
////
[id="otel-rn_2-3"]
== Release notes for {OTELName} 2.3

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.3.1 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.44.1.

{OTELName} 2.3.0 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.44.0.

[id="otel-rn_2-3_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

////
[id="otel-rn_2-3_known-issues"]
=== Known issues
None.
////
[id="otel-rn_2-2"]
== Release notes for {OTELName} 2.2

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.2 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.42.0.

[id="otel-rn_2-2_technology-preview-features"]
=== Technology Preview features

The unsupported OpenTelemetry Collector components included in the 2.1 release are removed.

[id="otel-rn_2-2_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

////
[id="otel-rn_2-2_known-issues"]
=== Known issues
None.
////
[id="otel-rn_2-1"]
== Release notes for {OTELName} 2.1

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.1 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.41.1.

[id="otel-rn_2-1_technology-preview-features"]
=== Technology Preview features

This release introduces a breaking change to how certificates are configured in the OpenTelemetry custom resource file. With this update, the `ca_file` moves under `tls` in the custom resource, as shown in the following examples.

.CA file configuration for OpenTelemetry version 0.33

[source,yaml]
----
spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----

.CA file configuration for OpenTelemetry version 0.41.1

[source,yaml]
----
spec:
  mode: deployment
  config: |
    exporters:
      jaeger:
        endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
        tls:
          ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
----
[id="otel-rn_2-1_bug-fixes"]
=== Bug fixes

This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes.

////
[id="otel-rn_2-1_known-issues"]
=== Known issues
None.
////
[id="otel-rn_2-0"]
== Release notes for {OTELName} 2.0

:FeatureName: The {OTELName}
include::snippets/technology-preview.adoc[leveloffset=+1]

{OTELName} 2.0 is based on link:https://opentelemetry.io/[OpenTelemetry] 0.33.0.

This release adds the {OTELName} as a link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview], which you install by using the {OTELName} Operator. {OTELName} is based on the link:https://opentelemetry.io/[OpenTelemetry] APIs and instrumentation. The {OTELName} includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the {OTELName}. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor-agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling.

////
[id="otel-rn_2-0_known-issues"]
=== Known issues
None.
////

include::modules/support.adoc[leveloffset=+1]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
1
observability/otel/otel_rn/snippets
Symbolic link
@@ -0,0 +1 @@
../../snippets/
1
observability/otel/snippets
Symbolic link
@@ -0,0 +1 @@
../snippets/