Merge pull request #96782 from max-cx/dt+otel-3.7
OBSDOCS-1698: Distributed Tracing and OpenTelemetry release 3.7
@@ -3222,6 +3222,8 @@ Topics:
Topics:
- Name: Release notes for the Red Hat build of OpenTelemetry
  File: otel-rn
- Name: About the Red Hat build of OpenTelemetry
  File: otel-architecture
- Name: Installing the Red Hat build of OpenTelemetry
  File: otel-installing
- Name: Configuring the Collector
@@ -1,22 +0,0 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr_tracing_rn/distr-tracing-rn-3-1-1.adoc
// * observability/distr_tracing/distr_tracing_rn/distr-tracing-rn-past-releases.adoc
// * observability/distr-tracing-architecture.adoc
// * service_mesh/v2x/ossm-architecture.adoc
// * serverless/serverless-tracing.adoc

:_mod-docs-content-type: CONCEPT
[id="distr-tracing-product-overview_{context}"]
= Distributed tracing overview

As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture.
You can use the {DTProductName} for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.

With the {DTShortName}, you can perform the following functions:

* Monitor distributed transactions

* Optimize performance and latency

* Perform root cause analysis
modules/distr-tracing-tempo-about-rn.adoc (new file)

@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="distr-tracing-product-overview_{context}"]
= About this release

{DTShortName} 3.7 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/tempo-operator-bundle/642c3e0eacf1b5bdbba7654a/history[{TempoOperator} 0.18.0] and is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo] 2.8.2.

[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
modules/distr-tracing-tempo-coo-ui-plugin.adoc (new file)

@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-tempo-configuring.adoc

:_mod-docs-content-type: REFERENCE
[id="distr-tracing-tempo-coo-ui-plugin_{context}"]
= Configuring the UI

You can use the distributed tracing UI plugin of the {coo-first} as the user interface (UI) for the {DTProductName}. For more information about installing and using the distributed tracing UI plugin, see "Distributed tracing UI plugin" in _Cluster Observability Operator_.
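For illustration only, the following is a minimal sketch of enabling the plugin through the Cluster Observability Operator `UIPlugin` resource. The resource name is an assumption, and the API version and the `DistributedTracing` type value might differ in your Cluster Observability Operator version:

[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: distributed-tracing # assumed name for this sketch
spec:
  type: DistributedTracing
----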
@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * observability/distr-tracing-architecture.adoc
// * service_mesh/v2x/ossm-architecture.adoc
// * service_mesh/v1x/ossm-architecture.adoc
// * serverless/observability/tracing/serverless-tracing.adoc

:_mod-docs-content-type: CONCEPT
[id="distr-tracing-tempo-key-concepts-in-distributed-tracing_{context}"]
= Key concepts in distributed tracing

Every time a user takes an action in an application, the architecture executes a request that might require dozens of different services to participate in producing a response.
{DTProductName} lets you perform distributed tracing, which records the path of a request through the various microservices that make up an application.

_Distributed tracing_ is a technique for tying together information about different units of work, usually executed in different processes or hosts, to understand a whole chain of events in a distributed transaction.
Developers can visualize call flows in large microservice architectures with distributed tracing.
It is valuable for understanding serialization, parallelism, and sources of latency.

{DTProductName} records the execution of individual requests across the whole stack of microservices and presents them as traces. A _trace_ is a data/execution path through the system. An end-to-end trace consists of one or more spans.

A _span_ represents a logical unit of work in {DTProductName} that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans can be nested and ordered to model causal relationships.
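For illustration only, the following sketch shows the kind of fields that a single span carries. The field names follow the OpenTelemetry data model, and all values are invented:

[source,yaml]
----
name: "GET /api/orders"                     # operation name
traceId: "4bf92f3577b34da6a3ce929d0e0e4736" # shared by every span in the same trace
spanId: "00f067aa0ba902b7"
parentSpanId: "53995c3f42cd8ad8"            # nesting that models the causal relationship
startTimeUnixNano: 1718000000000000000
endTimeUnixNano: 1718000000250000000        # duration = end time - start time
attributes:                                 # tags added by the instrumentation
  http.request.method: GET
  http.response.status_code: 200
----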
As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture.
You can use {DTProductName} for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.

With {DTShortName}, you can perform the following functions:

* Monitor distributed transactions

* Optimize performance and latency

* Perform root cause analysis

You can combine {DTShortName} with other relevant components of the {product-title}:

* {OTELName} for forwarding traces to a TempoStack instance

* Distributed tracing UI plugin of the {coo-first}
modules/distr-tracing-tempo-rn-bug-fixes.adoc (new file)

@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="fixed-issues_{context}"]
= Fixed issues

This release fixes the following CVE:

* link:https://access.redhat.com/security/cve/cve-2025-22874[CVE-2025-22874]
modules/distr-tracing-tempo-rn-deprecated-features.adoc (new file)

@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="deprecated-features_{context}"]
= Deprecated features

None.
modules/distr-tracing-tempo-rn-enhancements.adoc (new file)

@@ -0,0 +1,10 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="new-features-and-enhancements_{context}"]
= New features and enhancements

Network policy to restrict API access::
With this update, the {TempoOperator} creates a network policy for the Operator to restrict access to the APIs that it uses.
modules/distr-tracing-tempo-rn-known-issues.adoc (new file)

@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="known-issues_{context}"]
= Known issues

Tempo query frontend fails to fetch trace JSON::
In the Jaeger UI, clicking *Trace* and refreshing the page, or going to *Trace* -> *Trace Timeline* -> *Trace JSON* from the Tempo query frontend, might cause the Tempo query pod to fail with an EOF error.
+
To work around this problem, use the distributed tracing UI plugin to view traces.
+
link:https://issues.redhat.com/browse/TRACING-5483[TRACING-5483]
modules/distr-tracing-tempo-rn-removed-features.adoc (new file)

@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="removed-features_{context}"]
= Removed features

None.
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="technology-preview-features_{context}"]
= Technology Preview features

None.

//:FeatureName: Each of these features
//include::snippets/technology-preview.adoc[leveloffset=+1]
modules/otel-about-rn.adoc (new file)

@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-product-overview_{context}"]
= About this release

{OTELName} 3.7 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/opentelemetry-operator-bundle/615618406feffc5384e84400/history[{OTELOperator} 0.135.0] and is based on the open source link:https://opentelemetry.io/docs/collector/[OpenTelemetry] release 0.135.0.

[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
@@ -1,21 +1,20 @@
// Module included in the following assemblies:
//
// * observability/otel/otel_rn/otel-rn-3-2.adoc
// * observability/otel/otel_rn/otel-rn-past-releases.adoc
// * observability/otel/otel-architecture.adoc

:_mod-docs-content-type: CONCEPT
[id="otel-product-overview_{context}"]
= {OTELName} overview
[id="otel-about-product_{context}"]
= About {OTELName}

{OTELName} is based on the open source link:https://opentelemetry.io/[OpenTelemetry project], which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. {OTELName} product provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation.
{OTELName} is based on the open source link:https://opentelemetry.io/[OpenTelemetry project], which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. {OTELName} provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation.

The link:https://opentelemetry.io/docs/collector/[OpenTelemetry Collector] can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.

The OpenTelemetry Collector has a number of features including the following:
The OpenTelemetry Collector provides several features including the following:

Data Collection and Processing Hub:: It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure.

Customizable telemetry data pipeline:: The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers.
Customizable telemetry data pipeline:: The OpenTelemetry Collector is customizable and supports various processors, exporters, and receivers.

Auto-instrumentation features:: Automatic instrumentation simplifies the process of adding observability to applications. Developers do not need to manually instrument their code for basic telemetry data.

@@ -26,3 +25,5 @@ Centralized data collection:: In a microservices architecture, the Collector can
Data enrichment and processing:: Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data.

Multi-backend receiving and exporting:: The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously.

You can use {OTELName} in combination with {TempoName}.
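For illustration only, the following is a minimal sketch of the customizable pipeline described above, with a receiver, a processor, and an exporter wired together in the `config` section of an `OpenTelemetryCollector` custom resource. The component names are placeholders for whatever your deployment needs:

[source,yaml]
----
# ...
  config:
    receivers:
      otlp:
        protocols:
          grpc: {}
    processors:
      batch: {}
    exporters:
      debug: {}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
----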
modules/otel-collector-deployment-modes.adoc (new file)
@@ -0,0 +1,45 @@
//Module included in the following assemblies:
//
// * observability/otel/otel-collector/otel-collector-configuration-intro.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-collector-deployment-modes_{context}"]
= Deployment modes

The `OpenTelemetryCollector` custom resource allows you to specify one of the following deployment modes for the OpenTelemetry Collector:

Deployment:: The default.

StatefulSet:: If you need to run stateful workloads, for example when using the Collector's File Storage Extension or Tail Sampling Processor, use the StatefulSet deployment mode.

DaemonSet:: If you need to scrape telemetry data from every node, for example by using the Collector's Filelog Receiver to read container logs, use the DaemonSet deployment mode.

Sidecar:: If you need access to log files inside a container, inject the Collector as a sidecar, and use the Collector's Filelog Receiver and a shared volume such as `emptyDir`.
+
If you need to configure an application to send telemetry data via `localhost`, inject the Collector as a sidecar, and set up the Collector to forward the telemetry data to an external service via an encrypted and authenticated connection. The Collector runs in the same pod as the application when injected as a sidecar.
+
[NOTE]
====
If you choose the sidecar deployment mode, then in addition to setting the `spec.mode: sidecar` field in the `OpenTelemetryCollector` custom resource (CR), you must also set the `sidecar.opentelemetry.io/inject` annotation as a pod annotation or namespace annotation. If you set this annotation on both the pod and the namespace, the pod annotation takes precedence if it is set to either `false` or the `OpenTelemetryCollector` CR name.

As a pod annotation, the `sidecar.opentelemetry.io/inject` annotation supports several values:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
# ...
  annotations:
    sidecar.opentelemetry.io/inject: "<supported_value>" <1>
# ...
----
<1> Supported values:
+
`false`:: Does not inject the Collector. This is the default if the annotation is missing.
`true`:: Injects the Collector with the configuration of the `OpenTelemetryCollector` CR in the same namespace.
`<collector_name>`:: Injects the Collector with the configuration of the `<collector_name>` `OpenTelemetryCollector` CR in the same namespace.
`<namespace>/<collector_name>`:: Injects the Collector with the configuration of the `<collector_name>` `OpenTelemetryCollector` CR in the `<namespace>` namespace.
====
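For illustration only, the following is a minimal sketch of selecting a non-default deployment mode in the `OpenTelemetryCollector` CR. The Filelog Receiver path is a placeholder, and the sketch omits the extra volume mounts and RBAC that reading container logs on each node would additionally require:

[source,yaml]
----
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: daemonset # <1>
  config:
    receivers:
      filelog:
        include:
          - /var/log/pods/*/*/*.log # placeholder path
    exporters:
      debug: {}
    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [debug]
----
<1> Supported values are `deployment` (default), `statefulset`, `daemonset`, and `sidecar`.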
@@ -27,7 +27,12 @@ spec:
  service:
    telemetry:
      metrics:
        address: ":8888"
        readers:
          - pull:
              exporter:
                prometheus:
                  host: 0.0.0.0
                  port: 8888
    pipelines:
      metrics:
        exporters: [prometheus]

modules/otel-forwarding-data-to-third-party-systems.adoc (new file)
@@ -0,0 +1,100 @@
//Module included in the following assemblies:
//
// * observability/otel/otel-forwarding-data.adoc

:_mod-docs-content-type: PROCEDURE
[id="otel-forwarding-data-to-third-party-systems_{context}"]
= Forwarding telemetry data to third-party systems

The OpenTelemetry Collector exports telemetry data by using the OTLP exporter via the OpenTelemetry Protocol (OTLP), which is implemented over the gRPC or HTTP transports. If you need to forward telemetry data to a third-party system that does not support the OTLP or another protocol supported in the {OTELShortName}, you can deploy an unsupported custom OpenTelemetry Collector that receives telemetry data via the OTLP and exports it to your third-party system by using a custom exporter.

[WARNING]
====
Red{nbsp}Hat does not support custom deployments.
====

.Prerequisites

* You have developed your own unsupported custom exporter that can export telemetry data via the OTLP to your third-party system.

.Procedure

* Deploy a custom Collector either through the OperatorHub or manually:

** If your third-party system supports it, deploy the custom Collector by using the OperatorHub.

** Deploy the custom Collector manually by using a config map, deployment, and service.
+
.Example of a custom Collector deployment
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-otel-collector-config
data:
  otel-collector-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      debug: {}
      prometheus:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug] # <1>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-otel-collector-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: otel-collector
  template:
    metadata:
      labels:
        component: otel-collector
    spec:
      containers:
      - name: opentelemetry-collector
        image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:latest # <2>
        command:
        - "/otelcol-contrib"
        - "--config=/conf/otel-collector-config.yaml"
        ports:
        - name: otlp
          containerPort: 4317
          protocol: TCP
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf
          readOnly: true
      volumes:
      - name: otel-collector-config-vol
        configMap:
          name: custom-otel-collector-config
---
apiVersion: v1
kind: Service
metadata:
  name: custom-otel-collector-service # <3>
  labels:
    component: otel-collector
spec:
  type: ClusterIP
  ports:
  - name: otlp-grpc
    port: 4317
    targetPort: 4317
  selector:
    component: otel-collector
----
<1> Replace `debug` with the required exporter for your third-party system.
<2> Replace the image with the required version of the OpenTelemetry Collector that has the required exporter for your third-party system.
<3> The service name is used in the Red Hat build of OpenTelemetry Collector CR to configure the OTLP exporter.
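For illustration only, the following is a minimal sketch of the corresponding OTLP exporter settings in the supported `OpenTelemetryCollector` CR, pointing at the custom Collector service from the previous example. The namespace and the TLS settings are placeholders:

[source,yaml]
----
# ...
  config:
    exporters:
      otlp:
        endpoint: custom-otel-collector-service.<namespace>.svc.cluster.local:4317 # the service created above
        tls:
          insecure: true # placeholder; configure certificates for an encrypted connection
    service:
      pipelines:
        traces:
          exporters: [otlp]
# ...
----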
@@ -0,0 +1,74 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-collector/

:_mod-docs-content-type: REFERENCE
[id="otel-processors-probabilistic-sampling-processor_{context}"]
= Probabilistic Sampling Processor

If you handle high volumes of telemetry data and seek to reduce costs by reducing processed data volumes, you can use the Probabilistic Sampling Processor as an alternative to the Tail Sampling Processor.

The processor samples a specified percentage of trace spans or log records statelessly and per request.

The processor adds information about the effective sampling probability that it used to the telemetry data:

* In trace spans, the processor encodes the threshold and optional randomness information in the W3C Trace Context `tracestate` fields.

* In log records, the processor encodes the threshold and randomness information as attributes.

The following is an example `OpenTelemetryCollector` custom resource configuration for the Probabilistic Sampling Processor for sampling trace spans:

[source,yaml]
----
# ...
  config:
    processors:
      probabilistic_sampler: # <1>
        sampling_percentage: 15.3 # <2>
        mode: "proportional" # <3>
        hash_seed: 22 # <4>
        sampling_precision: 14 # <5>
        fail_closed: true # <6>
# ...
    service:
      pipelines:
        traces:
          processors: [probabilistic_sampler]
# ...
----
<1> For trace pipelines, the source of randomness is the hashed value of the span trace ID.
<2> Required. Accepts a 32-bit floating-point percentage value at which spans are to be sampled.
<3> Optional. Accepts a supported string value for a sampling logic mode: the default `hash_seed`, `proportional`, or `equalizing`. The `hash_seed` mode applies the Fowler–Noll–Vo (FNV) hash function to the trace ID and weighs the hashed value against the sampling percentage value. You can also use the `hash_seed` mode with units of telemetry other than the trace ID. The `proportional` mode samples a strict, probability-based ratio of the total span quantity, and is based on the OpenTelemetry and World Wide Web Consortium specifications. The `equalizing` mode is useful for lowering the sampling probability to a minimum value across a whole pipeline or applying a uniform sampling probability in Collector deployments where client SDKs have mixed sampling configurations.
<4> Optional. Accepts a 32-bit unsigned integer, which is used to compute the hash algorithm. When this field is not configured, the default seed value is `0`. If you use multiple tiers of Collector instances, you must configure all Collectors of the same tier to the same seed value.
<5> Optional. Determines the number of hexadecimal digits used to encode the sampling threshold. Accepts an integer value. The supported values are `1`-`14`. The default value `4` causes the threshold to be rounded if it contains more than 16 significant bits, which is the case for the `proportional` mode, which uses 56 bits. If you select the `proportional` mode, use a greater value to preserve the precision applied by preceding samplers.
<6> Optional. Rejects spans with sampling errors. Accepts a boolean value. The default value is `true`.

The following is an example `OpenTelemetryCollector` custom resource configuration for the Probabilistic Sampling Processor for sampling log records:

[source,yaml]
----
# ...
  config:
    processors:
      probabilistic_sampler/logs:
        sampling_percentage: 15.3 # <1>
        mode: "hash_seed" # <2>
        hash_seed: 22 # <3>
        sampling_precision: 4 # <4>
        attribute_source: "record" # <5>
        from_attribute: "<log_record_attribute_name>" # <6>
        fail_closed: true # <7>
# ...
    service:
      pipelines:
        logs:
          processors: [probabilistic_sampler/logs]
# ...
----
<1> Required. Accepts a 32-bit floating-point percentage value at which log records are to be sampled.
<2> Optional. Accepts a supported string value for a sampling logic mode: the default `hash_seed`, `equalizing`, or `proportional`. The `hash_seed` mode applies the Fowler–Noll–Vo (FNV) hash function to the trace ID or a specified log record attribute and then weighs the hashed value against the sampling percentage value. You can also use the `hash_seed` mode with units of telemetry other than the trace ID, for example, the `service.instance.id` resource attribute for collecting log records from a percentage of pods. The `equalizing` mode is useful for lowering the sampling probability to a minimum value across a whole pipeline or applying a uniform sampling probability in Collector deployments where client SDKs have mixed sampling configurations. The `proportional` mode samples a strict, probability-based ratio of the total span quantity, and is based on the OpenTelemetry and World Wide Web Consortium specifications.
<3> Optional. Accepts a 32-bit unsigned integer, which is used to compute the hash algorithm. When this field is not configured, the default seed value is `0`. If you use multiple tiers of Collector instances, you must configure all Collectors of the same tier to the same seed value.
<4> Optional. Determines the number of hexadecimal digits used to encode the sampling threshold. Accepts an integer value. The supported values are `1`-`14`. The default value `4` causes the threshold to be rounded if it contains more than 16 significant bits, which is the case for the `proportional` mode, which uses 56 bits. If you select the `proportional` mode, use a greater value to preserve the precision applied by preceding samplers.
<5> Optional. Defines where to look for the log record attribute in `from_attribute`. The log record attribute is used as the source of randomness. Accepts the default `traceID` value or the `record` value.
<6> Optional. The name of a log record attribute to be used to compute the sampling hash, such as a unique log record ID. Accepts a string value. The default value is `""`. Use this field only if you need to specify a log record attribute as the source of randomness, for example when the trace ID is absent, trace ID sampling is disabled, or the `attribute_source` field is set to the `record` value.
<7> Optional. Rejects log records with sampling errors. Accepts a boolean value. The default value is `true`.
modules/otel-rn-bug-fixes.adoc (new file)

@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="fixed-issues_{context}"]
= Fixed issues

None.
modules/otel-rn-deprecated-features.adoc (new file)

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="deprecated-features_{context}"]
= Deprecated features

The OpenCensus Receiver is deprecated::
The OpenCensus Receiver, which provided backward compatibility with the OpenCensus format, is deprecated and might be removed in a future release.

The Collector's service metrics telemetry address is deprecated::
The `metrics.address` field in the `OpenTelemetryCollector` custom resource (CR) is deprecated and might be removed in a future release. Use the `metrics.readers` field instead.
+
Example of using the `readers` field:
+
[source,yaml]
----
# ...
  config:
    service:
      telemetry:
        metrics:
          readers:
            - pull:
                exporter:
                  prometheus:
                    host: 0.0.0.0
                    port: 8888
# ...
----
modules/otel-rn-enhancements.adoc (new file)

@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="new-features-and-enhancements_{context}"]
= New features and enhancements

Network policy to restrict API access:: With this update, the {OTELOperator} creates a network policy for itself and the OpenTelemetry Collector to restrict access to the APIs that they use.

Native sidecars:: With this update, the {OTELOperator} uses native sidecars on {product-title} 4.16 or later.
modules/otel-rn-known-issues.adoc (new file)

@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="known-issues_{context}"]
= Known issues

None.
modules/otel-rn-removed-features.adoc (new file)

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="removed-features_{context}"]
= Removed features

The LokiStack Exporter is removed::
The LokiStack Exporter, which exported data to a LokiStack instance, is removed and no longer supported. You can export data to a LokiStack instance by using the OTLP HTTP Exporter instead.

The Routing Processor is removed::
The Routing Processor, which routed telemetry data to an exporter, is removed and no longer supported. You can route telemetry data by using the Routing Connector instead.
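For illustration only, the following is a minimal sketch of pointing the OTLP HTTP Exporter at a LokiStack gateway as a replacement for the removed LokiStack Exporter. The endpoint host, path, and tenant are placeholders and depend on your LokiStack deployment, which also typically requires authentication and TLS settings that are omitted here:

[source,yaml]
----
# ...
  config:
    exporters:
      otlphttp:
        endpoint: https://<lokistack_gateway_host>/api/logs/v1/<tenant>/otlp # placeholder endpoint
    service:
      pipelines:
        logs:
          exporters: [otlphttp]
# ...
----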
modules/otel-rn-technology-preview-features.adoc (new file)

@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="technology-preview-features_{context}"]
= Technology Preview features

[IMPORTANT]
====
[subs="attributes+"]
Technology Preview features are not supported with Red{nbsp}Hat production service level agreements (SLAs) and might not be functionally complete. Red{nbsp}Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red{nbsp}Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
====

Probabilistic Sampling Processor (Technology Preview)::
This release introduces the Probabilistic Sampling Processor as a Technology Preview feature for the {OTELShortName} Collector. The Probabilistic Sampling Processor samples a specified percentage of trace spans or log records statelessly and per request. You can use the Probabilistic Sampling Processor if you handle high volumes of telemetry data and seek to reduce costs by reducing processed data volumes.
modules/otel-troubleshoot-network-policies.adoc (new file)

@@ -0,0 +1,35 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-troubleshooting.adoc

:_mod-docs-content-type: PROCEDURE
[id="troubleshoot-network-policies_{context}"]
= Disabling network policies

The {OTELOperator} creates network policies that control traffic to the Operator and its operands and improve security.
By default, the network policies are enabled and configured to allow traffic to all the required components. No additional configuration is needed.

If you are experiencing traffic issues for the OpenTelemetry Collector or its Target Allocator component, the problem might be caused by the default network policy configuration. You can disable network policies for the OpenTelemetry Collector to troubleshoot the issue.

.Prerequisites

* You have access to the cluster as a cluster administrator with the `cluster-admin` role.

.Procedure

* Disable the network policy for the OpenTelemetry Collector by configuring the `OpenTelemetryCollector` custom resource (CR):
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: observability
spec:
  networkPolicy:
    enabled: false # <1>
# ...
----
<1> Specify whether to enable network policies by setting `networkPolicy.enabled` to `true` (default) or `false`. Setting it to `false` disables the creation of network policies.
@@ -6,26 +6,24 @@ include::_attributes/common-attributes.adoc[]

toc::[]

Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response.
{DTProductName} lets you perform distributed tracing, which records the path of a request through various microservices that make up an application.
include::modules/distr-tracing-tempo-key-concepts-in-distributed-tracing.adoc[leveloffset=+1]

_Distributed tracing_ is a technique that is used to tie the information about different units of work together — usually executed in different processes or hosts — to understand a whole chain of events in a distributed transaction.
Developers can visualize call flows in large microservice architectures with distributed tracing.
It is valuable for understanding serialization, parallelism, and sources of latency.

{DTProductName} records the execution of individual requests across the whole stack of microservices, and presents them as traces. A _trace_ is a data/execution path through the system. An end-to-end trace is comprised of one or more spans.

A _span_ represents a logical unit of work in {DTProductName} that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
// xreffing to the installation page until further notice because OTEL content is currently planned for internal restructuring across pages that is likely to result in renamed page files
* xref:../../observability/otel/otel-installing.adoc#install-otel[{OTELName}]
* xref:../../observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc#distributed-tracing-ui-plugin[Distributed tracing UI plugin]

include::modules/distr-tracing-features.adoc[leveloffset=+1]

include::modules/distr-tracing-architecture.adoc[leveloffset=+1]

////
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources

// xreffing to the installation page until further notice because OTEL content is currently planned for internal restructuring across pages that is likely to result in renamed page files
* xref:../../observability/otel/otel-installing.adoc#install-otel[{OTELName}]
* xref:../../observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc#distributed-tracing-ui-plugin[Distributed tracing UI plugin]
////
@@ -33,6 +33,12 @@ include::modules/distr-tracing-tempo-config-query-frontend.adoc[leveloffset=+1]
.Additional resources
* xref:../../nodes/scheduling/nodes-scheduler-taints-tolerations.adoc#nodes-scheduler-taints-tolerations-about_nodes-scheduler-taints-tolerations[Understanding taints and tolerations]

include::modules/distr-tracing-tempo-coo-ui-plugin.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
* xref:../../observability/cluster_observability_operator/ui_plugins/distributed-tracing-ui-plugin.adoc#distributed-tracing-ui-plugin[Distributed tracing UI plugin]

include::modules/distr-tracing-tempo-config-spanmetrics.adoc[leveloffset=+1]

[role="_additional-resources"]
observability/otel/otel-architecture.adoc (new file)

@@ -0,0 +1,9 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="otel-architecture"]
= About {OTELName}
:context: otel-architecture

toc::[]

include::modules/otel-about.adoc[leveloffset=+1]
@@ -8,6 +8,8 @@ toc::[]

The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELShortName} resources. You can install the default configuration or modify the file.

include::modules/otel-collector-deployment-modes.adoc[leveloffset=+1]

include::modules/otel-collector-config-options.adoc[leveloffset=+1]

include::modules/otel-creating-required-RBAC-resources-automatically.adoc[leveloffset=+1]
@@ -18,7 +18,6 @@ Currently, the following General Availability and Technology Preview processors
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#span-processor_otel-collector-processors[Span Processor]
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#kubernetes-attributes-processor_otel-collector-processors[Kubernetes Attributes Processor]
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#filter-processor_otel-collector-processors[Filter Processor]
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#routing-processor_otel-collector-processors[Routing Processor]
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#cumulativetodelta-processor_otel-collector-processors[Cumulative-to-Delta Processor]
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#groupbyattrsprocessor-processor_otel-collector-processors[Group-by-Attributes Processor]
- xref:../../../observability/otel/otel-collector/otel-collector-processors.adoc#transform-processor_otel-collector-processors[Transform Processor]
@@ -372,40 +371,6 @@ include::snippets/technology-preview.adoc[]
<2> Filters the spans that have the `container.name == app_container_1` attribute.
<3> Filters the spans that have the `host.name == localhost` resource attribute.

[id="routing-processor_{context}"]
== Routing Processor

The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value.

:FeatureName: The Routing Processor
include::snippets/technology-preview.adoc[]

.OpenTelemetry Collector custom resource with an enabled OTLP Exporter
[source,yaml]
----
# ...
  config:
    processors:
      routing:
        from_attribute: X-Tenant # <1>
        default_exporters: # <2>
          - jaeger
        table: # <3>
          - value: acme
            exporters: [jaeger/acme]
    exporters:
      jaeger:
        endpoint: localhost:14250
      jaeger/acme:
        endpoint: localhost:24250
# ...
----
<1> The HTTP header name for the lookup value when performing the route.
<2> The default exporter when the attribute value is not present in the table in the next section.
<3> The table that defines which values are to be routed to which exporters.

Optionally, you can create an `attribute_source` configuration, which defines where to look for the attribute that you specify in the `from_attribute` field. The supported values are `context` for searching the context including the HTTP headers, and `resource` for searching the resource attributes.

[id="cumulativetodelta-processor_{context}"]
== Cumulative-to-Delta Processor

@@ -924,6 +889,8 @@ You can choose and combine policies from the following list:
* link:https://opentelemetry.io/blog/2022/tail-sampling/[Tail Sampling with OpenTelemetry: Why it’s useful, how to do it, and what to consider] (OpenTelemetry Blog)
* link:https://opentelemetry.io/docs/collector/deployment/gateway/[Gateway] (OpenTelemetry Documentation)

include::modules/otel-processors-probabilistic-sampling-processor.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources

@@ -593,6 +593,11 @@ subjects:

The OpenCensus Receiver provides backward compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format via gRPC, or via HTTP and JSON.

[WARNING]
====
The OpenCensus Receiver is deprecated and might be removed in a future major release.
====

.OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver
[source,yaml]
----
@@ -11,3 +11,10 @@ You can use the OpenTelemetry Collector to forward your telemetry data.
include::modules/otel-forwarding-traces.adoc[leveloffset=+1]

include::modules/otel-forwarding-logs-to-tempostack.adoc[leveloffset=+1]

include::modules/otel-forwarding-data-to-third-party-systems.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP)]
@@ -20,6 +20,8 @@ include::modules/otel-troubleshoot-metrics.adoc[leveloffset=+1]

include::modules/otel-troubleshoot-debug-exporter-stdout.adoc[leveloffset=+1]

include::modules/otel-troubleshoot-network-policies.adoc[leveloffset=+1]

include::modules/otel-troubleshoot-network-traffic.adoc[leveloffset=+1]

[role="_additional-resources"]
@@ -10,7 +10,7 @@ toc::[]
Distributed tracing records the path of a request through the various services that make up an application. It is used to tie information about different units of work together, to understand a whole chain of events in a distributed transaction. The units of work might be executed in different processes or hosts.

ifdef::openshift-enterprise[]
include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]
include::modules/distr-tracing-tempo-key-concepts-in-distributed-tracing.adoc[leveloffset=+1]
endif::[]

ifdef::openshift-enterprise[]

@@ -38,7 +38,7 @@ Jaeger records the execution of individual requests across the whole stack of mi

A *span* represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+2]
include::modules/distr-tracing-tempo-key-concepts-in-distributed-tracing.adoc[leveloffset=+2]

include::modules/jaeger-architecture.adoc[leveloffset=+2]

@@ -37,7 +37,7 @@ The {JaegerShortName} records the execution of individual requests across the wh

A *span* represents a logical unit of work that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+2]
include::modules/distr-tracing-tempo-key-concepts-in-distributed-tracing.adoc[leveloffset=+2]

include::modules/distr-tracing-architecture.adoc[leveloffset=+2]

@@ -14,9 +14,9 @@ metadata:
  name: otel
  namespace: <permitted_project_of_opentelemetry_collector_instance> # <1>
spec:
  mode: deployment
  mode: <deployment_mode> # <2>
  config:
    receivers: # <2>
    receivers: # <3>
      otlp:
        protocols:
          grpc:
@@ -28,13 +28,13 @@ spec:
          thrift_compact: {}
          thrift_http: {}
      zipkin: {}
    processors: # <3>
    processors: # <4>
      batch: {}
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters: # <4>
    exporters: # <5>
      debug: {}
    service:
      pipelines:
@@ -44,6 +44,7 @@ spec:
      exporters: [debug]
----
<1> The project that you have chosen for the `OpenTelemetryCollector` deployment. Project names beginning with the `openshift-` prefix are not permitted.
<2> For details, see the "Receivers" page.
<3> For details, see the "Processors" page.
<4> For details, see the "Exporters" page.
<2> The deployment mode with the following supported values: the default `deployment`, `daemonset`, `statefulset`, or `sidecar`. For details, see _Deployment Modes_.
<3> For details, see _Receivers_.
<4> For details, see _Processors_.
<5> For details, see _Exporters_.