// Module included in the following assemblies:
//
// network_observability/configuring-operator.adoc

:_mod-docs-content-type: PROCEDURE
[id="network-observability-enriched-flows_{context}"]
= Export enriched network flow data

You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry, Jaeger, or Prometheus.

.Prerequisites

* Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability `flowlogs-pipeline` pods.

.Procedure

. In the web console, navigate to *Operators* -> *Installed Operators*.
. Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster* and then select the *YAML* tab.
. Edit the `FlowCollector` to configure `spec.exporters` as follows:
+
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  exporters:
  - type: Kafka <1>
    kafka:
      address: "kafka-cluster-kafka-bootstrap.netobserv"
      topic: netobserv-flows-export <2>
      tls:
        enable: false <3>
  - type: IPFIX <1>
    ipfix:
      targetHost: "ipfix-collector.ipfix.svc.cluster.local"
      targetPort: 4739
      transport: tcp or udp <4>
  - type: OpenTelemetry <1>
    openTelemetry:
      targetHost: my-otelcol-collector-headless.otlp.svc
      targetPort: 4317
      type: grpc <5>
      logs: <6>
        enable: true
      metrics: <7>
        enable: true
        prefix: netobserv
        pushTimeInterval: 20s <8>
        expiryTime: 2m
      # fieldsMapping: <9>
      #   input: SrcAddr
      #   output: source.address
----
<1> You can export flows to IPFIX, OpenTelemetry, and Kafka individually or concurrently.
<2> The Network Observability Operator exports all flows to the configured Kafka topic.
<3> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a config map or a secret in the namespace where the `flowlogs-pipeline` processor component is deployed (default: `netobserv`), and it must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must also be available in that namespace, for example generated by using the AMQ Streams User Operator, and referenced with `spec.exporters.tls.userCert`. See the example TLS configuration that follows these callout descriptions.
<4> You have the option to specify transport. The default value is `tcp`, but you can also specify `udp`.
<5> The protocol of the OpenTelemetry connection. The available options are `http` and `grpc`.
<6> OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki.
<7> OpenTelemetry configuration for exporting metrics, which are the same as the metrics created for Prometheus. These configurations are specified in the `spec.processor.metrics.includeList` parameter of the `FlowCollector` custom resource, along with any custom metrics that you defined by using the `FlowMetric` custom resource.
<8> The time interval at which metrics are sent to the OpenTelemetry collector.
<9> *Optional*: Network Observability network flow formats are automatically renamed to an OpenTelemetry-compliant format. The `fieldsMapping` specification gives you the ability to customize the OpenTelemetry output format. For example, in the YAML sample, `SrcAddr` is the Network Observability input field, and it is renamed `source.address` in the OpenTelemetry output. You can see both the Network Observability and OpenTelemetry formats in the "Network flows format reference".

After configuration, network flow data can be sent to an available output in JSON format. For more information, see "Network flows format reference".
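
The OpenTelemetry metrics export shown in callout 7 sends the same metrics that the processor generates for Prometheus, so you control which metrics are exported by editing `spec.processor.metrics.includeList`. The following sketch shows the idea; the metric names listed are examples of commonly available identifiers, not an exhaustive or authoritative set, so check the `FlowCollector` API reference for the values supported in your version.

[source,yaml]
----
# Fragment of the FlowCollector resource.
# The metric names below are illustrative; consult the API reference
# for the full list of supported identifiers.
spec:
  processor:
    metrics:
      includeList:
      - node_ingress_bytes_total
      - workload_ingress_bytes_total
      - namespace_flows_total
----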