// Module included in the following assemblies:
//
// network_observability/configuring-operator.adoc

:_mod-docs-content-type: PROCEDURE
[id="network-observability-enriched-flows_{context}"]
= Export enriched network flow data

You can send network flows to Kafka, IPFIX, or both at the same time. Any processor or storage that supports Kafka or IPFIX input, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data.

.Prerequisites

* Your Kafka or IPFIX collector endpoints are available from Network Observability `flowlogs-pipeline` pods.

.Procedure

. In the web console, navigate to *Operators* -> *Installed Operators*.
. Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster* and then select the *YAML* tab.
. Edit the `FlowCollector` to configure `spec.exporters` as follows:
+
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  exporters:
  - type: Kafka <3>
    kafka:
      address: "kafka-cluster-kafka-bootstrap.netobserv"
      topic: netobserv-flows-export <1>
      tls:
        enable: false <2>
  - type: IPFIX <3>
    ipfix:
      targetHost: "ipfix-collector.ipfix.svc.cluster.local"
      targetPort: 4739
      transport: tcp or udp <4>
----
<1> The Network Observability Operator exports all flows to the configured Kafka topic.
<2> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret in the namespace where the `flowlogs-pipeline` processor component is deployed (default: `netobserv`), and it must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must be available in that namespace as well (they can be generated, for example, by using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`.
<3> You can export flows to IPFIX instead of, or in addition to, exporting flows to Kafka.
<4> Optionally, you can specify the transport protocol. The default value is `tcp`, but you can also specify `udp`.
. After configuration, network flows data can be sent to an available output in JSON format. For more information, see _Network flows format reference_.
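
To illustrate the TLS callout above, the following is a sketch of what the Kafka exporter might look like with TLS enabled and a CA certificate referenced through `spec.exporters.tls.caCert`. The ConfigMap name `kafka-ca` and the key `ca.crt` are hypothetical examples; verify the exact field names against the `FlowCollector` API reference for your installed version.

[source,yaml]
----
# Sketch only: enabling TLS for the Kafka exporter.
# The ConfigMap name "kafka-ca" and key "ca.crt" are hypothetical.
exporters:
- type: Kafka
  kafka:
    address: "kafka-cluster-kafka-bootstrap.netobserv"
    topic: netobserv-flows-export
    tls:
      enable: true
      caCert:
        type: configmap   # the CA certificate can also come from a secret
        name: kafka-ca    # hypothetical ConfigMap holding the CA certificate
        certFile: ca.crt  # key within the ConfigMap
----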
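
Once flows reach the configured Kafka topic, any downstream consumer can decode them as JSON. The following minimal Python sketch shows how a consumer might parse one exported flow record; the sample payload and the field names `SrcAddr`, `DstAddr`, and `Bytes` are illustrative assumptions, not an exhaustive schema, so consult the _Network flows format reference_ for the authoritative field list.

[source,python]
----
import json

def parse_flow(raw: str) -> dict:
    """Decode one JSON-encoded flow record and pick out a few fields.

    The field names below (SrcAddr, DstAddr, Bytes) are illustrative;
    see the "Network flows format reference" for the full schema.
    """
    flow = json.loads(raw)
    return {
        "src": flow.get("SrcAddr"),
        "dst": flow.get("DstAddr"),
        "bytes": flow.get("Bytes", 0),
    }

# Example payload shaped like an exported flow (values are made up):
sample = '{"SrcAddr": "10.128.0.5", "DstAddr": "10.128.0.9", "Bytes": 1500}'
print(parse_flow(sample))  # {'src': '10.128.0.5', 'dst': '10.128.0.9', 'bytes': 1500}
----

In a real deployment this function would sit inside a Kafka consumer loop reading from the `netobserv-flows-export` topic.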