mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-06 06:46:26 +01:00

OSDOCS-11555:Configuring network observability network policy

OSDOCS-10036:Exporting Network Observability metrics to OpenTelemetry

OSDOCS-11731: Developer perspective for Network Observability

OSDOCS-10877: Virtualization in Network Observability

NetObserv FlowMetric API regeneration

Network Observability 1.7 regenerate flows format doc

OSDOCS-11821: NetObserv CLI updates

NetObserv 1.7 FlowCollector API regeneration

Network Observability 1.7 Release Notes
This commit is contained in:
Sara Thomas
2024-08-02 16:42:59 -04:00
committed by openshift-cherrypick-robot
parent 0d4e8746b4
commit 2dd67ca52a
20 changed files with 923 additions and 249 deletions

View File

@@ -3085,6 +3085,8 @@ Topics:
File: network-observability-operator-monitoring
- Name: Scheduling resources
File: network-observability-scheduling-resources
- Name: Secondary networks
File: network-observability-secondary-networks
- Name: Network Observability CLI
Dir: netobserv_cli
Topics:

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * network_observability/configuring-operator.adoc
// * observability/network_observability/network-observability-secondary-networks.adoc
:_mod-docs-content-type: PROCEDURE
[id="network-observability-SR-IOV-config_{context}"]
@@ -16,7 +16,7 @@ In order to collect traffic from a cluster with a Single Root I/O Virtualization
. Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster* and then select the *YAML* tab.
. Configure the `FlowCollector` custom resource. A sample configuration is as follows:
+
.Configure `FlowCollector` for SR-IOV monitoring
[source,yaml]
----

View File

@@ -25,6 +25,7 @@ $ oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --pro
----
live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once
----
. Use the *PageUp* and *PageDown* keys to toggle between *None*, *Resource*, *Zone*, *Host*, *Owner*, and *all of the above*.
. To stop capturing, press kbd:[Ctrl+C]. The data that was captured is written to two separate files in an `./output` directory located in the same path used to install the CLI.
. View the captured data in the `./output/flow/<capture_date_time>.json` JSON file, which contains JSON arrays of the captured data.
+

View File

@@ -16,7 +16,7 @@ You can capture packets using the Network Observability CLI.
+
[source,terminal]
----
$ oc netobserv packets tcp,80
$ oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----
. Add filters to the `live table filter` prompt in the terminal to refine the incoming packets. An example filter is as follows:
+
@@ -24,6 +24,7 @@ $ oc netobserv packets tcp,80
----
live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once
----
. Use the *PageUp* and *PageDown* keys to toggle between *None*, *Resource*, *Zone*, *Host*, *Owner*, and *all of the above*.
. To stop capturing, press kbd:[Ctrl+C].
. View the captured data, which is written to a single file in an `./output/pcap` directory located in the same path that was used to install the CLI:
.. The `./output/pcap/<capture_date_time>.pcap` file can be opened with Wireshark.

View File

@@ -6,7 +6,54 @@
:_mod-docs-content-type: PROCEDURE
[id="network-observability-network-policy_{context}"]
= Creating a network policy for Network Observability
You might need to create a network policy to secure ingress traffic to the `netobserv` namespace. In the web console, you can create a network policy using the form view.
If you want to further customize the network policies for the `netobserv` and `netobserv-privileged` namespaces, you must disable the managed installation of the policy from the `FlowCollector` CR, and create your own. You can use the network policy resources that are enabled from the `FlowCollector` CR as a starting point for the procedure that follows:
.Example `netobserv` network policy
[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netobserv
  namespace: netobserv
spec:
ingress:
- from:
- podSelector: {}
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: netobserv-privileged
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: openshift-console
ports:
- port: 9001
protocol: TCP
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: openshift-monitoring
podSelector: {}
policyTypes:
- Ingress
----
.Example `netobserv-privileged` network policy
[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netobserv
namespace: netobserv-privileged
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: openshift-monitoring
podSelector: {}
policyTypes:
- Ingress
----
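If you prefer to manage these policies yourself, you can turn off the managed installation in the `FlowCollector` CR before applying your own resources. A minimal sketch, assuming only the `spec.networkPolicy.enable` specification:

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  networkPolicy:
    enable: false # disable the managed network policies, then create your own
----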
.Procedure
. Navigate to *Networking* -> *NetworkPolicies*.

View File

@@ -0,0 +1,40 @@
// Module included in the following assemblies:
// * networking/network_observability/network-observability-network-policy.adoc
:_mod-docs-content-type: PROCEDURE
[id="network-observability-deploy-network-policy_{context}"]
= Configuring an ingress network policy by using the FlowCollector custom resource
You can configure the `FlowCollector` custom resource (CR) to deploy an ingress network policy for Network Observability by setting the `spec.networkPolicy.enable` specification to `true`. By default, the specification is `false`.
If you have installed Loki, Kafka, or any exporter in a different namespace that also has a network policy, you must ensure that the Network Observability components can communicate with them. Consider the following about your setup:
* Connection to Loki (as defined in the `FlowCollector` CR `spec.loki` parameter)
* Connection to Kafka (as defined in the `FlowCollector` CR `spec.kafka` parameter)
* Connection to any exporter (as defined in the `FlowCollector` CR `spec.exporters` parameter)
* If you are using Loki and including it in the policy target, connection to an external object storage (as defined in your `LokiStack` related secret)
.Procedure
. In the web console, go to the *Operators* -> *Installed Operators* page.
. Under the *Provided APIs* heading for *Network Observability*, select *Flow Collector*.
. Select *cluster* and then select the *YAML* tab.
. Configure the `FlowCollector` CR. A sample configuration is as follows:
+
[id="network-observability-flowcollector-configuring-network-policy_{context}"]
.Example `FlowCollector` CR for network policy
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
name: cluster
spec:
namespace: netobserv
networkPolicy:
enable: true <1>
additionalNamespaces: ["openshift-console", "openshift-monitoring"] <2>
# ...
----
<1> By default, the `enable` value is `false`.
<2> Default values are `["openshift-console", "openshift-monitoring"]`.

View File

@@ -6,10 +6,11 @@
[id="network-observability-enriched-flows_{context}"]
= Export enriched network flow data
You can send network flows to Kafka, IPFIX, or both at the same time. Any processor or storage that supports Kafka or IPFIX input, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data.
You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry, Jaeger, or Prometheus.
.Prerequisites
* Your Kafka or IPFIX collector endpoint(s) are available from Network Observability `flowlogs-pipeline` pods.
* Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability `flowlogs-pipeline` pods.
.Procedure
@@ -26,22 +27,41 @@ metadata:
name: cluster
spec:
exporters:
- type: Kafka <3>
- type: Kafka <1>
kafka:
address: "kafka-cluster-kafka-bootstrap.netobserv"
topic: netobserv-flows-export <1>
topic: netobserv-flows-export <2>
tls:
enable: false <2>
- type: IPFIX <3>
enable: false <3>
- type: IPFIX <1>
ipfix:
targetHost: "ipfix-collector.ipfix.svc.cluster.local"
targetPort: 4739
transport: tcp or udp <4>
- type: OpenTelemetry <1>
openTelemetry:
targetHost: my-otelcol-collector-headless.otlp.svc
targetPort: 4317
type: grpc <5>
logs: <6>
enable: true
metrics: <7>
enable: true
prefix: netobserv
pushTimeInterval: 20s <8>
expiryTime: 2m
# fieldsMapping: <9>
# input: SrcAddr
# output: source.address
----
<1> The Network Observability Operator exports all flows to the configured Kafka topic.
<2> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: netobserv). It must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`.
<3> You can export flows to IPFIX instead of or in conjunction with exporting flows to Kafka.
<1> You can export flows to IPFIX, OpenTelemetry, and Kafka individually or concurrently.
<2> The Network Observability Operator exports all flows to the configured Kafka topic.
<3> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: netobserv). It must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`.
<4> You have the option to specify transport. The default value is `tcp` but you can also specify `udp`.
. After configuration, network flows data can be sent to an available output in a JSON format. For more information, see _Network flows format reference_.
<5> The protocol of OpenTelemetry connection. The available options are `http` and `grpc`.
<6> OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki.
<7> OpenTelemetry configuration for exporting metrics, which are the same as the metrics created for Prometheus. These configurations are specified in the `spec.processor.metrics.includeList` parameter of the `FlowCollector` custom resource, along with any custom metrics you defined using the `FlowMetrics` custom resource.
<8> The time interval that metrics are sent to the OpenTelemetry collector.
<9> *Optional*: Network Observability network flow formats are automatically renamed to an OpenTelemetry-compliant format. The `fieldsMapping` specification gives you the ability to customize the OpenTelemetry output format. For example, in the YAML sample, `SrcAddr` is the Network Observability input field, and it is renamed `source.address` in the OpenTelemetry output. You can see both Network Observability and OpenTelemetry formats in the "Network flows format reference".
After configuration, network flows data can be sent to an available output in a JSON format. For more information, see "Network flows format reference".

View File

@@ -1,3 +1,4 @@
// Automatically generated by 'openshift-apidocs-gen'. Do not edit.
:_mod-docs-content-type: REFERENCE
[id="network-observability-flowcollector-api-specifications_{context}"]
= FlowCollector API specifications
@@ -115,6 +116,10 @@ Kafka can provide better scalability, resiliency, and high availability (for mor
| `string`
| Namespace where Network Observability pods are deployed.
| `networkPolicy`
| `object`
| `networkPolicy` defines ingress network policy settings for Network Observability components isolation.
| `processor`
| `object`
| `processor` defines the settings of the component that receives the flows from the agent,
@@ -197,16 +202,21 @@ Otherwise it is matched as a case-sensitive string.
| `features`
| `array (string)`
| List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are: +
- `PacketDrop`: enable the packets drop flows logging feature. This feature requires mounting
the kernel debug filesystem, so the eBPF pod has to run as privileged.
the kernel debug filesystem, so the eBPF agent pods have to run as privileged.
If the `spec.agent.ebpf.privileged` parameter is not set, an error is reported. +
- `DNSTracking`: enable the DNS tracking feature. +
- `FlowRTT`: enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic. +
- `NetworkEvents`: enable the network events monitoring feature, such as correlating flows and network policies.
This feature requires mounting the kernel debug filesystem, so the eBPF agent pods have to run as privileged.
It requires using the OVN-Kubernetes network plugin with the Observability feature.
IMPORTANT: This feature is available as a Developer Preview. +
| `flowFilter`
| `object`
@@ -377,8 +387,9 @@ Examples: `10.10.10.0/24` or `100:100:100:100::/64`
| `destPorts`
| `integer-or-string`
| `destPorts` defines the destination ports to filter flows by.
To filter a single port, set a single port as an integer value. For example: `destPorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example: `destPorts: "80-100"`.
To filter a single port, set a single port as an integer value. For example, `destPorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example, `destPorts: "80-100"`.
To filter two ports, use a "port1,port2" in string format. For example, `ports: "80,100"`.
| `direction`
| `string`
@@ -401,11 +412,16 @@ To filter a range of ports, use a "start-end" range in string format. For exampl
| `peerIP` defines the IP address to filter flows by.
Example: `10.10.10.10`.
| `pktDrops`
| `boolean`
| `pktDrops` filters flows with packet drops
| `ports`
| `integer-or-string`
| `ports` defines the ports to filter flows by. It is used both for source and destination ports.
To filter a single port, set a single port as an integer value. For example: `ports: 80`.
To filter a range of ports, use a "start-end" range in string format. For example: `ports: "80-100"`.
To filter a single port, set a single port as an integer value. For example, `ports: 80`.
To filter a range of ports, use a "start-end" range in string format. For example, `ports: "80-100"`.
To filter two ports, use a "port1,port2" in string format. For example, `ports: "80,100"`.
| `protocol`
| `string`
@@ -414,8 +430,13 @@ To filter a range of ports, use a "start-end" range in string format. For exampl
| `sourcePorts`
| `integer-or-string`
| `sourcePorts` defines the source ports to filter flows by.
To filter a single port, set a single port as an integer value. For example: `sourcePorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example: `sourcePorts: "80-100"`.
To filter a single port, set a single port as an integer value. For example, `sourcePorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example, `sourcePorts: "80-100"`.
To filter two ports, use a "port1,port2" in string format. For example, `ports: "80,100"`.
| `tcpFlags`
| `string`
| `tcpFlags` defines the TCP flags to filter flows by.
|===
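The filter properties above can be combined in the `FlowCollector` CR. A minimal sketch of an eBPF flow filter, assuming only the fields described in this table (the CIDR and port values are illustrative):

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    ebpf:
      flowFilter:
        enable: true
        action: Accept
        cidr: 10.0.62.0/24 # keep only flows matching this CIDR
        protocol: TCP
        ports: "80,100" # two ports, in "port1,port2" string format
----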
== .spec.agent.ebpf.metrics
@@ -440,7 +461,7 @@ Type::
| `disableAlerts` is a list of alerts that should be disabled.
Possible values are: +
`NetObservDroppedFlows`, which is triggered when the eBPF agent is dropping flows, such as when the BPF hashmap is full or the capacity limiter is being triggered. +
`NetObservDroppedFlows` is triggered when the eBPF agent is missing packets or flows, such as when the eBPF hashmap is busy or full, or the capacity limiter is triggered. +
| `enable`
@@ -488,6 +509,8 @@ TLS configuration.
Type::
`object`
Required::
- `type`
@@ -949,6 +972,10 @@ Required::
| `object`
| Kafka configuration, such as the address and topic, to send enriched flows to.
| `openTelemetry`
| `object`
| OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to.
| `type`
| `string`
| `type` selects the type of exporters. The available options are `Kafka`, `IPFIX`, and `OpenTelemetry`.
@@ -1211,6 +1238,267 @@ Type::
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.exporters[].openTelemetry
Description::
+
--
OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to.
--
Type::
`object`
Required::
- `targetHost`
- `targetPort`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `fieldsMapping`
| `array`
| Custom fields mapping to an OpenTelemetry conformant format.
By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal .
As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own.
| `headers`
| `object (string)`
| Headers to add to messages (optional)
| `logs`
| `object`
| OpenTelemetry configuration for logs.
| `metrics`
| `object`
| OpenTelemetry configuration for metrics.
| `protocol`
| `string`
| Protocol of the OpenTelemetry connection. The available options are `http` and `grpc`.
| `targetHost`
| `string`
| Address of the OpenTelemetry receiver.
| `targetPort`
| `integer`
| Port for the OpenTelemetry receiver.
| `tls`
| `object`
| TLS client configuration.
|===
== .spec.exporters[].openTelemetry.fieldsMapping
Description::
+
--
Custom fields mapping to an OpenTelemetry conformant format.
By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal .
As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own.
--
Type::
`array`
== .spec.exporters[].openTelemetry.fieldsMapping[]
Description::
+
--
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `input`
| `string`
|
| `multiplier`
| `integer`
|
| `output`
| `string`
|
|===
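Combining these properties, a sketch of a custom mapping that renames fields and scales a numeric value with `multiplier` (the field names are taken from the flows format reference; the target host and scaling factor are illustrative):

[source,yaml]
----
exporters:
  - type: OpenTelemetry
    openTelemetry:
      targetHost: my-otelcol-collector-headless.otlp.svc
      targetPort: 4317
      fieldsMapping:
        - input: SrcAddr          # Network Observability field name
          output: source.address  # desired OpenTelemetry output name
        - input: TimeFlowRttNs
          output: tcp.rtt
          multiplier: 1           # optional numeric scaling of the value (illustrative)
----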
== .spec.exporters[].openTelemetry.logs
Description::
+
--
OpenTelemetry configuration for logs.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `enable`
| `boolean`
| Set `enable` to `true` to send logs to an OpenTelemetry receiver.
|===
== .spec.exporters[].openTelemetry.metrics
Description::
+
--
OpenTelemetry configuration for metrics.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `enable`
| `boolean`
| Set `enable` to `true` to send metrics to an OpenTelemetry receiver.
| `pushTimeInterval`
| `string`
| Specify how often metrics are sent to a collector.
|===
== .spec.exporters[].openTelemetry.tls
Description::
+
--
TLS client configuration.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority.
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property.
|===
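Putting the TLS properties together, a sketch of an OpenTelemetry exporter configured for one-way TLS (the config map name and certificate file name are hypothetical):

[source,yaml]
----
exporters:
  - type: OpenTelemetry
    openTelemetry:
      targetHost: my-otelcol-collector-headless.otlp.svc
      targetPort: 4317
      tls:
        enable: true
        insecureSkipVerify: false
        caCert:
          type: configmap # or "secret"
          name: otel-ca   # hypothetical config map containing the CA certificate
          certFile: ca.crt
----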
== .spec.exporters[].openTelemetry.tls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.exporters[].openTelemetry.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
@@ -1497,6 +1785,8 @@ Description::
Type::
`object`
Required::
- `mode`
@@ -1618,6 +1908,8 @@ It is ignored for other modes.
Type::
`object`
Required::
- `name`
@@ -2218,6 +2510,36 @@ If the namespace is different, the config map or the secret is copied so that it
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.networkPolicy
Description::
+
--
`networkPolicy` defines ingress network policy settings for Network Observability components isolation.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `additionalNamespaces`
| `array (string)`
| `additionalNamespaces` contains additional namespaces allowed to connect to the Network Observability namespace.
It provides flexibility in the network policy configuration, but if you need a more specific
configuration, you can disable it and install your own instead.
| `enable`
| `boolean`
| Set `enable` to `true` to deploy network policies on the namespaces used by Network Observability (main and privileged). It is disabled by default.
These network policies better isolate the Network Observability components to prevent undesired connections to them.
Either enable it, or create your own network policy for Network Observability.
|===
== .spec.processor
Description::
@@ -2375,6 +2697,12 @@ By convention, some values are forbidden. It must be greater than 1024 and diffe
| `object`
| scheduling controls how the pods are scheduled on nodes.
| `secondaryNetworks`
| `array`
| Define secondary networks to be checked for resources identification.
To guarantee correct identification, the indexed values must form a unique identifier across the cluster.
If the same index is used by several resources, those resources might be incorrectly labeled.
|===
== .spec.processor.advanced.scheduling
Description::
@@ -2440,6 +2768,52 @@ Type::
== .spec.processor.advanced.secondaryNetworks
Description::
+
--
Define secondary networks to be checked for resources identification.
To guarantee correct identification, the indexed values must form a unique identifier across the cluster.
If the same index is used by several resources, those resources might be incorrectly labeled.
--
Type::
`array`
== .spec.processor.advanced.secondaryNetworks[]
Description::
+
--
--
Type::
`object`
Required::
- `index`
- `name`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `index`
| `array (string)`
| `index` is a list of fields to use for indexing the pods. They should form a unique Pod identifier across the cluster.
Can be any of: `MAC`, `IP`, `Interface`.
Fields absent from the 'k8s.v1.cni.cncf.io/network-status' annotation must not be added to the index.
| `name`
| `string`
| `name` should match the network name as visible in the pods annotation 'k8s.v1.cni.cncf.io/network-status'.
|===
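As a sketch, a secondary network indexed by MAC address (the network name here is hypothetical; it must match the name visible in the 'k8s.v1.cni.cncf.io/network-status' pods annotation):

[source,yaml]
----
spec:
  processor:
    advanced:
      secondaryNetworks:
        - name: my-namespace/my-network-attachment # hypothetical network attachment name
          index: [MAC] # MAC alone must uniquely identify pods across the cluster
----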
== .spec.processor.kafkaConsumerAutoscaler
Description::
+
@@ -2488,7 +2862,8 @@ The names correspond to the names in Prometheus without the prefix. For example,
`namespace_egress_packets_total` shows up as `netobserv_namespace_egress_packets_total` in Prometheus.
Note that the more metrics you add, the bigger the impact on Prometheus workload resources.
Metrics enabled by default are:
`namespace_flows_total`, `node_ingress_bytes_total`, `workload_ingress_bytes_total`, `namespace_drop_packets_total` (when `PacketDrop` feature is enabled),
`namespace_flows_total`, `node_ingress_bytes_total`, `node_egress_bytes_total`, `workload_ingress_bytes_total`,
`workload_egress_bytes_total`, `namespace_drop_packets_total` (when `PacketDrop` feature is enabled),
`namespace_rtt_seconds` (when `FlowRTT` feature is enabled), `namespace_dns_latency_seconds` (when `DNSTracking` feature is enabled).
For more information, including the full list of available metrics, see: https://github.com/netobserv/network-observability-operator/blob/main/docs/Metrics.md
@@ -2533,6 +2908,8 @@ TLS configuration.
Type::
`object`
Required::
- `type`
@@ -2721,6 +3098,9 @@ SubnetLabel allows to label subnets and IPs, such as to identify cluster-externa
Type::
`object`
Required::
- `cidrs`
- `name`
@@ -2769,6 +3149,8 @@ Prometheus querying configuration, such as client settings, used in the Console
Type::
`object`
Required::
- `mode`

View File

@@ -120,6 +120,10 @@ Refer to the documentation for the list of available fields: https://docs.opensh
| `string`
| Name of the metric. In Prometheus, it is automatically prefixed with "netobserv_".
| `remap`
| `object (string)`
| Set the `remap` property to use different names for the generated metric labels than the flow fields. Use the origin flow fields as keys, and the desired label names as values.
| `type`
| `string`
| Metric type: "Counter" or "Histogram".
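As a sketch, a `FlowMetric` resource that uses `remap` to expose a flow field under a different label name (the metric name and label choice are illustrative):

[source,yaml]
----
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowMetric
metadata:
  name: flowmetric-example # hypothetical
  namespace: netobserv
spec:
  metricName: node_ingress_bytes_example_total # exposed with the "netobserv_" prefix
  type: Counter
  labels: [DstK8S_HostName]
  remap:
    DstK8S_HostName: nodeName # origin flow field as key, desired label name as value
----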

View File

@@ -9,140 +9,162 @@ The "Filter ID" column shows which related name to use when defining Quick Filte
The "Loki label" column is useful when querying Loki directly: label fields need to be selected using link:https://grafana.com/docs/loki/latest/logql/log_queries/#log-stream-selector[stream selectors].
The "Cardinality" column gives information about the implied metric cardinality if this field was to be used as a Prometheus label with the `FlowMetric` API. For more information, see the "FlowMetric API reference".
The "Cardinality" column contains information about the implied metric cardinality if this field were to be used as a Prometheus label with the `FlowMetrics` API. For more information on using this API, see the `FlowMetrics` documentation.
[cols="1,1,3,1,1,1",options="header"]
[cols="1,1,3,1,1,1,1",options="header"]
|===
| Name | Type | Description | Filter ID | Loki label | Cardinality
| Name | Type | Description | Filter ID | Loki label | Cardinality | OpenTelemetry
| `Bytes`
| number
| Number of bytes
| n/a
| no
| avoid
| bytes
| `DnsErrno`
| number
| Error number returned from DNS tracker ebpf hook function
| `dns_errno`
| no
| fine
| dns.errno
| `DnsFlags`
| number
| DNS flags for DNS record
| n/a
| no
| fine
| dns.flags
| `DnsFlagsResponseCode`
| string
| Parsed DNS header RCODEs name
| `dns_flag_response_code`
| no
| fine
| dns.responsecode
| `DnsId`
| number
| DNS record id
| `dns_id`
| no
| avoid
| dns.id
| `DnsLatencyMs`
| number
| Time between a DNS request and response, in milliseconds
| `dns_latency`
| no
| avoid
| dns.latency
| `Dscp`
| number
| Differentiated Services Code Point (DSCP) value
| `dscp`
| no
| fine
| dscp
| `DstAddr`
| string
| Destination IP address (ipv4 or ipv6)
| `dst_address`
| no
| avoid
| destination.address
| `DstK8S_HostIP`
| string
| Destination node IP
| `dst_host_address`
| no
| fine
| destination.k8s.host.address
| `DstK8S_HostName`
| string
| Destination node name
| `dst_host_name`
| no
| fine
| destination.k8s.host.name
| `DstK8S_Name`
| string
| Name of the destination Kubernetes object, such as Pod name, Service name or Node name.
| `dst_name`
| no
| careful
| destination.k8s.name
| `DstK8S_Namespace`
| string
| Destination namespace
| `dst_namespace`
| yes
| fine
| destination.k8s.namespace.name
| `DstK8S_OwnerName`
| string
| Name of the destination owner, such as Deployment name, StatefulSet name, etc.
| `dst_owner_name`
| yes
| fine
| destination.k8s.owner.name
| `DstK8S_OwnerType`
| string
| Kind of the destination owner, such as Deployment, StatefulSet, etc.
| `dst_kind`
| no
| fine
| destination.k8s.owner.kind
| `DstK8S_Type`
| string
| Kind of the destination Kubernetes object, such as Pod, Service or Node.
| `dst_kind`
| yes
| fine
| destination.k8s.kind
| `DstK8S_Zone`
| string
| Destination availability zone
| `dst_zone`
| yes
| fine
| destination.zone
| `DstMac`
| string
| Destination MAC address
| `dst_mac`
| no
| avoid
| destination.mac
| `DstPort`
| number
| Destination port
| `dst_port`
| no
| careful
| destination.port
| `DstSubnetLabel`
| string
| Destination subnet label
| `dst_subnet_label`
| no
| fine
| n/a
| `Duplicate`
| boolean
| Indicates if this flow was also captured from another interface on the same host
| n/a
| yes
| no
| fine
| n/a
| `Flags`
| number
| Logical OR combination of unique TCP flags comprised in the flow, as per RFC-9293, with additional custom flags to represent the following per-packet combinations: +
- SYN+ACK (0x100) +
- FIN+ACK (0x200) +
- RST+ACK (0x400)
| n/a
| `tcp_flags`
| no
| fine
| tcp.flags
| `FlowDirection`
| number
| Flow interpreted direction from the node observation point. Can be one of: +
@@ -152,18 +174,21 @@ The "Cardinality" column gives information about the implied metric cardinality
| `node_direction`
| yes
| fine
| host.direction
| `IcmpCode`
| number
| ICMP code
| `icmp_code`
| no
| fine
| icmp.code
| `IcmpType`
| number
| ICMP type
| `icmp_type`
| no
| fine
| icmp.type
| `IfDirections`
| number
| Flow directions from the network interface observation point. Can be one of: +
@@ -172,172 +197,208 @@ The "Cardinality" column gives information about the implied metric cardinality
| `ifdirections`
| no
| fine
| interface.directions
| `Interfaces`
| string
| Network interfaces
| `interfaces`
| no
| careful
| interface.names
| `K8S_ClusterName`
| string
| Cluster name or identifier
| `cluster_name`
| yes
| fine
| k8s.cluster.name
| `K8S_FlowLayer`
| string
| Flow layer: 'app' or 'infra'
| `flow_layer`
| no
| yes
| fine
| k8s.layer
| `NetworkEvents`
| string
| Network events flow monitoring
| `network_events`
| no
| avoid
| n/a
| `Packets`
| number
| Number of packets
| n/a
| no
| avoid
| packets
| `PktDropBytes`
| number
| Number of bytes dropped by the kernel
| n/a
| no
| avoid
| drops.bytes
| `PktDropLatestDropCause`
| string
| Latest drop cause
| `pkt_drop_cause`
| no
| fine
| drops.latestcause
| `PktDropLatestFlags`
| number
| TCP flags on last dropped packet
| n/a
| no
| fine
| drops.latestflags
| `PktDropLatestState`
| string
| TCP state on last dropped packet
| `pkt_drop_state`
| no
| fine
| drops.lateststate
| `PktDropPackets`
| number
| Number of packets dropped by the kernel
| n/a
| no
| avoid
| drops.packets
| `Proto`
| number
| L4 protocol
| `protocol`
| no
| fine
| protocol
| `SrcAddr`
| string
| Source IP address (ipv4 or ipv6)
| `src_address`
| no
| avoid
| source.address
| `SrcK8S_HostIP`
| string
| Source node IP
| `src_host_address`
| no
| fine
| source.k8s.host.address
| `SrcK8S_HostName`
| string
| Source node name
| `src_host_name`
| no
| fine
| source.k8s.host.name
| `SrcK8S_Name`
| string
| Name of the source Kubernetes object, such as Pod name, Service name or Node name.
| `src_name`
| no
| careful
| source.k8s.name
| `SrcK8S_Namespace`
| string
| Source namespace
| `src_namespace`
| yes
| fine
| source.k8s.namespace.name
| `SrcK8S_OwnerName`
| string
| Name of the source owner, such as Deployment name, StatefulSet name, etc.
| `src_owner_name`
| yes
| fine
| source.k8s.owner.name
| `SrcK8S_OwnerType`
| string
| Kind of the source owner, such as Deployment, StatefulSet, etc.
| `src_kind`
| no
| fine
| source.k8s.owner.kind
| `SrcK8S_Type`
| string
| Kind of the source Kubernetes object, such as Pod, Service or Node.
| `src_kind`
| yes
| fine
| source.k8s.kind
| `SrcK8S_Zone`
| string
| Source availability zone
| `src_zone`
| yes
| fine
| source.zone
| `SrcMac`
| string
| Source MAC address
| `src_mac`
| no
| avoid
| source.mac
| `SrcPort`
| number
| Source port
| `src_port`
| no
| careful
| source.port
| `SrcSubnetLabel`
| string
| Source subnet label
| `src_subnet_label`
| no
| fine
| n/a
| `TimeFlowEndMs`
| number
| End timestamp of this flow, in milliseconds
| n/a
| no
| avoid
| timeflowend
| `TimeFlowRttNs`
| number
| TCP Smoothed Round Trip Time (SRTT), in nanoseconds
| `time_flow_rtt`
| no
| avoid
| tcp.rtt
| `TimeFlowStartMs`
| number
| Start timestamp of this flow, in milliseconds
| n/a
| no
| avoid
| timeflowstart
| `TimeReceived`
| number
| Timestamp when this flow was received and processed by the flow collector, in seconds
| n/a
| no
| avoid
| timereceived
| `_HashId`
| string
| In conversation tracking, the conversation identifier
| `id`
| no
| avoid
| n/a
| `_RecordType`
| string
| Type of record: 'flowLog' for regular flow logs, or 'newConnection', 'heartbeat', 'endConnection' for conversation tracking
| `type`
| yes
| fine
| n/a
|===

View File

@@ -3,22 +3,43 @@
// network_observability/installing-operators.adoc
:_mod-docs-content-type: PROCEDURE
[id="network-observability-multi-tenancy{context}"]
[id="network-observability-multi-tenancy_{context}"]
= Enabling multi-tenancy in Network Observability
Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki. Access is enabled for project admins. Project admins who have limited access to some namespaces can access flows for only those namespaces.
Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces.
For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights.
.Prerequisites
* You have installed at least link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7]
* You must be logged in as a project administrator
* If you are using Loki, you have installed at least link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7].
* You must be logged in as a project administrator.
.Procedure
. Authorize reading permission to `user1` by running the following command:
* For per-tenant access, you must have the `netobserv-reader` cluster role and the `netobserv-metrics-reader` namespace role to use the developer perspective. Run the following commands for this level of access:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-reader user1
$ oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>
----
+
Now, the data is restricted to only allowed user namespaces. For example, a user that has access to a single namespace can see all the flows internal to this namespace, as well as flows going from and to this namespace.
Project admins have access to the Administrator perspective in the {product-title} console to access the Network Flows Traffic page.
[source,terminal]
----
$ oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>
----
* For cluster-wide access, non-cluster-administrators must have the `netobserv-reader`, `cluster-monitoring-view`, and `netobserv-metrics-reader` cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>
----

View File

@@ -1,10 +1,13 @@
// Module included in the following assemblies:
// * observability/network_observability/netobserv-cli-reference.adoc
// Automatically generated by './scripts/generate-doc.sh'. Do not edit, or make the NETOBSERV team aware of the editions.
:_mod-docs-content-type: REFERENCE
[id="network-observability-netobserv-cli-reference_{context}"]
= oc netobserv CLI reference
The Network Observability CLI (`oc netobserv`) is a CLI tool for capturing flow data and packet data for further analysis.
= Network Observability CLI usage
You can use the Network Observability CLI (`oc netobserv`) to pass command line arguments to capture flow data and packet data for further analysis, enable Network Observability Operator features, or pass configuration options to the eBPF agent and `flowlogs-pipeline`.
[id="cli-syntax_{context}"]
== Syntax
The basic syntax for `oc netobserv` commands is as follows:
.`oc netobserv` syntax
[source,terminal]
@@ -13,168 +16,73 @@ $ oc netobserv [<command>] [<feature_option>] [<command_options>] <1>
----
<1> Feature options can only be used with the `oc netobserv flows` command. They cannot be used with the `oc netobserv packets` command.
[id="cli-basic-commands_{context}"]
== Basic commands
[cols="3a,8a",options="header"]
.Basic commands
|===
|Command| Description
| `flows`
| Capture flows information. For subcommands, see the "Flow capture subcommands" table.
| `packets`
| Capture packets from a specific protocol or port pair, such as `netobserv packets --filter=tcp,80`. For more information about packet capture, see the "Packet capture subcommand" table.
| `cleanup`
| Command | Description
| flows
| Capture flows information. For subcommands, see the "Flows capture options" table.
| packets
| Capture packets data. For subcommands, see the "Packets capture options" table.
| cleanup
| Remove the Network Observability CLI components.
| `version`
| version
| Print the software version.
| `help`
| help
| Show help.
|===
[id="network-observability-cli-enrichment_{context}"]
== Network Observability enrichment
The Network Observability enrichment, which displays zone, node, owner, and resource names, together with the optional packet drops, DNS latencies, and Round-trip time features, can only be enabled when capturing flows. Enriched data does not appear in the packet capture pcap output file.
.Network Observability enrichment syntax
[source,terminal]
----
$ oc netobserv flows [<enrichment_options>] [<subcommands>]
----
.Network Observability enrichment options
|===
|Option| Description| Possible values| Default
| `--enable_pktdrop`
| Enable packet drop.
| `true`, `false`
| `false`
| `--enable_rtt`
| Enable round trip time.
| `true`, `false`
| `false`
| `--enable_dns`
| Enable DNS tracking.
| `true`, `false`
| `false`
| `--help`
| Show help.
| -
| -
| `--interfaces`
| Interfaces to match on the flow. For example, `"eth0,eth1"`.
| `"<interface>"`
| -
|===
[id="cli-reference-flow-capture-options_{context}"]
== Flow capture options
Flow capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering.
[id="cli-reference-flows-capture-options_{context}"]
== Flows capture options
Flows capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering.
.`oc netobserv flows` syntax
[source,terminal]
----
$ oc netobserv flows [<feature_option>] [<command_options>]
----
.Flow capture filter options
[cols="1,1,1",options="header"]
|===
|Option| Description| Possible values| Mandatory| Default
| `--enable_filter`
| Enable flow filter.
| `true`, `false`
| Yes
| `false`
| `--action`
| Action to apply on the flow.
| `Accept`, `Reject`
| Yes
| `Accept`
| `--cidr`
| CIDR to match on the flow.
| `1.1.1.0/24`, `1::100/64`, or `0.0.0.0/0`
| Yes
| `0.0.0.0/0`
| `--protocol`
| Protocol to match on the flow
| `TCP`, `UDP`, `SCTP`, `ICMP`, or `ICMPv6`
| No
| -
| `--direction`
| Direction to match on the flow
| `Ingress`, `Egress`
| No
| -
| `--dport`
| Destination port to match on the flow.
| `80`, `443`, or `49051`
| no
| -
| `--sport`
| Source port to match on the flow.
| `80`, `443`, or `49051`
| No
| -
| `--port`
| Port to match on the flow.
| `80`, `443`, or `49051`
| No
| -
| `--sport_range`
| Source port range to match on the flow.
| `80-100` or `443-445`
| No
| -
| `--dport_range`
| Destination port range to match on the flow.
| `80-100`
| No
| -
| `--port_range`
| Port range to match on the flow.
| `80-100` or `443-445`
| No
| -
| `--icmp_type`
| ICMP type to match on the flow.
| `8` or `13`
| No
| -
| `--icmp_code`
| ICMP code to match on the flow.
| `0` or `1`
| No
| -
| `--peer_ip`
| Peer IP to match on the flow.
| `1.1.1.1` or `1::1`
| No
| -
| Option | Description | Default
|--enable_pktdrop| enable packet drop | false
|--enable_dns| enable DNS tracking | false
|--enable_rtt| enable RTT tracking | false
|--enable_network_events| enable Network events monitoring | false
|--enable_filter| enable flow filter | false
|--log-level| components logs | info
|--max-time| maximum capture time | 5m
|--max-bytes| maximum capture bytes | 50000000 = 50MB
|--copy| copy the output files locally | prompt
|--direction| filter direction | n/a
|--cidr| filter CIDR | 0.0.0.0/0
|--protocol| filter protocol | n/a
|--sport| filter source port | n/a
|--dport| filter destination port | n/a
|--port| filter port | n/a
|--sport_range| filter source port range | n/a
|--dport_range| filter destination port range | n/a
|--port_range| filter port range | n/a
|--sports| filter on either of two source ports | n/a
|--dports| filter on either of two destination ports | n/a
|--ports| filter on either of two ports | n/a
|--tcp_flags| filter TCP flags | n/a
|--action| filter action | Accept
|--icmp_type| filter ICMP type | n/a
|--icmp_code| filter ICMP code | n/a
|--peer_ip| filter peer IP | n/a
|--interfaces| interfaces to monitor | n/a
|===
.Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled:
[source,terminal]
----
$ oc netobserv flows --enable_pktdrop=true --enable_rtt=true --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----
[id="cli-reference-packet-capture-options_{context}"]
== Packet capture options
== Packets capture options
You can filter on port and protocol for packet capture data.
.`oc netobserv packets` syntax
@@ -182,12 +90,34 @@ You can filter on port and protocol for packet capture data.
----
$ oc netobserv packets [<option>]
----
.Packet capture filter option
[cols="1,1,1",options="header"]
|===
|Option| Description| Mandatory| Default
| `<protocol>`,`<port>`
| Capture packets from a specific protocol and port pair. Use a comma as delimiter. For example, `tcp,80` specifies the `tcp` protocol and port `80`.
| Yes
| -
|===
| Option | Description | Default
|--log-level| components logs | info
|--max-time| maximum capture time | 5m
|--max-bytes| maximum capture bytes | 50000000 = 50MB
|--copy| copy the output files locally | prompt
|--direction| filter direction | n/a
|--cidr| filter CIDR | 0.0.0.0/0
|--protocol| filter protocol | n/a
|--sport| filter source port | n/a
|--dport| filter destination port | n/a
|--port| filter port | n/a
|--sport_range| filter source port range | n/a
|--dport_range| filter destination port range | n/a
|--port_range| filter port range | n/a
|--sports| filter on either of two source ports | n/a
|--dports| filter on either of two destination ports | n/a
|--ports| filter on either of two ports | n/a
|--tcp_flags| filter TCP flags | n/a
|--action| filter action | Accept
|--icmp_type| filter ICMP type | n/a
|--icmp_code| filter ICMP code | n/a
|--peer_ip| filter peer IP | n/a
|===
.Example running packets capture on TCP protocol and port 49051:
[source,terminal]
----
$ oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----

View File

@@ -1,37 +0,0 @@
// Module included in the following assemblies:
// * networking/network_observability/network-observability-network-policy.adoc
:_mod-docs-content-type: REFERENCE
[id="network-observability-sample-network-policy_{context}"]
= Example network policy
The following is an annotated example of a `NetworkPolicy` object for the `netobserv` namespace:
[id="network-observability-network-policy-sample_{context}"]
.Sample network policy
[source, yaml]
----
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingress
  namespace: netobserv
spec:
  podSelector: {} <1>
  ingress:
  - from:
    - podSelector: {} <2>
      namespaceSelector: <3>
        matchLabels:
          kubernetes.io/metadata.name: openshift-console
    - podSelector: {}
      namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-monitoring
  policyTypes:
  - Ingress
status: {}
----
<1> A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the `NetworkPolicy` object. In this documentation, it would be the project in which the Network Observability Operator is installed, which is the `netobserv` project.
<2> A selector that matches the pods from which the policy object allows ingress traffic. The default is that the selector matches pods in the same namespace as the `NetworkPolicy`.
<3> When the `namespaceSelector` is specified, the selector matches pods in the specified namespace.

View File

@@ -0,0 +1,87 @@
// Module included in the following assemblies:
//
// * observability/network_observability/network-observability-secondary-networks.adoc
:_mod-docs-content-type: PROCEDURE
[id="network-observability-virtualization-config_{context}"]
= Configuring virtual machine (VM) secondary network interfaces for Network Observability
You can observe network traffic on an OpenShift Virtualization setup by identifying eBPF-enriched network flows coming from VMs that are connected to secondary networks, such as through OVN-Kubernetes. Network flows coming from VMs that are connected to the default internal pod network are automatically captured by Network Observability.
.Procedure
. Get information about the virtual machine launcher pod by running the following command. This information is used in Step 5:
+
[source,terminal]
----
$ oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml
----
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
        "name": "ovn-kubernetes",
        "interface": "eth0",
        "ips": [
          "10.129.2.39"
        ],
        "mac": "0a:58:0a:81:02:27",
        "default": true,
        "dns": {}
      },
      {
        "name": "my-vms/l2-network", <1>
        "interface": "podc0f69e19ba2", <2>
        "ips": [ <3>
          "10.10.10.15"
        ],
        "mac": "02:fb:f8:00:00:12", <4>
        "dns": {}
      }]
  name: virt-launcher-fedora-aqua-fowl-13-zr2x9
  namespace: my-vms
spec:
# ...
status:
# ...
----
<1> The name of the secondary network.
<2> The network interface name of the secondary network.
<3> The list of IPs used by the secondary network.
<4> The MAC address used for the secondary network.
. In the web console, navigate to *Operators* -> *Installed Operators*.
. Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster* and then select the *YAML* tab.
. Configure `FlowCollector` based on the information you found from the additional network investigation:
+
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
# ...
  agent:
    ebpf:
      privileged: true <1>
  processor:
    advanced:
      secondaryNetworks:
      - index: <2>
          - MAC <3>
        name: my-vms/l2-network <4>
# ...
----
<1> Ensure that the eBPF agent is in `privileged` mode so that flows are collected for secondary interfaces.
<2> Define the fields to use for indexing the virtual machine launcher pods. Using the `MAC` address as the indexing field is recommended to get network flows enrichment for secondary interfaces. If MAC addresses overlap between pods, you can add more indexing fields, such as `IP` and `Interface`, for accurate enrichment.
<3> If your additional network information has a MAC address, add `MAC` to the field list.
<4> Specify the name of the network found in the `k8s.v1.cni.cncf.io/network-status` annotation, usually in the format `<namespace>/<network_attachment_definition_name>`.
. Observe VM traffic:
.. Navigate to the *Network Traffic* page.
.. Filter by *Source* IP, using the virtual machine IP found in the `k8s.v1.cni.cncf.io/network-status` annotation.
.. Verify that both the *Source* and *Destination* fields are enriched, identifying the VM launcher pods and the VM instance as owners.

View File

@@ -25,10 +25,5 @@ include::modules/network-observability-enriched-flows.adoc[leveloffset=+1]
include::modules/network-observability-configuring-FLP-sampling.adoc[leveloffset=+1]
include::modules/network-observability-configuring-quickfilters-flowcollector.adoc[leveloffset=+1]
include::modules/network-observability-SRIOV-configuration.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../networking/hardware_networks/configuring-sriov-device.adoc#cnf-creating-an-additional-sriov-network-with-vrf-plug-in_configuring-sriov-device[Creating an additional SR-IOV network attachment with the CNI VRF plugin]
include::modules/network-observability-resource-recommendations.adoc[leveloffset=+1]
include::modules/network-observability-resources-table.adoc[leveloffset=+2]

View File

@@ -33,9 +33,8 @@ include::modules/logging-creating-new-group-cluster-admin-user-role.adoc[levelof
include::modules/logging-loki-log-access.adoc[leveloffset=+1,tags=CustomAdmin;NetObservMode;!LokiMode]
include::modules/loki-deployment-sizing.adoc[leveloffset=+2]
include::modules/network-observability-lokistack-ingestion-query.adoc[leveloffset=+2]
include::modules/network-observability-multitenancy.adoc[leveloffset=+2]
include::modules/network-observability-operator-install.adoc[leveloffset=+1]
include::modules/network-observability-multitenancy.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_configuring-flow-collector-considerations"]
== Important Flow Collector configuration considerations
@@ -43,7 +42,7 @@ Once you create the `FlowCollector` instance, you can reconfigure it, but the po
* xref:../../observability/network_observability/configuring-operator.adoc#network-observability-flowcollector-kafka-config_network_observability[Configuring the Flow Collector resource with Kafka]
* xref:../../observability/network_observability/configuring-operator.adoc#network-observability-enriched-flows_network_observability[Export enriched network flow data to Kafka or IPFIX]
* xref:../../observability/network_observability/configuring-operator.adoc#network-observability-SR-IOV-config_network_observability[Configuring monitoring for SR-IOV interface traffic]
* xref:../../observability/network_observability/network-observability-secondary-networks.adoc#network-observability-SR-IOV-config_network-observability-secondary-networks[Configuring monitoring for SR-IOV interface traffic]
* xref:../../observability/network_observability/observing-network-traffic.adoc#network-observability-working-with-conversations_nw-observe-network-traffic[Working with conversation tracking]
* xref:../../observability/network_observability/observing-network-traffic.adoc#network-observability-dns-tracking_nw-observe-network-traffic[Working with DNS tracking]
* xref:../../observability/network_observability/observing-network-traffic.adoc#network-observability-packet-drops_nw-observe-network-traffic[Working with packet drops]

View File

@@ -8,8 +8,8 @@ toc::[]
As a user with the `admin` role, you can create a network policy for the `netobserv` namespace to secure inbound access to the Network Observability Operator.
include::modules/network-observability-deploy-network-policy.adoc[leveloffset=+1]
include::modules/network-observability-create-network-policy.adoc[leveloffset=+1]
include::modules/network-observability-sample-network-policy-YAML.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources

View File

@@ -12,6 +12,96 @@ The Network Observability Operator enables administrators to observe and analyze
These release notes track the development of the Network Observability Operator in the {product-title}.
For an overview of the Network Observability Operator, see xref:../../observability/network_observability/network-observability-overview.adoc#dependency-network-observability[About Network Observability Operator].
[id="network-observability-operator-release-notes-1-7_{context}"]
== Network Observability Operator 1.7.0
The following advisory is available for the Network Observability Operator 1.7.0:
* link:https://access.redhat.com/errata/RHSA-2024:8014[Network Observability Operator 1.7.0]
[id="network-observability-operator-1.7.0-features-enhancements_{context}"]
=== New features and enhancements
[id="network-observability-operator-otel-1-7_{context}"]
==== OpenTelemetry support
You can now export enriched network flows to a compatible OpenTelemetry endpoint, such as the Red{nbsp}Hat build of OpenTelemetry. For more information, see xref:../../observability/network_observability/configuring-operator.adoc#network-observability-enriched-flows_network_observability[Export enriched network flow data].
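The export is configured through the `spec.exporters` list of the `FlowCollector` resource. The following fragment is only a sketch: the collector address `otel-collector.otel.svc` and port `4317` are placeholders, and the exporter field names should be verified against the FlowCollector API specifications.

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
# ...
  exporters:
    - type: OpenTelemetry
      openTelemetry:
        targetHost: otel-collector.otel.svc # placeholder OpenTelemetry collector endpoint
        targetPort: 4317
----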
[id="network-observability-operator-developer-perspective-1-7_{context}"]
==== Network Observability Developer perspective
You can now use Network Observability in the *Developer* perspective. For more information, see xref:../../observability/network_observability/network-observability-overview.adoc#no-console-integration[{product-title} console integration].
[id="network-observability-operator-tcp-flags-filtering-1-7_{context}"]
==== TCP flags filtering
You can now use the `tcpFlags` filter to limit the volume of packets processed by the eBPF program. For more information, see xref:../../observability/network_observability/observing-network-traffic.adoc#network-observability-flowcollector-flowfilter-parameters_nw-observe-network-traffic[Flow filter configuration parameters] and xref:../../observability/network_observability/observing-network-traffic.adoc#network-observability-ebpf-flow-rule-filter_nw-observe-network-traffic[eBPF flow rule filter].
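For illustration, the following `FlowCollector` fragment sketches a flow filter that keeps only TCP traffic with the SYN flag set. The field names under `flowFilter` and the `"SYN"` value are assumptions; check them against the flow filter configuration parameters before use.

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    ebpf:
      flowFilter:
        enable: true
        action: Accept
        cidr: 0.0.0.0/0
        protocol: TCP
        tcpFlags: "SYN" # assumed value; limits eBPF processing to SYN packets
----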
[id="network-observability-virtualization_{context}"]
==== Network Observability for OpenShift Virtualization
You can observe networking patterns on an {VirtProductName} setup by identifying eBPF-enriched network flows coming from VMs that are connected to secondary networks, such as through Open Virtual Network (OVN)-Kubernetes. For more information, see xref:../../observability/network_observability/network-observability-secondary-networks.adoc#network-observability-virtualization-config_network-observability-secondary-networks[Configuring virtual machine (VM) secondary network interfaces for Network Observability].
[id="network-observability-network-policy-1-7_{context}"]
==== Network policy deploys in the FlowCollector custom resource (CR)
With this release, you can configure the `FlowCollector` CR to deploy a network policy for Network Observability. Previously, if you wanted a network policy, you had to manually create one. The option to manually create a network policy is still available. For more information, see xref:../../observability/network_observability/network-observability-network-policy.adoc#network-observability-deploy-network-policy_network_observability[Configuring an ingress network policy by using the FlowCollector custom resource].
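As a minimal sketch, the deployment can be toggled from the `FlowCollector` CR. The `networkPolicy` field names and the listed namespaces are assumptions to be verified against the FlowCollector API specifications.

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
# ...
  networkPolicy:
    enable: true # deploys a network policy for the netobserv namespace
    additionalNamespaces: ["openshift-console", "openshift-monitoring"]
----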
[id="network-observability-fips-compliance-1-7_{context}"]
==== FIPS compliance
* You can install and use the Network Observability Operator in an {product-title} cluster running in FIPS mode.
+
--
include::snippets/fips-snippet.adoc[]
--
[id="network-observability-dns-enhancements-1-7_{context}"]
==== eBPF agent enhancements
The following enhancements are available for the eBPF agent:
* If the DNS service maps to a different port than `53`, you can specify this DNS tracking port using `spec.agent.ebpf.advanced.env.DNS_TRACKING_PORT`.
* You can now use two ports for transport protocols (TCP, UDP, or SCTP) filtering rules.
* You can now filter on transport ports with a wildcard protocol by leaving the protocol field empty.
For more information, see xref:../../observability/network_observability/flowcollector-api.adoc#spec-agent-ebpf-advanced[FlowCollector API specifications].
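For example, the following `FlowCollector` fragment sketches enabling DNS tracking on a non-standard port; the port value `5353` is only an example, and the `features` list is an assumption to be checked against the API specifications.

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    ebpf:
      features:
        - DNSTracking
      advanced:
        env:
          DNS_TRACKING_PORT: "5353" # example: DNS service mapped to port 5353 instead of 53
----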
[id="network-observability-operator-1-7-bug-fixes_{context}"]
=== Bug fixes
* Previously, when using a {op-system-base} 9.2 real-time kernel, some of the webhooks did not work. Now, a fix is in place to check whether this {op-system-base} 9.2 real-time kernel is being used. If it is, a warning is displayed about the features that do not work, such as packet drop and Round-trip Time on the `s390x` architecture. The fix is in OpenShift 4.16 and later. (link:https://issues.redhat.com/browse/NETOBSERV-1808[*NETOBSERV-1808*])
* Previously, in the *Manage panels* dialog in the *Overview* tab, filtering on *total*, *bar*, *donut*, or *line* did not show a result. Now the available panels are correctly filtered. (link:https://issues.redhat.com/browse/NETOBSERV-1540[*NETOBSERV-1540*])
* Previously, under high stress, the eBPF agents could enter a state where they generated a high number of small, mostly unaggregated flows. With this fix, the aggregation process is maintained under high stress, resulting in fewer flows being created. This fix improves resource consumption not only in the eBPF agent but also in `flowlogs-pipeline` and Loki. (link:https://issues.redhat.com/browse/NETOBSERV-1564[*NETOBSERV-1564*])
* Previously, when the `workload_flows_total` metric was enabled instead of the `namespace_flows_total` metric, the health dashboard stopped showing `By namespace` flow charts. With this fix, the health dashboard now shows the flow charts when the `workload_flows_total` is enabled. (link:https://issues.redhat.com/browse/NETOBSERV-1746[*NETOBSERV-1746*])
* Previously, when you used the `FlowMetrics` API to generate a custom metric and later modified its labels, such as by adding a new label, the metric stopped populating and an error was shown in the `flowlogs-pipeline` logs. With this fix, you can modify the labels, and the error is no longer raised in the `flowlogs-pipeline` logs. (link:https://issues.redhat.com/browse/NETOBSERV-1748[*NETOBSERV-1748*])
* Previously, there was an inconsistency with the default Loki `WriteBatchSize` configuration: it was set to 100 KB in the `FlowCollector` CRD default, and 10 MB in the OLM sample or default configuration. Both are now aligned to 10 MB, which generally provides better performance and a smaller resource footprint. (link:https://issues.redhat.com/browse/NETOBSERV-1766[*NETOBSERV-1766*])
* Previously, the eBPF flow filter on ports was ignored if you did not specify a protocol. With this fix, you can set eBPF flow filters independently on ports and protocols. (link:https://issues.redhat.com/browse/NETOBSERV-1779[*NETOBSERV-1779*])
* Previously, traffic from Pods to Services was hidden from the *Topology view*. Only the return traffic from Services to Pods was visible. With this fix, that traffic is correctly displayed. (link:https://issues.redhat.com/browse/NETOBSERV-1788[*NETOBSERV-1788*])
* Previously, non-cluster administrator users that had access to Network Observability saw an error in the console plugin when they tried to filter for something that triggered auto-completion, such as a namespace. With this fix, no error is displayed, and the auto-completion returns the expected results. (link:https://issues.redhat.com/browse/NETOBSERV-1798[*NETOBSERV-1798*])
* Previously, when secondary interface support was added, the agent had to iterate multiple times to register each network namespace with netlink in order to learn about interface notifications. At the same time, unsuccessful handlers leaked file descriptors, because with the TCX hook, unlike TC, handlers must be explicitly removed when an interface goes down. Furthermore, when a network namespace was deleted, there was no Go close channel event to terminate the netlink goroutine socket, which caused Go threads to leak. Now, file descriptors and Go threads no longer leak when you create or delete pods. (link:https://issues.redhat.com/browse/NETOBSERV-1805[*NETOBSERV-1805*])
* Previously, the ICMP type and value displayed `n/a` in the *Traffic flows* table even when related data was available in the flow JSON. With this fix, the ICMP columns display the related values as expected in the flow table. (link:https://issues.redhat.com/browse/NETOBSERV-1806[*NETOBSERV-1806*])
* Previously, in the console plugin, it was not always possible to filter for unset fields, such as unset DNS latency. With this fix, filtering on unset fields is possible. (link:https://issues.redhat.com/browse/NETOBSERV-1816[*NETOBSERV-1816*])
* Previously, when you cleared filters in the OpenShift web console plugin, sometimes the filters reappeared after you navigated to another page and returned to the page with filters. With this fix, filters do not unexpectedly reappear after they are cleared. (link:https://issues.redhat.com/browse/NETOBSERV-1733[*NETOBSERV-1733*])
[id="network-observability-operator-1-7-known-issues_{context}"]
=== Known issues
* When you use the must-gather tool with Network Observability, logs are not collected when the cluster has FIPS enabled. (link:https://issues.redhat.com/browse/NETOBSERV-1830[*NETOBSERV-1830*])
* When the `spec.networkPolicy` is enabled in the `FlowCollector`, which installs a network policy on the `netobserv` namespace, it is impossible to use the `FlowMetrics` API. The network policy blocks calls to the validation webhook. As a workaround, use the following network policy:
+
[source,yaml]
----
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-hostnetwork
  namespace: netobserv
spec:
  podSelector:
    matchLabels:
      app: netobserv-operator
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/host-network: ''
  policyTypes:
  - Ingress
----
(link:https://issues.redhat.com/browse/NETOBSERV-1934[*NETOBSERV-1934*])
[id="network-observability-operator-release-notes-1-6-2_{context}"]
== Network Observability Operator 1.6.2
@@ -296,7 +386,7 @@ For more information, see xref:../../observability/network_observability/observi
[id="SR-IOV-configuration-1.4"]
==== SR-IOV support
You can now collect traffic from a cluster with Single Root I/O Virtualization (SR-IOV) device. For more information, see xref:../../observability/network_observability/configuring-operator.adoc#network-observability-SR-IOV-config_network_observability[Configuring the monitoring of SR-IOV interface traffic].
You can now collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device. For more information, see xref:../../observability/network_observability/network-observability-secondary-networks.adoc#network-observability-SR-IOV-config_network-observability-secondary-networks[Configuring the monitoring of SR-IOV interface traffic].
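Collecting SR-IOV traffic requires the eBPF agent to run in privileged mode so that it can read flows from non-default network namespaces. The following is a minimal sketch of the relevant `FlowCollector` settings; the API version shown (`v1beta1`) is an assumption and might differ depending on the Operator version you run:

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: eBPF
    ebpf:
      # Privileged mode is required to capture traffic from
      # non-default network namespaces, such as SR-IOV VFs.
      privileged: true
----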
[id="IPFIX-support-1.4"]
==== IPFIX exporter support
@@ -313,7 +403,7 @@ Network Observability Operator can now run on `s390x` architecture. Previously i
=== Bug fixes
* Previously, the Prometheus metrics exported by Network Observability were computed out of potentially duplicated network flows. In the related dashboards, from *Observe* -> *Dashboards*, this could result in potentially doubled rates. Note that dashboards from the *Network Traffic* view were not affected. Now, network flows are filtered to eliminate duplicates before metrics calculation, which results in correct traffic rates displayed in the dashboards. (link:https://issues.redhat.com/browse/NETOBSERV-1131[*NETOBSERV-1131*])
* Previously, the Network Observability Operator agents were not able to capture traffic on network interfaces when configured with Multus or SR-IOV non-default network namespaces. Now, all available network namespaces are recognized and used for capturing flows, which allows capturing traffic for SR-IOV. There are xref:../../observability/network_observability/network-observability-secondary-networks.adoc#network-observability-SR-IOV-config_network-observability-secondary-networks[configurations needed] for the `FlowCollector` and `SRIOVnetwork` custom resources to collect traffic.
(link:https://issues.redhat.com/browse/NETOBSERV-1283[*NETOBSERV-1283*])
* Previously, in the Network Observability Operator details from *Operators* -> *Installed Operators*, the `FlowCollector` *Status* field might have reported incorrect information about the state of the deployment. The status field now shows the proper conditions with improved messages. The history of events is kept, ordered by event date. (link:https://issues.redhat.com/browse/NETOBSERV-1224[*NETOBSERV-1224*])
@@ -351,7 +441,7 @@ You must switch your channel from `v1.0.x` to `stable` to receive future Operato
[id="multi-tenancy-1.3"]
==== Multi-tenancy in Network Observability
* System administrators can allow and restrict individual user access, or group access, to the flows stored in Loki. For more information, see xref:../../observability/network_observability/installing-operators.adoc#network-observability-multi-tenancy_network_observability[Multi-tenancy in Network Observability].
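One way to grant such access is to bind a reader cluster role to a user. The following is a sketch under stated assumptions: the `netobserv-reader` cluster role name and the `user1` subject are illustrative and should be checked against the roles that your Operator version provides:

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: netobserv-reader-user1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: netobserv-reader # role name assumed; verify the Operator-provided roles
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: user1 # hypothetical user
----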
[id="flow-based-dashboard-1.3"]
==== Flow-based metrics dashboard

View File

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
toc::[]
Red Hat offers cluster administrators and developers the Network Observability Operator to observe the network traffic for {product-title} clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with {product-title} information. They are available as Prometheus metrics or as logs in Loki. You can view and analyze the stored network flows information in the {product-title} console for further insight and troubleshooting.
[id="dependency-network-observability"]
== Optional dependencies of the Network Observability Operator
@@ -22,7 +22,14 @@ The Network Observability Operator provides the Flow Collector API custom resour
[id="no-console-integration"]
== {product-title} console integration
{product-title} console integration offers overview, topology view, and traffic flow tables in both *Administrator* and *Developer* perspectives.
In the *Administrator* perspective, you can find the Network Observability *Overview*, *Traffic flows*, and *Topology* views by clicking *Observe* -> *Network Traffic*. In the *Developer* perspective, you can view this information by clicking *Observe*. The Network Observability metrics dashboards in *Observe* -> *Dashboards* are only available to administrators.
[NOTE]
====
To enable multi-tenancy for the developer perspective and for administrators with limited access to namespaces, you must specify permissions by defining roles. For more information, see xref:../../observability/network_observability/installing-operators.adoc#network-observability-multi-tenancy_network_observability[Enabling multi-tenancy in Network Observability].
====
[id="network-observability-dashboards"]
=== Network Observability metrics dashboards
@@ -40,7 +47,7 @@ The {product-title} console offers the *Topology* tab which displays a graphical
[id="traffic-flow-tables"]
=== Traffic flow tables
The *Traffic flows* table view provides a view for raw flows, non-aggregated filtering options, and configurable columns. The {product-title} console offers the *Traffic flows* tab, which displays the data of the network flows and the amount of traffic.
[id="network-observability-cli"]
== Network Observability CLI

View File

@@ -0,0 +1,24 @@
:_mod-docs-content-type: ASSEMBLY
[id="network-observability-secondary-networks"]
= Secondary networks
include::_attributes/common-attributes.adoc[]
:context: network-observability-secondary-networks
toc::[]
You can configure the Network Observability Operator to collect and enrich network flow data from secondary networks, such as SR-IOV and OVN-Kubernetes.
[discrete]
[id="network-observability-secondary-network-prerequisites_{context}"]
== Prerequisites
* Access to an {product-title} cluster with an additional network interface, such as a secondary interface or an L2 network.
include::modules/network-observability-SRIOV-configuration.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../networking/hardware_networks/configuring-sriov-device.adoc#cnf-creating-an-additional-sriov-network-with-vrf-plug-in_configuring-sriov-device[Creating an additional SR-IOV network attachment with the CNI VRF plugin]
include::modules/network-observability-virtualization-configuration.adoc[leveloffset=+1]