OSDOCS-10730: NetObserv 1.8 CLI reference
OSDOCS-13258: Update FlowMetric API 1.8
OSDOCS-13258: Update FlowMetrics API 1.8
OSDOCS-13258: Flows Format Reference 1.8 update
OSDOCS-13050: Packet translation
Committed by openshift-cherrypick-robot. Commit c1a4ee348d, parent f0a223aed2.
modules/network-observability-cli-capturing-metrics.adoc (new file, 33 lines)
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * observability/network_observability/netobserv_cli/netobserv-cli-using.adoc

:_mod-docs-content-type: PROCEDURE
[id="network-observability-cli-capturing-metrics_{context}"]
= Capturing metrics

You can generate on-demand dashboards in Prometheus by using a service monitor for Network Observability.

.Prerequisites
* Install the {oc-first}.
* Install the Network Observability CLI (`oc netobserv`) plugin.

.Procedure
. Capture metrics with filters enabled by running the following command:
+
.Example command
[source,terminal]
----
$ oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----

. Open the link provided in the terminal to view the *NetObserv / On-Demand* dashboard:
+
.Example URL
[source,terminal]
----
https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli
----
+
[NOTE]
====
Features that are not enabled present as empty graphs.
====
@@ -102,7 +102,7 @@ Kafka can provide better scalability, resiliency, and high availability (for mor

| `exporters`
| `array`
| `exporters` defines additional optional exporters for custom consumption or storage.

| `kafka`
| `object`
@@ -154,7 +154,7 @@ is set to `eBPF`.

| `type`
| `string`
| `type` [deprecated *] selects the flows tracing agent. Previously, this field allowed you to select between `eBPF` and `IPFIX`.
Only `eBPF` is allowed now, so this field is deprecated and is planned for removal in a future version of the API.

|===
@@ -204,19 +204,26 @@ Otherwise it is matched as a case-sensitive string.
| `array (string)`
| List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. For an example of enabling features, see the sketch after this table. Possible values are: +

- `PacketDrop`: Enable the packets drop flows logging feature. This feature requires mounting
the kernel debug filesystem, so the eBPF agent pods must run as privileged.
If the `spec.agent.ebpf.privileged` parameter is not set, an error is reported. +

- `DNSTracking`: Enable the DNS tracking feature. +

- `FlowRTT`: Enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic. +

- `NetworkEvents`: Enable the network events monitoring feature, such as correlating flows and network policies.
This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged.
It requires using the OVN-Kubernetes network plugin with the Observability feature. +
IMPORTANT: This feature is available as a Technology Preview.

- `PacketTranslation`: Enable enriching flows with packet translation information, such as Service NAT. +

- `EbpfManager`: Unsupported *. Use eBPF Manager to manage Network Observability eBPF programs. Prerequisite: the eBPF Manager operator (or upstream bpfman operator) must be installed. +

- `UDNMapping`: Unsupported *. Enable interfaces mapping to User Defined Networks (UDN). +
This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged.
It requires using the OVN-Kubernetes network plugin with the Observability feature.
IMPORTANT: This feature is available as a Developer Preview. +

| `flowFilter`
| `object`
@@ -407,6 +414,11 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports:
| `integer`
| `icmpType`, for ICMP traffic, optionally defines the ICMP type to filter flows by.

| `peerCIDR`
| `string`
| `peerCIDR` defines the Peer IP CIDR to filter flows by.
Examples: `10.10.10.0/24` or `100:100:100:100::/64`

| `peerIP`
| `string`
| `peerIP` optionally defines the remote IP address to filter flows by.
@@ -425,7 +437,18 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports:

| `protocol`
| `string`
| `protocol` optionally defines a protocol to filter flows by. The available options are `TCP`, `UDP`, `ICMP`, `ICMPv6`, and `SCTP`.

| `rules`
| `array`
| `rules` defines a list of filtering rules on the eBPF Agents.
When filtering is enabled, by default, flows that don't match any rule are rejected.
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules.
Unsupported *.

| `sampling`
| `integer`
| `sampling` is the sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`.

| `sourcePorts`
| `integer-or-string`
@@ -437,7 +460,110 @@ To filter two ports, use a "port1,port2" in string format. For example, `ports:
| `tcpFlags`
| `string`
| `tcpFlags` optionally defines TCP flags to filter flows by.
In addition to the standard flags (RFC-9293), you can also filter by one of the three following combinations: `SYN-ACK`, `FIN-ACK`, and `RST-ACK`.

|===
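
The following sketch, based on the fields described in this table and on the `FlowCollector` example shown later in this document, enables a few of the optional eBPF features. The feature selection is illustrative; `privileged` is set because the `PacketDrop` feature requires privileged agent pods, as noted above.

.Example `FlowCollector` with optional eBPF features
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  agent:
    type: eBPF
    ebpf:
      privileged: true   # required by PacketDrop, which mounts the kernel debug filesystem
      features:
        - PacketDrop
        - DNSTracking
        - FlowRTT
----
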
== .spec.agent.ebpf.flowFilter.rules
Description::
+
--
`rules` defines a list of filtering rules on the eBPF Agents.
When filtering is enabled, by default, flows that don't match any rule are rejected.
To change the default, you can define a rule that accepts everything: `{ action: "Accept", cidr: "0.0.0.0/0" }`, and then refine with rejecting rules.
Unsupported *.
--

Type::
`array`


== .spec.agent.ebpf.flowFilter.rules[]
Description::
+
--
`EBPFFlowFilterRule` defines the desired eBPF agent configuration regarding flow filtering rule.
--

Type::
`object`


[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `action`
| `string`
| `action` defines the action to perform on the flows that match the filter. The available options are `Accept`, which is the default, and `Reject`.

| `cidr`
| `string`
| `cidr` defines the IP CIDR to filter flows by.
Examples: `10.10.10.0/24` or `100:100:100:100::/64`

| `destPorts`
| `integer-or-string`
| `destPorts` optionally defines the destination ports to filter flows by.
To filter a single port, set a single port as an integer value. For example, `destPorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example, `destPorts: "80-100"`.
To filter two ports, use a "port1,port2" in string format. For example, `destPorts: "80,100"`.

| `direction`
| `string`
| `direction` optionally defines a direction to filter flows by. The available options are `Ingress` and `Egress`.

| `icmpCode`
| `integer`
| `icmpCode`, for Internet Control Message Protocol (ICMP) traffic, optionally defines the ICMP code to filter flows by.

| `icmpType`
| `integer`
| `icmpType`, for ICMP traffic, optionally defines the ICMP type to filter flows by.

| `peerCIDR`
| `string`
| `peerCIDR` defines the Peer IP CIDR to filter flows by.
Examples: `10.10.10.0/24` or `100:100:100:100::/64`

| `peerIP`
| `string`
| `peerIP` optionally defines the remote IP address to filter flows by.
Example: `10.10.10.10`.

| `pktDrops`
| `boolean`
| `pktDrops` optionally filters only flows containing packet drops.

| `ports`
| `integer-or-string`
| `ports` optionally defines the ports to filter flows by. It is used both for source and destination ports.
To filter a single port, set a single port as an integer value. For example, `ports: 80`.
To filter a range of ports, use a "start-end" range in string format. For example, `ports: "80-100"`.
To filter two ports, use a "port1,port2" in string format. For example, `ports: "80,100"`.

| `protocol`
| `string`
| `protocol` optionally defines a protocol to filter flows by. The available options are `TCP`, `UDP`, `ICMP`, `ICMPv6`, and `SCTP`.

| `sampling`
| `integer`
| `sampling` is the sampling rate for the matched flows, overriding the global sampling defined at `spec.agent.ebpf.sampling`.

| `sourcePorts`
| `integer-or-string`
| `sourcePorts` optionally defines the source ports to filter flows by.
To filter a single port, set a single port as an integer value. For example, `sourcePorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example, `sourcePorts: "80-100"`.
To filter two ports, use a "port1,port2" in string format. For example, `sourcePorts: "80,100"`.

| `tcpFlags`
| `string`
| `tcpFlags` optionally defines TCP flags to filter flows by.
In addition to the standard flags (RFC-9293), you can also filter by one of the three following combinations: `SYN-ACK`, `FIN-ACK`, and `RST-ACK`.

|===
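
As an illustration of the rule fields documented above, the following sketch accepts all traffic by default and then rejects TCP flows from one subnet. The CIDR of the rejecting rule is a placeholder.

.Example `FlowCollector` flow filter rules
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    ebpf:
      flowFilter:
        rules:
        - action: Accept       # catch-all rule: accept everything by default
          cidr: 0.0.0.0/0
        - action: Reject       # then refine with rejecting rules
          cidr: 10.10.10.0/24  # placeholder subnet
          protocol: TCP
----
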
== .spec.agent.ebpf.metrics

@@ -537,7 +663,7 @@ If set to `true`, the `providedCaFile` field is ignored.
| Select the type of TLS configuration: +

- `Disabled` (default) to not configure TLS for the endpoint.
- `Provided` to manually provide a cert file and a key file. Unsupported *.
- `Auto` to use the {product-title} auto-generated certificate using annotations.

|===
@@ -937,7 +1063,7 @@ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-co
Description::
+
--
`exporters` defines additional optional exporters for custom consumption or storage.
--

Type::
@@ -979,7 +1105,7 @@ Required::

| `type`
| `string`
| `type` selects the type of exporters. The available options are `Kafka`, `IPFIX`, and `OpenTelemetry`.

|===
== .spec.exporters[].ipfix
@@ -1041,7 +1167,7 @@ Required::

| `sasl`
| `object`
| SASL authentication configuration. Unsupported *.

| `tls`
| `object`
@@ -1056,7 +1182,7 @@ Required::
Description::
+
--
SASL authentication configuration. Unsupported *.
--

Type::
@@ -1552,7 +1678,7 @@ Required::

| `sasl`
| `object`
| SASL authentication configuration. Unsupported *.

| `tls`
| `object`
@@ -1567,7 +1693,7 @@ Required::
Description::
+
--
SASL authentication configuration. Unsupported *.
--

Type::
@@ -1953,7 +2079,7 @@ Type::

- `Forward` forwards the user token for authorization. +

- `Host` [deprecated *] - uses the local pod service account to authenticate to Loki. +

When using the Loki Operator, this must be set to `Forward`.

@@ -2539,7 +2665,7 @@ configuration, you can disable it and install your own instead.
| `boolean`
| Set `enable` to `true` to deploy network policies on the namespaces used by Network Observability (main and privileged). It is disabled by default.
These network policies better isolate the Network Observability components to prevent undesired connections to them.
To increase the security of connections, enable this option or create your own network policy.

|===
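
A minimal sketch of enabling this option follows. It assumes that the option is exposed as `spec.networkPolicy.enable`, which is not shown in this excerpt; verify the exact path against your `FlowCollector` API version.

.Example network policy deployment
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  networkPolicy:
    enable: true  # assumed field path; deploys network policies on the main and privileged namespaces
----
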

== .spec.processor
@@ -2575,6 +2701,18 @@ such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
| `string`
| `clusterName` is the name of the cluster to appear in the flows data. This is useful in a multi-cluster context. When using {product-title}, leave empty to make it automatically determined.

| `deduper`
| `object`
| `deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage.
Unsupported *.

| `filters`
| `array`
| `filters` lets you define custom filters to limit the amount of generated flows.
These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as filtering by Kubernetes namespace,
but with a lesser improvement in performance.
Unsupported *.

| `imagePullPolicy`
| `string`
| `imagePullPolicy` is the Kubernetes pull policy for the image defined above.
@@ -2605,13 +2743,13 @@ This setting is ignored when Kafka is disabled.
| `string`
| `logTypes` defines the desired record types to generate. Possible values are: +

- `Flows` to export regular network flows. This is the default. +

- `Conversations` to generate events for started conversations, ended conversations, as well as periodic "tick" updates. +

- `EndedConversations` to generate only ended conversations events. +

- `All` to generate both network flows and all conversations events. It is not recommended due to the impact on resource footprint. +

| `metrics`
@@ -2667,7 +2805,7 @@ This delay is ignored when a FIN packet is collected for TCP flows (see `convers

| `dropUnusedFields`
| `boolean`
| `dropUnusedFields` [deprecated *] this setting is not used anymore.

| `enableKubeProbes`
| `boolean`
@@ -2700,7 +2838,7 @@ By convention, some values are forbidden. It must be greater than 1024 and diffe

| `secondaryNetworks`
| `array`
| Defines secondary networks to be checked for resources identification.
To guarantee a correct identification, indexed values must form a unique identifier across the cluster.
If the same index is used by several resources, those resources might be incorrectly labeled.

@@ -2773,9 +2911,8 @@ Type::
Description::
+
--
Defines secondary networks to be checked for resources identification.
To guarantee a correct identification, indexed values must form a unique identifier across the cluster. If the same index is used by several resources, those resources might be incorrectly labeled.
--

Type::
@@ -2814,6 +2951,133 @@ Fields absent from the 'k8s.v1.cni.cncf.io/network-status' annotation must not b
| `string`
| `name` should match the network name as visible in the pods annotation 'k8s.v1.cni.cncf.io/network-status'.

|===

== .spec.processor.deduper
Description::
+
--
`deduper` allows you to sample or drop flows identified as duplicates, in order to save on resource usage.
Unsupported *.
--

Type::
`object`


[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `mode`
| `string`
| Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication because the Agent cannot de-duplicate the same flows reported from different nodes. +

- Use `Drop` to drop every flow considered a duplicate, saving more on resource usage but potentially losing some information, such as the network interfaces used by the peer, or network events. +

- Use `Sample` to randomly keep only one flow in 50, which is the default, among the ones considered duplicates. This is a compromise between dropping every duplicate and keeping every duplicate. This sampling action comes in addition to the Agent-based sampling. If both the Agent and Processor sampling values are `50`, the combined sampling is 1:2500. +

- Use `Disabled` to turn off Processor-based de-duplication. +

| `sampling`
| `integer`
| `sampling` is the sampling rate when the deduper `mode` is `Sample`.

|===
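
For reference, the following is a minimal sketch of the de-duplication settings described above, using the documented default sampling value.

.Example Processor de-duplication settings
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  processor:
    deduper:
      mode: Sample   # one of Drop, Sample, Disabled
      sampling: 50   # keep 1 flow in 50 among detected duplicates; with an Agent sampling of 50, the combined sampling is 1:2500
----
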

== .spec.processor.filters
Description::
+
--
`filters` lets you define custom filters to limit the amount of generated flows.
These filters provide more flexibility than the eBPF Agent filters (in `spec.agent.ebpf.flowFilter`), such as filtering by Kubernetes namespace,
but with a lesser improvement in performance.
Unsupported *.
--

Type::
`array`


== .spec.processor.filters[]
Description::
+
--
`FLPFilterSet` defines the desired configuration for FLP-based filtering satisfying all conditions.
--

Type::
`object`


[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `allOf`
| `array`
| `filters` is a list of matches that must all be satisfied in order to remove a flow.

| `outputTarget`
| `string`
| If specified, these filters only target a single output: `Loki`, `Metrics`, or `Exporters`. By default, all outputs are targeted.

| `sampling`
| `integer`
| `sampling` is an optional sampling rate to apply to this filter.

|===
== .spec.processor.filters[].allOf
Description::
+
--
`filters` is a list of matches that must all be satisfied in order to remove a flow.
--

Type::
`array`


== .spec.processor.filters[].allOf[]
Description::
+
--
`FLPSingleFilter` defines the desired configuration for a single FLP-based filter.
--

Type::
`object`

Required::
- `field`
- `matchType`


[cols="1,1,1",options="header"]
|===
| Property | Type | Description

| `field`
| `string`
| Name of the field to filter on.
Refer to the documentation for the list of available fields: https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc.

| `matchType`
| `string`
| Type of matching to apply.

| `value`
| `string`
| Value to filter on. When `matchType` is `Equal` or `NotEqual`, you can use field injection with `$(SomeField)` to refer to any other field of the flow.

|===
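
To illustrate the fields above, the following sketch removes flows whose source namespace is `netobserv`, for the Loki output only. The field name `SrcK8S_Namespace` is an assumption for illustration; check the linked field list for the exact name to use.

.Example Processor filter
[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  processor:
    filters:
    - allOf:                      # all matches must be satisfied for the flow to be removed
      - field: SrcK8S_Namespace   # assumed field name, see the flows format reference
        matchType: Equal
        value: netobserv
      outputTarget: Loki          # only affect the Loki output
----
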

== .spec.processor.kafkaConsumerAutoscaler
Description::
@@ -2865,7 +3129,8 @@ Note that the more metrics you add, the bigger is the impact on Prometheus workl
Metrics enabled by default are:
`namespace_flows_total`, `node_ingress_bytes_total`, `node_egress_bytes_total`, `workload_ingress_bytes_total`,
`workload_egress_bytes_total`, `namespace_drop_packets_total` (when `PacketDrop` feature is enabled),
`namespace_rtt_seconds` (when `FlowRTT` feature is enabled), `namespace_dns_latency_seconds` (when `DNSTracking` feature is enabled),
`namespace_network_policy_events_total` (when `NetworkEvents` feature is enabled).
For more information, including the full list of available metrics, see: https://github.com/netobserv/network-observability-operator/blob/main/docs/Metrics.md

| `server`
@@ -2936,7 +3201,7 @@ If set to `true`, the `providedCaFile` field is ignored.
| Select the type of TLS configuration: +

- `Disabled` (default) to not configure TLS for the endpoint.
- `Provided` to manually provide a cert file and a key file. Unsupported *.
- `Auto` to use the {product-title} auto-generated certificate using annotations.

|===

@@ -107,6 +107,11 @@ When set to `Egress`, it is equivalent to adding the regular expression filter o
be used to eliminate duplicates: `Duplicate != "true"` and `FlowDirection = "0"`.
Refer to the documentation for the list of available fields: https://docs.openshift.com/container-platform/latest/observability/network_observability/json-flows-format-reference.html.

| `flatten`
| `array (string)`
| `flatten` is a list of list-type fields that must be flattened, such as Interfaces and NetworkEvents. Flattened fields generate one metric per item in that field.
For instance, when flattening `Interfaces` on a bytes counter, a flow having Interfaces [br-ex, ens5] increases one counter for `br-ex` and another for `ens5`.

| `labels`
| `array (string)`
| `labels` is a list of fields that should be used as Prometheus labels, also known as dimensions.

@@ -1,7 +1,7 @@
// Automatically generated by 'hack/asciidoc-flows-gen.sh'. Do not edit, or make the NETOBSERV team aware of the editions.
:_mod-docs-content-type: REFERENCE
[id="network-observability-flows-format_{context}"]
= Network Flows format reference

This is the specification of the network flows format. That format is used when a Kafka exporter is configured, for Prometheus metrics labels, as well as internally for the Loki store.

@@ -9,7 +9,7 @@ The "Filter ID" column shows which related name to use when defining Quick Filte

The "Loki label" column is useful when querying Loki directly: label fields need to be selected using link:https://grafana.com/docs/loki/latest/logql/log_queries/#log-stream-selector[stream selectors].

The "Cardinality" column gives information about the implied metric cardinality if this field were used as a Prometheus label with the `FlowMetrics` API. Refer to the `FlowMetrics` documentation for more information on using this API.

[cols="1,1,3,1,1,1,1",options="header"]
@@ -99,6 +99,13 @@ The "Cardinality" column contains information about the implied metric cardinali
| yes
| fine
| destination.k8s.namespace.name
| `DstK8S_NetworkName`
| string
| Destination network name
| `dst_network`
| no
| fine
| n/a
| `DstK8S_OwnerName`
| string
| Name of the destination owner, such as Deployment name, StatefulSet name, etc.
@@ -156,14 +163,14 @@ The "Cardinality" column contains information about the implied metric cardinali
| fine
| n/a
| `Flags`
| string[]
| List of TCP flags comprised in the flow, according to RFC-9293, with additional custom flags to represent the following per-packet combinations: +
- SYN_ACK +
- FIN_ACK +
- RST_ACK
| `tcp_flags`
| no
| careful
| tcp.flags
| `FlowDirection`
| number
@@ -190,7 +197,7 @@ The "Cardinality" column contains information about the implied metric cardinali
| fine
| icmp.type
| `IfDirections`
| number[]
| Flow directions from the network interface observation point. Can be one of: +
- 0: Ingress (interface incoming traffic) +
- 1: Egress (interface outgoing traffic)
@@ -199,7 +206,7 @@ The "Cardinality" column contains information about the implied metric cardinali
| fine
| interface.directions
| `Interfaces`
| string[]
| Network interfaces
| `interfaces`
| no
@@ -220,8 +227,14 @@ The "Cardinality" column contains information about the implied metric cardinali
| fine
| k8s.layer
| `NetworkEvents`
| object[]
| Network events, such as network policy actions, composed of nested fields: +
- Feature (such as "acl" for network policies) +
- Type (such as an "AdminNetworkPolicy") +
- Namespace (namespace where the event applies, if any) +
- Name (name of the resource that triggered the event) +
- Action (such as "allow" or "drop") +
- Direction (Ingress or Egress)
| `network_events`
| no
| avoid
@@ -229,7 +242,7 @@ The "Cardinality" column contains information about the implied metric cardinali
| `Packets`
| number
| Number of packets
| `pkt_drop_cause`
| no
| avoid
| packets
@@ -275,6 +288,13 @@ The "Cardinality" column contains information about the implied metric cardinali
| no
| fine
| protocol
| `Sampling`
| number
| Sampling rate used for this flow
| n/a
| no
| fine
| n/a
| `SrcAddr`
| string
| Source IP address (ipv4 or ipv6)
@@ -310,6 +330,13 @@ The "Cardinality" column contains information about the implied metric cardinali
| yes
| fine
| source.k8s.namespace.name
| `SrcK8S_NetworkName`
| string
| Source network name
| `src_network`
| no
| fine
| n/a
| `SrcK8S_OwnerName`
| string
| Name of the source owner, such as Deployment name, StatefulSet name, etc.
@@ -387,6 +414,48 @@ The "Cardinality" column contains information about the implied metric cardinali
| no
| avoid
| timereceived
| `Udns`
| string[]
| List of User Defined Networks
| `udns`
| no
| careful
| n/a
| `XlatDstAddr`
| string
| Packet translation destination address
| `xlat_dst_address`
| no
| avoid
| n/a
| `XlatDstPort`
| number
| Packet translation destination port
| `xlat_dst_port`
| no
| careful
| n/a
| `XlatSrcAddr`
| string
| Packet translation source address
| `xlat_src_address`
| no
| avoid
| n/a
| `XlatSrcPort`
| number
| Packet translation source port
| `xlat_src_port`
| no
| careful
| n/a
| `ZoneId`
| number
| Packet translation zone ID
| `xlat_zone_id`
| no
| avoid
| n/a
| `_HashId`
| string
| In conversation tracking, the conversation identifier
@@ -396,9 +465,9 @@ The "Cardinality" column contains information about the implied metric cardinali
| n/a
| `_RecordType`
| string
| Type of record: `flowLog` for regular flow logs, or `newConnection`, `heartbeat`, `endConnection` for conversation tracking
| `type`
| yes
| fine
| n/a
|===
@@ -1,13 +1,14 @@
// Automatically generated by './scripts/generate-doc.sh'. Do not edit, or make the NETOBSERV team aware of the editions.
:_mod-docs-content-type: REFERENCE

[id="network-observability-netobserv-cli-reference_{context}"]
= Network Observability CLI usage

You can use the Network Observability CLI (`oc netobserv`) to pass command line arguments to capture flow data, packet data, and metrics for further analysis, and to enable features supported by the Network Observability Operator.

[id="cli-syntax_{context}"]
== Syntax
The basic syntax for `oc netobserv` commands is as follows:

.`oc netobserv` syntax
[source,terminal]
@@ -26,6 +27,14 @@ $ oc netobserv [<command>] [<feature_option>] [<command_options>] <1>
| Capture flows information. For subcommands, see the "Flows capture options" table.
| packets
| Capture packets data. For subcommands, see the "Packets capture options" table.
| metrics
| Capture metrics data. For subcommands, see the "Metrics capture options" table.
| follow
| Follow collector logs when running in the background.
| stop
| Stop collection by removing the agent daemonset.
| copy
| Copy the collector-generated files locally.
| cleanup
| Remove the Network Observability CLI components.
| version
@@ -46,44 +55,52 @@ $ oc netobserv flows [<feature_option>] [<command_options>]
[cols="1,1,1",options="header"]
|===
| Option | Description | Default
|--enable_all| enable all eBPF features | false
|--enable_dns| enable DNS tracking | false
|--enable_network_events| enable network events monitoring | false
|--enable_pkt_translation| enable packet translation | false
|--enable_pkt_drop| enable packet drop | false
|--enable_rtt| enable RTT tracking | false
|--enable_udn_mapping| enable User Defined Network mapping | false
|--get-subnets| get subnets information | false
|--background| run in background | false
|--copy| copy the output files locally | prompt
|--log-level| components logs | info
|--max-time| maximum capture time | 5m
|--max-bytes| maximum capture bytes | 50000000 = 50MB
|--action| filter action | Accept
|--cidr| filter CIDR | 0.0.0.0/0
|--direction| filter direction | –
|--dport| filter destination port | –
|--dport_range| filter destination port range | –
|--dports| filter on either of two destination ports | –
|--drops| filter flows with only dropped packets | false
|--icmp_code| filter ICMP code | –
|--icmp_type| filter ICMP type | –
|--node-selector| capture on specific nodes | –
|--peer_ip| filter peer IP | –
|--peer_cidr| filter peer CIDR | –
|--port_range| filter port range | –
|--port| filter port | –
|--ports| filter on either of two ports | –
|--protocol| filter protocol | –
|--regexes| filter flows using regular expression | –
|--sport_range| filter source port range | –
|--sport| filter source port | –
|--sports| filter on either of two source ports | –
|--tcp_flags| filter TCP flags | –
|--interfaces| interfaces to monitor | –
|===

.Example running flows capture on TCP protocol and port 49051 with PacketDrop and RTT features enabled:
[source,terminal]
----
$ oc netobserv flows --enable_pkt_drop --enable_rtt --enable_filter --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----

[id="cli-reference-packet-capture-options_{context}"]
== Packets capture options
You can filter packets capture data by using the same filters as for flows capture.
Certain features, such as packet drops, DNS, RTT, and network events, are only available for flows and metrics capture.

.`oc netobserv packets` syntax
[source,terminal]
@@ -93,27 +110,32 @@ $ oc netobserv packets [<option>]
[cols="1,1,1",options="header"]
|===
| Option | Description | Default
|--background| run in background | false
|--copy| copy the output files locally | prompt
|--log-level| components logs | info
|--max-time| maximum capture time | 5m
|--max-bytes| maximum capture bytes | 50000000 = 50MB
|--action| filter action | Accept
|--cidr| filter CIDR | 0.0.0.0/0
|--direction| filter direction | –
|--dport| filter destination port | –
|--dport_range| filter destination port range | –
|--dports| filter on either of two destination ports | –
|--drops| filter flows with only dropped packets | false
|--icmp_code| filter ICMP code | –
|--icmp_type| filter ICMP type | –
|--node-selector| capture on specific nodes | –
|--peer_ip| filter peer IP | –
|--peer_cidr| filter peer CIDR | –
|--port_range| filter port range | –
|--port| filter port | –
|--ports| filter on either of two ports | –
|--protocol| filter protocol | –
|--regexes| filter flows using regular expression | –
|--sport_range| filter source port range | –
|--sport| filter source port | –
|--sports| filter on either of two source ports | –
|--tcp_flags| filter TCP flags | –
|===

.Example running packets capture on TCP protocol and port 49051:
@@ -121,3 +143,52 @@ $ oc netobserv packets [<option>]
----
$ oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----
[id="cli-reference-metrics-capture-options_{context}"]
== Metrics capture options
You can enable features and use filters on metrics capture, the same as for flows capture. The generated graphs are populated accordingly in the dashboard.

.`oc netobserv metrics` syntax
[source,terminal]
----
$ oc netobserv metrics [<option>]
----
[cols="1,1,1",options="header"]
|===
| Option | Description | Default
|--enable_all| enable all eBPF features | false
|--enable_dns| enable DNS tracking | false
|--enable_network_events| enable network events monitoring | false
|--enable_pkt_translation| enable packet translation | false
|--enable_pkt_drop| enable packet drop | false
|--enable_rtt| enable RTT tracking | false
|--enable_udn_mapping| enable User Defined Network mapping | false
|--get-subnets| get subnets information | false
|--action| filter action | Accept
|--cidr| filter CIDR | 0.0.0.0/0
|--direction| filter direction | –
|--dport| filter destination port | –
|--dport_range| filter destination port range | –
|--dports| filter on either of two destination ports | –
|--drops| filter flows with only dropped packets | false
|--icmp_code| filter ICMP code | –
|--icmp_type| filter ICMP type | –
|--node-selector| capture on specific nodes | –
|--peer_ip| filter peer IP | –
|--peer_cidr| filter peer CIDR | –
|--port_range| filter port range | –
|--port| filter port | –
|--ports| filter on either of two ports | –
|--protocol| filter protocol | –
|--regexes| filter flows using regular expression | –
|--sport_range| filter source port range | –
|--sport| filter source port | –
|--sports| filter on either of two source ports | –
|--tcp_flags| filter TCP flags | –
|--interfaces| interfaces to monitor | –
|===

.Example running metrics capture for TCP drops
[source,terminal]
----
$ oc netobserv metrics --enable_pkt_drop --enable_filter --protocol=TCP
----
@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// network_observability/observing-network-traffic.adoc

:_mod-docs-content-type: CONCEPT
[id="network-observability-packet-translation-overview_{context}"]
= Endpoint translation (xlat)

You can gain visibility into the endpoints serving traffic in a consolidated view by using Network Observability and extended Berkeley Packet Filter (eBPF). Typically, when traffic flows through a service, egressIP, or load balancer, the traffic flow information is abstracted as it is routed to one of the available pods. If you try to get information about the traffic, you can only view service-related information, such as the service IP and port, and not information about the specific pod that is serving the request. Often, the information for both the service traffic and the virtual service endpoint is captured as two separate flows, which complicates troubleshooting.

To solve this, endpoint xlat can help in the following ways:

- Capture the network flows at the kernel level, which has a minimal impact on performance.
- Enrich the network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request.

As network packets are processed, the eBPF hook enriches flow logs with metadata about the translated endpoint, which includes the following pieces of information that you can view in the *Network Traffic* page in a single row:

- Source Pod IP
- Source Port
- Destination Pod IP
- Destination Port
- link:https://lwn.net/Articles/370152/#:~:text=A%20zone%20is%20simply%20a,to%20seperate%20conntrack%20defragmentation%20queues.[Conntrack Zone ID]

modules/network-observability-packet-translation.adoc (new file, 44 lines)
@@ -0,0 +1,44 @@
// Module included in the following assemblies:
//
// * network_observability/observing-network-traffic.adoc

:_mod-docs-content-type: PROCEDURE
[id="network-observability-packet-translation_{context}"]
= Working with endpoint translation (xlat)

You can use Network Observability and eBPF to enrich network flows from a Kubernetes service with translated endpoint information, gaining insight into the endpoints serving traffic.

.Procedure
. In the web console, navigate to *Operators* -> *Installed Operators*.
. In the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster*, and then select the *YAML* tab.
. Configure the `FlowCollector` custom resource for `PacketTranslation`, for example:
+
[id="network-observability-flowcollector-configuring-packet-translation_{context}"]
.Example `FlowCollector` configuration
[source, yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  agent:
    type: eBPF
    ebpf:
      features:
        - PacketTranslation <1>
----
<1> You can start enriching network flows with translated packet information by listing the `PacketTranslation` parameter in the `spec.agent.ebpf.features` specification list.

.Example filtering
When you refresh the *Network Traffic* page, you can filter for information about translated packets:

. Filter the network flow data based on *Destination kind: Service*.
. You can see the *xlat* column, which distinguishes where translated information is displayed, and the following default columns:

* *Xlat Zone ID*
* *Xlat Src Kubernetes Object*
* *Xlat Dst Kubernetes Object*

You can manage the display of additional *xlat* columns in *Manage columns*.
@@ -10,6 +10,7 @@ You can visualize and filter the flows and packets data directly in the terminal

include::modules/network-observability-cli-capturing-flows.adoc[leveloffset=+1]
include::modules/network-observability-cli-capturing-packets.adoc[leveloffset=+1]
include::modules/network-observability-cli-capturing-metrics.adoc[leveloffset=+1]
include::modules/network-observability-netobserv-cli-cleaning.adoc[leveloffset=+1]

[role=_additional_resources]

@@ -52,6 +52,8 @@ include::modules/network-observability-RTT.adoc[leveloffset=+2]
include::modules/network-observability-histogram-trafficflow.adoc[leveloffset=+2]
include::modules/network-observability-working-with-zones.adoc[leveloffset=+2]
include::modules/network-observability-filtering-ebpf-rule.adoc[leveloffset=+2]
include::modules/network-observability-packet-translation-overview.adoc[leveloffset=+2]
include::modules/network-observability-packet-translation.adoc[leveloffset=+2]

//Topology
include::modules/network-observability-topology.adoc[leveloffset=+1]