diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 34e70601ae..c871e81912 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3085,6 +3085,8 @@ Topics: File: network-observability-operator-monitoring - Name: Scheduling resources File: network-observability-scheduling-resources + - Name: Secondary networks + File: network-observability-secondary-networks - Name: Network Observability CLI Dir: netobserv_cli Topics: diff --git a/modules/network-observability-SRIOV-configuration.adoc b/modules/network-observability-SRIOV-configuration.adoc index e6845c3392..4545d57a3b 100644 --- a/modules/network-observability-SRIOV-configuration.adoc +++ b/modules/network-observability-SRIOV-configuration.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * network_observability/configuring-operator.adoc +// * observability/network_observability/network-observability-secondary-networks.adoc :_mod-docs-content-type: PROCEDURE [id="network-observability-SR-IOV-config_{context}"] @@ -16,7 +16,7 @@ In order to collect traffic from a cluster with a Single Root I/O Virtualization . Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*. . Select *cluster* and then select the *YAML* tab. . Configure the `FlowCollector` custom resource. A sample configuration is as follows: - ++ .Configure `FlowCollector` for SR-IOV monitoring [source,yaml] ---- diff --git a/modules/network-observability-cli-capturing-flows.adoc b/modules/network-observability-cli-capturing-flows.adoc index 6f101c6106..b4911c4b8d 100644 --- a/modules/network-observability-cli-capturing-flows.adoc +++ b/modules/network-observability-cli-capturing-flows.adoc @@ -25,6 +25,7 @@ $ oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --pro ---- live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once ---- +. 
Use the *PageUp* and *PageDown* keys to toggle between *None*, *Resource*, *Zone*, *Host*, *Owner* and *all of the above*. . To stop capturing, press kbd:[Ctrl+C]. The data that was captured is written to two separate files in an `./output` directory located in the same path used to install the CLI. . View the captured data in the `./output/flow/.json` JSON file, which contains JSON arrays of the captured data. + diff --git a/modules/network-observability-cli-capturing-packets.adoc b/modules/network-observability-cli-capturing-packets.adoc index 0c2482b6c0..08d1929e7f 100644 --- a/modules/network-observability-cli-capturing-packets.adoc +++ b/modules/network-observability-cli-capturing-packets.adoc @@ -16,7 +16,7 @@ You can capture packets using the Network Observability CLI. + [source,terminal] ---- -$ oc netobserv packets tcp,80 +$ oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051 ---- . Add filters to the `live table filter` prompt in the terminal to refine the incoming packets. An example filter is as follows: + @@ -24,6 +24,7 @@ $ oc netobserv packets tcp,80 ---- live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once ---- +. Use the *PageUp* and *PageDown* keys to toggle between *None*, *Resource*, *Zone*, *Host*, *Owner* and *all of the above*. . To stop capturing, press kbd:[Ctrl+C]. . View the captured data, which is written to a single file in an `./output/pcap` directory located in the same path that was used to install the CLI: .. The `./output/pcap/.pcap` file can be opened with Wireshark. 
\ No newline at end of file diff --git a/modules/network-observability-create-network-policy.adoc b/modules/network-observability-create-network-policy.adoc index d8c97aed88..b28275d29f 100644 --- a/modules/network-observability-create-network-policy.adoc +++ b/modules/network-observability-create-network-policy.adoc @@ -6,7 +6,54 @@ :_mod-docs-content-type: PROCEDURE [id="network-observability-network-policy_{context}"] = Creating a network policy for Network Observability -You might need to create a network policy to secure ingress traffic to the `netobserv` namespace. In the web console, you can create a network policy using the form view. +If you want to further customize the network policies for the `netobserv` and `netobserv-privileged` namespaces, you must disable the managed installation of the policy from the `FlowCollector` CR, and create your own. You can use the network policy resources that are enabled from the `FlowCollector` CR as a starting point for the procedure that follows: + +.Example `netobserv` network policy +[source,yaml] +---- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +spec: + ingress: + - from: + - podSelector: {} + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: netobserv-privileged + - from: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: openshift-console + ports: + - port: 9001 + protocol: TCP + - from: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: openshift-monitoring + podSelector: {} + policyTypes: + - Ingress +---- + +.Example `netobserv-privileged` network policy +[source,yaml] +---- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: netobserv + namespace: netobserv-privileged +spec: + ingress: + - from: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: openshift-monitoring + podSelector: {} + policyTypes: + - Ingress +---- .Procedure . Navigate to *Networking* -> *NetworkPolicies*. 
diff --git a/modules/network-observability-deploy-network-policy.adoc b/modules/network-observability-deploy-network-policy.adoc new file mode 100644 index 0000000000..82255c8701 --- /dev/null +++ b/modules/network-observability-deploy-network-policy.adoc @@ -0,0 +1,40 @@ +// Module included in the following assemblies: + +// * networking/network_observability/network-observability-network-policy.adoc + + +:_mod-docs-content-type: PROCEDURE +[id="network-observability-deploy-network-policy_{context}"] += Configuring an ingress network policy by using the FlowCollector custom resource +You can configure the `FlowCollector` custom resource (CR) to deploy an ingress network policy for Network Observability by setting the `spec.networkPolicy.enable` specification to `true`. By default, the specification is `false`. + +If you have installed Loki, Kafka, or any exporter in a different namespace that also has a network policy, you must ensure that the Network Observability components can communicate with them. Consider the following connections in your setup: + + * Connection to Loki (as defined in the `FlowCollector` CR `spec.loki` parameter) + * Connection to Kafka (as defined in the `FlowCollector` CR `spec.kafka` parameter) + * Connection to any exporter (as defined in the `FlowCollector` CR `spec.exporters` parameter) + * If you are using Loki and including it in the policy target, connection to an external object storage (as defined in your `LokiStack` related secret) + +.Procedure +. In the web console, go to the *Operators* -> *Installed Operators* page. +. Under the *Provided APIs* heading for *Network Observability*, select *Flow Collector*. +. Select *cluster* and then select the *YAML* tab. +. Configure the `FlowCollector` CR.
A sample configuration is as follows: ++ +[id="network-observability-flowcollector-configuring-network-policy_{context}"] +.Example `FlowCollector` CR for network policy +[source, yaml] +---- +apiVersion: flows.netobserv.io/v1beta2 +kind: FlowCollector +metadata: + name: cluster +spec: + namespace: netobserv + networkPolicy: + enable: true <1> + additionalNamespaces: ["openshift-console", "openshift-monitoring"] <2> +# ... +---- +<1> By default, the `enable` value is `false`. +<2> Default values are `["openshift-console", "openshift-monitoring"]`. \ No newline at end of file diff --git a/modules/network-observability-enriched-flows.adoc b/modules/network-observability-enriched-flows.adoc index 2bf90b76ca..8d756ae0fd 100644 --- a/modules/network-observability-enriched-flows.adoc +++ b/modules/network-observability-enriched-flows.adoc @@ -6,10 +6,11 @@ [id="network-observability-enriched-flows_{context}"] = Export enriched network flow data -You can send network flows to Kafka, IPFIX, or both at the same time. Any processor or storage that supports Kafka or IPFIX input, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. +You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry, Jaeger, or Prometheus. .Prerequisites -* Your Kafka or IPFIX collector endpoint(s) are available from Network Observability `flowlogs-pipeline` pods. +* Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability `flowlogs-pipeline` pods. 
+ .Procedure @@ -26,22 +27,41 @@ metadata: name: cluster spec: exporters: - - type: Kafka <3> + - type: Kafka <1> kafka: address: "kafka-cluster-kafka-bootstrap.netobserv" - topic: netobserv-flows-export <1> + topic: netobserv-flows-export <2> tls: - enable: false <2> - - type: IPFIX <3> + enable: false <3> + - type: IPFIX <1> ipfix: targetHost: "ipfix-collector.ipfix.svc.cluster.local" targetPort: 4739 transport: tcp or udp <4> - - + - type: OpenTelemetry <1> + openTelemetry: + targetHost: my-otelcol-collector-headless.otlp.svc + targetPort: 4317 + type: grpc <5> + logs: <6> + enable: true + metrics: <7> + enable: true + prefix: netobserv + pushTimeInterval: 20s <8> + expiryTime: 2m + # fieldsMapping: <9> + # input: SrcAddr + # output: source.address ---- -<1> The Network Observability Operator exports all flows to the configured Kafka topic. -<2> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: netobserv). It must be referenced with `spec.exporters.tls.caCert`. When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`. -<3> You can export flows to IPFIX instead of or in conjunction with exporting flows to Kafka. +<1> You can export flows to IPFIX, OpenTelemetry, and Kafka individually or concurrently. +<2> The Network Observability Operator exports all flows to the configured Kafka topic. +<3> You can encrypt all communications to and from Kafka with SSL/TLS or mTLS. When enabled, the Kafka CA certificate must be available as a ConfigMap or a Secret, both in the namespace where the `flowlogs-pipeline` processor component is deployed (default: netobserv). It must be referenced with `spec.exporters.tls.caCert`. 
When using mTLS, client secrets must be available in these namespaces as well (they can be generated for instance using the AMQ Streams User Operator) and referenced with `spec.exporters.tls.userCert`. <4> You have the option to specify transport. The default value is `tcp` but you can also specify `udp`. -. After configuration, network flows data can be sent to an available output in a JSON format. For more information, see _Network flows format reference_. +<5> The protocol of the OpenTelemetry connection. The available options are `http` and `grpc`. +<6> OpenTelemetry configuration for exporting logs, which are the same as the logs created for Loki. +<7> OpenTelemetry configuration for exporting metrics, which are the same as the metrics created for Prometheus. These configurations are specified in the `spec.processor.metrics.includeList` parameter of the `FlowCollector` custom resource, along with any custom metrics you defined using the `FlowMetrics` custom resource. +<8> The time interval at which metrics are sent to the OpenTelemetry collector. +<9> *Optional*: Network Observability network flow fields are automatically renamed to an OpenTelemetry-compliant format. The `fieldsMapping` specification lets you customize the OpenTelemetry output format. For example, in the YAML sample, `SrcAddr` is the Network Observability input field, and it is renamed to `source.address` in the OpenTelemetry output. You can see both the Network Observability and OpenTelemetry formats in the "Network flows format reference". + +After configuration, network flows data can be sent to an available output in a JSON format. For more information, see "Network flows format reference".
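The commented-out `fieldsMapping` lines in the sample above can be made active. The following is a minimal sketch, not a tested configuration: it assumes the same hypothetical collector endpoint as the sample, and uses the `protocol` property from the API specification to rename two flow fields to OpenTelemetry names:

```yaml
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  exporters:
    - type: OpenTelemetry
      openTelemetry:
        targetHost: my-otelcol-collector-headless.otlp.svc  # assumed endpoint, as in the sample above
        targetPort: 4317
        protocol: grpc
        logs:
          enable: true
        fieldsMapping:                # origin flow field -> OpenTelemetry attribute name
          - input: SrcAddr
            output: source.address
          - input: DstAddr
            output: destination.address
```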
diff --git a/modules/network-observability-flowcollector-api-specifications.adoc b/modules/network-observability-flowcollector-api-specifications.adoc index c073318a00..ba8b0b3ee4 100644 --- a/modules/network-observability-flowcollector-api-specifications.adoc +++ b/modules/network-observability-flowcollector-api-specifications.adoc @@ -1,3 +1,4 @@ +// Automatically generated by 'openshift-apidocs-gen'. Do not edit. :_mod-docs-content-type: REFERENCE [id="network-observability-flowcollector-api-specifications_{context}"] = FlowCollector API specifications @@ -115,6 +116,10 @@ Kafka can provide better scalability, resiliency, and high availability (for mor | `string` | Namespace where Network Observability pods are deployed. +| `networkPolicy` +| `object` +| `networkPolicy` defines ingress network policy settings for Network Observability components isolation. + | `processor` | `object` | `processor` defines the settings of the component that receives the flows from the agent, @@ -197,16 +202,21 @@ Otherwise it is matched as a case-sensitive string. | `features` | `array (string)` -| List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are: + +| List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are: + - `PacketDrop`: enable the packets drop flows logging feature. This feature requires mounting -the kernel debug filesystem, so the eBPF pod has to run as privileged. +the kernel debug filesystem, so the eBPF agent pods have to run as privileged. If the `spec.agent.ebpf.privileged` parameter is not set, an error is reported. + - `DNSTracking`: enable the DNS tracking feature. + - `FlowRTT`: enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic.
+ +- `NetworkEvents`: enable the network events monitoring feature, such as correlating flows and network policies. +This feature requires mounting the kernel debug filesystem, so the eBPF agent pods have to run as privileged. +It requires using the OVN-Kubernetes network plugin with the Observability feature. +IMPORTANT: This feature is available as a Developer Preview. + + | `flowFilter` | `object` @@ -377,8 +387,9 @@ Examples: `10.10.10.0/24` or `100:100:100:100::/64` | `destPorts` | `integer-or-string` | `destPorts` defines the destination ports to filter flows by. -To filter a single port, set a single port as an integer value. For example: `destPorts: 80`. -To filter a range of ports, use a "start-end" range in string format. For example: `destPorts: "80-100"`. +To filter a single port, set a single port as an integer value. For example, `destPorts: 80`. +To filter a range of ports, use a "start-end" range in string format. For example, `destPorts: "80-100"`. +To filter two ports, use a "port1,port2" list in string format. For example, `destPorts: "80,100"`. | `direction` | `string` @@ -401,11 +412,16 @@ To filter a range of ports, use a "start-end" range in string format. For exampl | `peerIP` defines the IP address to filter flows by. Example: `10.10.10.10`. +| `pktDrops` +| `boolean` +| `pktDrops` filters flows with packet drops. + | `ports` | `integer-or-string` | `ports` defines the ports to filter flows by. It is used both for source and destination ports. -To filter a single port, set a single port as an integer value. For example: `ports: 80`. -To filter a range of ports, use a "start-end" range in string format. For example: `ports: "80-100"`. +To filter a single port, set a single port as an integer value. For example, `ports: 80`. +To filter a range of ports, use a "start-end" range in string format. For example, `ports: "80-100"`. +To filter two ports, use a "port1,port2" list in string format. For example, `ports: "80,100"`.
| `protocol` | `string` @@ -414,8 +430,13 @@ To filter a range of ports, use a "start-end" range in string format. For exampl | `sourcePorts` | `integer-or-string` | `sourcePorts` defines the source ports to filter flows by. -To filter a single port, set a single port as an integer value. For example: `sourcePorts: 80`. -To filter a range of ports, use a "start-end" range in string format. For example: `sourcePorts: "80-100"`. +To filter a single port, set a single port as an integer value. For example, `sourcePorts: 80`. +To filter a range of ports, use a "start-end" range in string format. For example, `sourcePorts: "80-100"`. +To filter two ports, use a "port1,port2" list in string format. For example, `sourcePorts: "80,100"`. + +| `tcpFlags` +| `string` +| `tcpFlags` defines the TCP flags to filter flows by. |=== == .spec.agent.ebpf.metrics Description:: + @@ -440,7 +461,7 @@ | `disableAlerts` | `disableAlerts` is a list of alerts that should be disabled. Possible values are: + -`NetObservDroppedFlows`, which is triggered when the eBPF agent is dropping flows, such as when the BPF hashmap is full or the capacity limiter is being triggered. + +`NetObservDroppedFlows` is triggered when the eBPF agent is missing packets or flows, such as when the eBPF hashmap is busy or full, or the capacity limiter is triggered. + | `enable` @@ -488,6 +509,8 @@ TLS configuration. Type:: `object` +Required:: + - `type` @@ -949,6 +972,10 @@ Required:: | `object` | Kafka configuration, such as the address and topic, to send enriched flows to. +| `openTelemetry` +| `object` +| OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to. + | `type` | `string` | `type` selects the type of exporters. The available options are `Kafka` and `IPFIX`. @@ -1211,6 +1238,267 @@ Type:: +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `certFile` +| `string` +| `certFile` defines the path to the certificate file name within the config map or secret.
+ +| `certKey` +| `string` +| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. + +| `name` +| `string` +| Name of the config map or secret containing certificates. + +| `namespace` +| `string` +| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. +If the namespace is different, the config map or the secret is copied so that it can be mounted as required. + +| `type` +| `string` +| Type for the certificate reference: `configmap` or `secret`. + +|=== +== .spec.exporters[].openTelemetry +Description:: ++ +-- +OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to. +-- + +Type:: + `object` + +Required:: + - `targetHost` + - `targetPort` + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `fieldsMapping` +| `array` +| Custom fields mapping to an OpenTelemetry conformant format. +By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal . +As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own. + +| `headers` +| `object (string)` +| Headers to add to messages (optional) + +| `logs` +| `object` +| OpenTelemetry configuration for logs. + +| `metrics` +| `object` +| OpenTelemetry configuration for metrics. + +| `protocol` +| `string` +| Protocol of the OpenTelemetry connection. The available options are `http` and `grpc`. + +| `targetHost` +| `string` +| Address of the OpenTelemetry receiver. + +| `targetPort` +| `integer` +| Port for the OpenTelemetry receiver. + +| `tls` +| `object` +| TLS client configuration. 
+ +|=== +== .spec.exporters[].openTelemetry.fieldsMapping +Description:: ++ +-- +Custom fields mapping to an OpenTelemetry conformant format. +By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal . +As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own. +-- + +Type:: + `array` + + + + +== .spec.exporters[].openTelemetry.fieldsMapping[] +Description:: ++ +-- + +-- + +Type:: + `object` + + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `input` +| `string` +| + +| `multiplier` +| `integer` +| + +| `output` +| `string` +| + +|=== +== .spec.exporters[].openTelemetry.logs +Description:: ++ +-- +OpenTelemetry configuration for logs. +-- + +Type:: + `object` + + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `enable` +| `boolean` +| Set `enable` to `true` to send logs to an OpenTelemetry receiver. + +|=== +== .spec.exporters[].openTelemetry.metrics +Description:: ++ +-- +OpenTelemetry configuration for metrics. +-- + +Type:: + `object` + + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `enable` +| `boolean` +| Set `enable` to `true` to send metrics to an OpenTelemetry receiver. + +| `pushTimeInterval` +| `string` +| Specify how often metrics are sent to a collector. + +|=== +== .spec.exporters[].openTelemetry.tls +Description:: ++ +-- +TLS client configuration. +-- + +Type:: + `object` + + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `caCert` +| `object` +| `caCert` defines the reference of the certificate for the Certificate Authority. + +| `enable` +| `boolean` +| Enable TLS + +| `insecureSkipVerify` +| `boolean` +| `insecureSkipVerify` allows skipping client-side verification of the server certificate. +If set to `true`, the `caCert` field is ignored. 
+ +| `userCert` +| `object` +| `userCert` defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. + +|=== +== .spec.exporters[].openTelemetry.tls.caCert +Description:: ++ +-- +`caCert` defines the reference of the certificate for the Certificate Authority. +-- + +Type:: + `object` + + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `certFile` +| `string` +| `certFile` defines the path to the certificate file name within the config map or secret. + +| `certKey` +| `string` +| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. + +| `name` +| `string` +| Name of the config map or secret containing certificates. + +| `namespace` +| `string` +| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. +If the namespace is different, the config map or the secret is copied so that it can be mounted as required. + +| `type` +| `string` +| Type for the certificate reference: `configmap` or `secret`. + +|=== +== .spec.exporters[].openTelemetry.tls.userCert +Description:: ++ +-- +`userCert` defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. +-- + +Type:: + `object` + + + + [cols="1,1,1",options="header"] |=== | Property | Type | Description @@ -1497,6 +1785,8 @@ Description:: Type:: `object` +Required:: + - `mode` @@ -1618,6 +1908,8 @@ It is ignored for other modes. Type:: `object` +Required:: + - `name` @@ -2218,6 +2510,36 @@ If the namespace is different, the config map or the secret is copied so that it | `string` | Type for the certificate reference: `configmap` or `secret`. +|=== +== .spec.networkPolicy +Description:: ++ +-- +`networkPolicy` defines ingress network policy settings for Network Observability components isolation. 
+-- + +Type:: + `object` + + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `additionalNamespaces` +| `array (string)` +| `additionalNamespaces` contains additional namespaces allowed to connect to the Network Observability namespace. +It provides flexibility in the network policy configuration, but if you need a more specific +configuration, you can disable it and install your own instead. + +| `enable` +| `boolean` +| Set `enable` to `true` to deploy network policies on the namespaces used by Network Observability (main and privileged). It is disabled by default. +These network policies better isolate the Network Observability components to prevent undesired connections to them. +Either enable it, or create your own network policy for Network Observability. + |=== == .spec.processor Description:: @@ -2375,6 +2697,12 @@ By convention, some values are forbidden. It must be greater than 1024 and diffe | `object` | scheduling controls how the pods are scheduled on nodes. +| `secondaryNetworks` +| `array` +| Define secondary networks to be checked for resource identification. +To guarantee a correct identification, indexed values must form a unique identifier across the cluster. +If the same index is used by several resources, those resources might be incorrectly labeled. + |=== == .spec.processor.advanced.scheduling Description:: @@ -2440,6 +2768,52 @@ Type:: +== .spec.processor.advanced.secondaryNetworks +Description:: ++ +-- +Define secondary networks to be checked for resource identification. +To guarantee a correct identification, the indexed values must form a unique identifier across the cluster. +If the same index is used by several resources, those resources might be wrongly labeled.
+-- + +Type:: + `array` + + + + +== .spec.processor.advanced.secondaryNetworks[] +Description:: ++ +-- + +-- + +Type:: + `object` + +Required:: + - `index` + - `name` + + + +[cols="1,1,1",options="header"] +|=== +| Property | Type | Description + +| `index` +| `array (string)` +| `index` is a list of fields to use for indexing the pods. They should form a unique Pod identifier across the cluster. +Can be any of: `MAC`, `IP`, `Interface`. +Fields absent from the 'k8s.v1.cni.cncf.io/network-status' annotation must not be added to the index. + +| `name` +| `string` +| `name` should match the network name as visible in the pods annotation 'k8s.v1.cni.cncf.io/network-status'. + +|=== == .spec.processor.kafkaConsumerAutoscaler Description:: + @@ -2488,7 +2862,8 @@ The names correspond to the names in Prometheus without the prefix. For example, `namespace_egress_packets_total` shows up as `netobserv_namespace_egress_packets_total` in Prometheus. Note that the more metrics you add, the bigger is the impact on Prometheus workload resources. Metrics enabled by default are: -`namespace_flows_total`, `node_ingress_bytes_total`, `workload_ingress_bytes_total`, `namespace_drop_packets_total` (when `PacketDrop` feature is enabled), +`namespace_flows_total`, `node_ingress_bytes_total`, `node_egress_bytes_total`, `workload_ingress_bytes_total`, +`workload_egress_bytes_total`, `namespace_drop_packets_total` (when `PacketDrop` feature is enabled), `namespace_rtt_seconds` (when `FlowRTT` feature is enabled), `namespace_dns_latency_seconds` (when `DNSTracking` feature is enabled). More information, with full list of available metrics: https://github.com/netobserv/network-observability-operator/blob/main/docs/Metrics.md @@ -2533,6 +2908,8 @@ TLS configuration. 
Type:: `object` +Required:: + - `type` @@ -2721,6 +3098,9 @@ SubnetLabel allows to label subnets and IPs, such as to identify cluster-externa Type:: `object` +Required:: + - `cidrs` + - `name` @@ -2769,6 +3149,8 @@ Prometheus querying configuration, such as client settings, used in the Console Type:: `object` +Required:: + - `mode` diff --git a/modules/network-observability-flowmetric-api-specifications.adoc b/modules/network-observability-flowmetric-api-specifications.adoc index 3bcc0f41db..09e4b3b446 100644 --- a/modules/network-observability-flowmetric-api-specifications.adoc +++ b/modules/network-observability-flowmetric-api-specifications.adoc @@ -120,6 +120,10 @@ Refer to the documentation for the list of available fields: https://docs.opensh | `string` | Name of the metric. In Prometheus, it is automatically prefixed with "netobserv_". +| `remap` +| `object (string)` +| Set the `remap` property to use different names for the generated metric labels than the flow fields. Use the origin flow fields as keys, and the desired label names as values. + | `type` | `string` | Metric type: "Counter" or "Histogram". diff --git a/modules/network-observability-flows-format.adoc b/modules/network-observability-flows-format.adoc index 2cd2666fcc..ce2635f024 100644 --- a/modules/network-observability-flows-format.adoc +++ b/modules/network-observability-flows-format.adoc @@ -9,140 +9,162 @@ The "Filter ID" column shows which related name to use when defining Quick Filte The "Loki label" column is useful when querying Loki directly: label fields need to be selected using link:https://grafana.com/docs/loki/latest/logql/log_queries/#log-stream-selector[stream selectors]. -The "Cardinality" column gives information about the implied metric cardinality if this field was to be used as a Prometheus label with the `FlowMetric` API. For more information, see the "FlowMetric API reference". 
+The "Cardinality" column contains information about the implied metric cardinality if this field is used as a Prometheus label with the `FlowMetrics` API. For more information about using this API, see the `FlowMetrics` documentation. -[cols="1,1,3,1,1,1",options="header"] + +[cols="1,1,3,1,1,1,1",options="header"] |=== -| Name | Type | Description | Filter ID | Loki label | Cardinality +| Name | Type | Description | Filter ID | Loki label | Cardinality | OpenTelemetry | `Bytes` | number | Number of bytes | n/a | no | avoid +| bytes | `DnsErrno` | number | Error number returned from DNS tracker ebpf hook function | `dns_errno` | no | fine +| dns.errno | `DnsFlags` | number | DNS flags for DNS record | n/a | no | fine +| dns.flags | `DnsFlagsResponseCode` | string | Parsed DNS header RCODEs name | `dns_flag_response_code` | no | fine +| dns.responsecode | `DnsId` | number | DNS record id | `dns_id` | no | avoid +| dns.id | `DnsLatencyMs` | number | Time between a DNS request and response, in milliseconds | `dns_latency` | no | avoid +| dns.latency | `Dscp` | number | Differentiated Services Code Point (DSCP) value | `dscp` | no | fine +| dscp | `DstAddr` | string | Destination IP address (ipv4 or ipv6) | `dst_address` | no | avoid +| destination.address | `DstK8S_HostIP` | string | Destination node IP | `dst_host_address` | no | fine +| destination.k8s.host.address | `DstK8S_HostName` | string | Destination node name | `dst_host_name` | no | fine +| destination.k8s.host.name | `DstK8S_Name` | string | Name of the destination Kubernetes object, such as Pod name, Service name or Node name. | `dst_name` | no | careful +| destination.k8s.name | `DstK8S_Namespace` | string | Destination namespace | `dst_namespace` | yes | fine +| destination.k8s.namespace.name | `DstK8S_OwnerName` | string | Name of the destination owner, such as Deployment name, StatefulSet name, etc.
| `dst_owner_name` | yes | fine +| destination.k8s.owner.name | `DstK8S_OwnerType` | string | Kind of the destination owner, such as Deployment, StatefulSet, etc. | `dst_kind` | no | fine +| destination.k8s.owner.kind | `DstK8S_Type` | string | Kind of the destination Kubernetes object, such as Pod, Service or Node. | `dst_kind` | yes | fine +| destination.k8s.kind | `DstK8S_Zone` | string | Destination availability zone | `dst_zone` | yes | fine +| destination.zone | `DstMac` | string | Destination MAC address | `dst_mac` | no | avoid +| destination.mac | `DstPort` | number | Destination port | `dst_port` | no | careful +| destination.port | `DstSubnetLabel` | string | Destination subnet label | `dst_subnet_label` | no | fine +| n/a | `Duplicate` | boolean | Indicates if this flow was also captured from another interface on the same host | n/a -| yes +| no | fine +| n/a | `Flags` | number | Logical OR combination of unique TCP flags comprised in the flow, as per RFC-9293, with additional custom flags to represent the following per-packet combinations: + - SYN+ACK (0x100) + - FIN+ACK (0x200) + - RST+ACK (0x400) -| n/a +| `tcp_flags` | no | fine +| tcp.flags | `FlowDirection` | number | Flow interpreted direction from the node observation point. Can be one of: + @@ -152,18 +174,21 @@ The "Cardinality" column gives information about the implied metric cardinality | `node_direction` | yes | fine +| host.direction | `IcmpCode` | number | ICMP code | `icmp_code` | no | fine +| icmp.code | `IcmpType` | number | ICMP type | `icmp_type` | no | fine +| icmp.type | `IfDirections` | number | Flow directions from the network interface observation point. 
Can be one of: + @@ -172,172 +197,208 @@ The "Cardinality" column gives information about the implied metric cardinality | `ifdirections` | no | fine +| interface.directions | `Interfaces` | string | Network interfaces | `interfaces` | no | careful +| interface.names | `K8S_ClusterName` | string | Cluster name or identifier | `cluster_name` | yes | fine +| k8s.cluster.name | `K8S_FlowLayer` | string | Flow layer: 'app' or 'infra' | `flow_layer` -| no +| yes | fine +| k8s.layer +| `NetworkEvents` +| string +| Network events flow monitoring +| `network_events` +| no +| avoid +| n/a | `Packets` | number | Number of packets | n/a | no | avoid +| packets | `PktDropBytes` | number | Number of bytes dropped by the kernel | n/a | no | avoid +| drops.bytes | `PktDropLatestDropCause` | string | Latest drop cause | `pkt_drop_cause` | no | fine +| drops.latestcause | `PktDropLatestFlags` | number | TCP flags on last dropped packet | n/a | no | fine +| drops.latestflags | `PktDropLatestState` | string | TCP state on last dropped packet | `pkt_drop_state` | no | fine +| drops.lateststate | `PktDropPackets` | number | Number of packets dropped by the kernel | n/a | no | avoid +| drops.packets | `Proto` | number | L4 protocol | `protocol` | no | fine +| protocol | `SrcAddr` | string | Source IP address (ipv4 or ipv6) | `src_address` | no | avoid +| source.address | `SrcK8S_HostIP` | string | Source node IP | `src_host_address` | no | fine +| source.k8s.host.address | `SrcK8S_HostName` | string | Source node name | `src_host_name` | no | fine +| source.k8s.host.name | `SrcK8S_Name` | string | Name of the source Kubernetes object, such as Pod name, Service name or Node name. | `src_name` | no | careful +| source.k8s.name | `SrcK8S_Namespace` | string | Source namespace | `src_namespace` | yes | fine +| source.k8s.namespace.name | `SrcK8S_OwnerName` | string | Name of the source owner, such as Deployment name, StatefulSet name, etc. 
| `src_owner_name` | yes | fine +| source.k8s.owner.name | `SrcK8S_OwnerType` | string | Kind of the source owner, such as Deployment, StatefulSet, etc. | `src_kind` | no | fine +| source.k8s.owner.kind | `SrcK8S_Type` | string | Kind of the source Kubernetes object, such as Pod, Service or Node. | `src_kind` | yes | fine +| source.k8s.kind | `SrcK8S_Zone` | string | Source availability zone | `src_zone` | yes | fine +| source.zone | `SrcMac` | string | Source MAC address | `src_mac` | no | avoid +| source.mac | `SrcPort` | number | Source port | `src_port` | no | careful +| source.port | `SrcSubnetLabel` | string | Source subnet label | `src_subnet_label` | no | fine +| n/a | `TimeFlowEndMs` | number | End timestamp of this flow, in milliseconds | n/a | no | avoid +| timeflowend | `TimeFlowRttNs` | number | TCP Smoothed Round Trip Time (SRTT), in nanoseconds | `time_flow_rtt` | no | avoid +| tcp.rtt | `TimeFlowStartMs` | number | Start timestamp of this flow, in milliseconds | n/a | no | avoid +| timeflowstart | `TimeReceived` | number | Timestamp when this flow was received and processed by the flow collector, in seconds | n/a | no | avoid +| timereceived | `_HashId` | string | In conversation tracking, the conversation identifier | `id` | no | avoid +| n/a | `_RecordType` | string | Type of record: 'flowLog' for regular flow logs, or 'newConnection', 'heartbeat', 'endConnection' for conversation tracking | `type` | yes | fine +| n/a |=== \ No newline at end of file diff --git a/modules/network-observability-multitenancy.adoc b/modules/network-observability-multitenancy.adoc index ef462e6952..298c2e43c6 100644 --- a/modules/network-observability-multitenancy.adoc +++ b/modules/network-observability-multitenancy.adoc @@ -3,22 +3,43 @@ // network_observability/installing-operators.adoc :_mod-docs-content-type: PROCEDURE -[id="network-observability-multi-tenancy{context}"] +[id="network-observability-multi-tenancy_{context}"] = Enabling multi-tenancy in Network 
Observability -Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki. Access is enabled for project admins. Project admins who have limited access to some namespaces can access flows for only those namespaces. +Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki and or or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces. + +For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights. .Prerequisite -* You have installed at least link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7] -* You must be logged in as a project administrator +* If you are using Loki, you have installed at least link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7]. +* You must be logged in as a project administrator. .Procedure -. Authorize reading permission to `user1` by running the following command: +* For per-tenant access, you must have the `netobserv-reader` cluster role and the `netobserv-metrics-reader` namespace role to use the developer perspective. Run the following commands for this level of access: + [source,terminal] ---- -$ oc adm policy add-cluster-role-to-user netobserv-reader user1 +$ oc adm policy add-cluster-role-to-user netobserv-reader ---- + -Now, the data is restricted to only allowed user namespaces. For example, a user that has access to a single namespace can see all the flows internal to this namespace, as well as flows going from and to this namespace. 
-Project admins have access to the Administrator perspective in the {product-title} console to access the Network Flows Traffic page.
+[source,terminal]
+----
+$ oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>
+----
+
+* For cluster-wide access, non-cluster-administrators must have the `netobserv-reader`, `cluster-monitoring-view`, and `netobserv-metrics-reader` cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access:
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>
+----
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>
+----
++
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>
+----
\ No newline at end of file
diff --git a/modules/network-observability-netobserv-cli-reference.adoc b/modules/network-observability-netobserv-cli-reference.adoc
index 1b2a71b677..476951c370 100644
--- a/modules/network-observability-netobserv-cli-reference.adoc
+++ b/modules/network-observability-netobserv-cli-reference.adoc
@@ -1,10 +1,13 @@
-// Module included in the following assemblies:
-// * observability/network_observability/netobserv-cli-reference.adoc
-
+// Automatically generated by './scripts/generate-doc.sh'. Do not edit manually; instead, make the NETOBSERV team aware of any required changes.
:_mod-docs-content-type: REFERENCE
[id="network-observability-netobserv-cli-reference_{context}"]
-= oc netobserv CLI reference
-The Network Observability CLI (`oc netobserv`) is a CLI tool for capturing flow data and packet data for further analysis.
+= Network Observability CLI usage
+
+You can use the Network Observability CLI (`oc netobserv`) to pass command-line arguments to capture flow data and packet data for further analysis, enable Network Observability Operator features, or pass configuration options to the eBPF agent and `flowlogs-pipeline`.
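As a quick illustration of that workflow, a minimal capture session might look like the following sketch. All flags are taken from the tables in this reference; the port value is illustrative, and the commands assume the `oc` client with the Network Observability CLI plugin installed and an active login to a cluster.

```shell
# Capture enriched flows for at most one minute, keeping only TCP
# traffic on port 443 from any source CIDR, then remove the CLI
# components from the cluster when finished.
oc netobserv flows --enable_filter=true --action=Accept \
  --cidr=0.0.0.0/0 --protocol=TCP --port=443 --max-time=1m
oc netobserv cleanup
```

The `cleanup` step matters: the capture deploys agent pods into the cluster, and they keep running until removed.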
+
+[id="cli-syntax_{context}"]
+== Syntax
+The basic syntax for `oc netobserv` commands is as follows:

.`oc netobserv` syntax
[source,terminal]
@@ -13,168 +16,73 @@ $ oc netobserv [<command>] [<feature_option>] [<command_options>] <1>
----
<1> Feature options can only be used with the `oc netobserv flows` command. They cannot be used with the `oc netobserv packets` command.
+[id="cli-basic-commands_{context}"]
+== Basic commands
[cols="3a,8a",options="header"]
.Basic commands
|===
-|Command| Description
-
-| `flows`
-| Capture flows information. For subcommands, see the "Flow capture subcommands" table.
-
-| `packets`
-| Capture packets from a specific protocol or port pair, such as `netobserv packets --filter=tcp,80`. For more information about packet capture, see the "Packet capture subcommand" table.
-
-| `cleanup`
+| Command | Description
+| `flows`
+| Capture flows information. For subcommands, see the "Flows capture options" table.
+| `packets`
+| Capture packet data. For subcommands, see the "Packets capture options" table.
+| `cleanup`
| Remove the Network Observability CLI components.
-
-| `version`
+| `version`
| Print the software version.
-
-| `help`
+| `help`
| Show help.
|===

-[id="network-observability-cli-enrichment_{context}"]
-== Network Observability enrichment
-The Network Observability enrichment to display zone, node, owner and resource names including optional features about packet drops, DNS latencies and Round-trip time can only be enabled when capturing flows. These do not appear in packet capture pcap output file.
-
-.Network Observability enrichment syntax
-[source,terminal]
-----
-$ oc netobserv flows [<feature_option>] [<command_options>]
-----
-
-.Network Observability enrichment options
-|===
-|Option| Description| Possible values| Default
-
-| `--enable_pktdrop`
-| Enable packet drop.
-| `true`, `false`
-| `false`
-
-| `--enable_rtt`
-| Enable round trip time.
-| `true`, `false`
-| `false`
-
-| `--enable_dns`
-| Enable DNS tracking.
-| `true`, `false`
-| `false`
-
-| `--help`
-| Show help.
-| -
-| -
-
-| `--interfaces`
-| Interfaces to match on the flow. For example, `"eth0,eth1"`.
-| `""`
-| -
-|===
-
-[id="cli-reference-flow-capture-options_{context}"]
-== Flow capture options
-Flow capture has mandatory commands as well as additional options, such as enabling extra features about packet drops, DNS latencies, Round-trip time, and filtering.
+[id="cli-reference-flows-capture-options_{context}"]
+== Flows capture options
+Flows capture has mandatory commands as well as additional options, such as enabling extra features for packet drops, DNS latencies, Round-trip time, and filtering.

.`oc netobserv flows` syntax
[source,terminal]
----
$ oc netobserv flows [<feature_option>] [<command_options>]
----
-
-.Flow capture filter options
+[cols="1,1,1",options="header"]
|===
-|Option| Description| Possible values| Mandatory| Default
-
-| `--enable_filter`
-| Enable flow filter.
-| `true`, `false`
-| Yes
-| `false`
-
-| `--action`
-| Action to apply on the flow.
-| `Accept`, `Reject`
-| Yes
-| `Accept`
-
-| `--cidr`
-| CIDR to match on the flow.
-| `1.1.1.0/24`, `1::100/64`, or `0.0.0.0/0`
-| Yes
-| `0.0.0.0/0`
-
-| `--protocol`
-| Protocol to match on the flow
-| `TCP`, `UDP`, `SCTP`, `ICMP`, or `ICMPv6`
-| No
-| -
-
-| `--direction`
-| Direction to match on the flow
-| `Ingress`, `Egress`
-| No
-| -
-
-| `--dport`
-| Destination port to match on the flow.
-| `80`, `443`, or `49051`
-| no
-| -
-
-| `--sport`
-| Source port to match on the flow.
-| `80`, `443`, or `49051`
-| No
-| -
-
-| `--port`
-| Port to match on the flow.
-| `80`, `443`, or `49051`
-| No
-| -
-
-| `--sport_range`
-| Source port range to match on the flow.
-| `80-100` or `443-445`
-| No
-| -
-
-| `--dport_range`
-| Destination port range to match on the flow.
-| `80-100`
-| No
-| -
-
-| `--port_range`
-| Port range to match on the flow.
-| `80-100` or `443-445`
-| No
-| -
-
-| `--icmp_type`
-| ICMP type to match on the flow.
-| `8` or `13`
-| No
-| -
-
-| `--icmp_code`
-| ICMP code to match on the flow.
-| `0` or `1`
-| No
-| -
-
-| `--peer_ip`
-| Peer IP to match on the flow.
-| `1.1.1.1` or `1::1`
-| No
-| -
+| Option | Description | Default
+| `--enable_pktdrop` | Enable packet drop | `false`
+| `--enable_dns` | Enable DNS tracking | `false`
+| `--enable_rtt` | Enable RTT tracking | `false`
+| `--enable_network_events` | Enable network events monitoring | `false`
+| `--enable_filter` | Enable flow filter | `false`
+| `--log-level` | Components log level | `info`
+| `--max-time` | Maximum capture time | `5m`
+| `--max-bytes` | Maximum capture bytes | `50000000` (50 MB)
+| `--copy` | Copy the output files locally | `prompt`
+| `--direction` | Filter direction | n/a
+| `--cidr` | Filter CIDR | `0.0.0.0/0`
+| `--protocol` | Filter protocol | n/a
+| `--sport` | Filter source port | n/a
+| `--dport` | Filter destination port | n/a
+| `--port` | Filter port | n/a
+| `--sport_range` | Filter source port range | n/a
+| `--dport_range` | Filter destination port range | n/a
+| `--port_range` | Filter port range | n/a
+| `--sports` | Filter on either of two source ports | n/a
+| `--dports` | Filter on either of two destination ports | n/a
+| `--ports` | Filter on either of two ports | n/a
+| `--tcp_flags` | Filter TCP flags | n/a
+| `--action` | Filter action | `Accept`
+| `--icmp_type` | Filter ICMP type | n/a
+| `--icmp_code` | Filter ICMP code | n/a
+| `--peer_ip` | Filter peer IP | n/a
+| `--interfaces` | Interfaces to monitor | n/a
|===

+.Example: run a flows capture on the TCP protocol and port 49051, with the PacketDrop and RTT features enabled
[source,terminal]
----
$ oc netobserv flows --enable_pktdrop=true --enable_rtt=true --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051
----
+
[id="cli-reference-packet-capture-options_{context}"]
-== Packet capture options
+== Packets capture options
You can filter on port and protocol for packet capture data.

.`oc netobserv packets` syntax
@@ -182,12 +90,34 @@ You can filter on port and protocol for packet capture data.
----
$ oc netobserv packets [