Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00
OSDOCS-17450 [NETOBSERV] Module short descriptions to configuring-operator.adoc
Committed by: openshift-cherrypick-robot
Parent: c0b5fad5d3
Commit: ba9bd9d4aa
@@ -4,10 +4,10 @@
:_mod-docs-content-type: PROCEDURE
[id="network-observability-config-FLP-sampling_{context}"]
-= Updating the FlowCollector resource
+= Updating the Flow Collector resource

-As an alternative to editing YAML in the {product-title} web console, you can configure specifications, such as eBPF sampling, by patching the `flowcollector` custom resource (CR):
+[role="_abstract"]
+As an alternative to using the web console, use the `oc patch` command with the `flowcollector` custom resource to quickly update specific specifications, such as eBPF sampling.

.Procedure
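A minimal sketch of the kind of patch this procedure describes; the JSON path follows the `spec.agent.ebpf.sampling` field shown in the sample `FlowCollector` resource elsewhere in this commit, and the value `1` is illustrative:

[source,terminal]
----
$ oc patch flowcollector cluster --type=json \
  -p '[{"op": "replace", "path": "/spec/agent/ebpf/sampling", "value": 1}]'
----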
@@ -2,11 +2,14 @@
// * networking/network_observability/configuring-operators.adoc

-:_mod-docs-content-type: PROCEDURE
+:_mod-docs-content-type: REFERENCE
[id="network-observability-config-quick-filters_{context}"]
= Configuring quick filters

-You can modify the filters in the `FlowCollector` resource. Exact matches are possible using double-quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample `FlowCollector` resource for more context about modifying the YAML.
+[role="_abstract"]
+Use the list of available source, destination, and universal filter keys to modify quick filters within the `FlowCollector` resource.
+
+Exact matches are possible using double-quotes around values. Otherwise, partial matches are used for textual values. The bang (!) character, placed at the end of a key, means negation. See the sample `FlowCollector` resource for more context about modifying the YAML.

[NOTE]
====
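To make the matching syntax concrete, here is a hypothetical `quickFilters` entry combining both behaviors; the filter name and values are invented for illustration:

[source, yaml]
----
quickFilters:
- name: Example
  filter:
    src_name: '"my-pod"'         # double quotes force an exact match
    dst_namespace!: 'openshift-' # trailing "!" negates the key; textual values match partially
  default: false
----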
@@ -6,7 +6,12 @@
[id="network-observability-enriched-flows_{context}"]
= Export enriched network flow data

-You can send network flows to Kafka, IPFIX, the Red{nbsp}Hat build of OpenTelemetry, or all three at the same time. For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data. For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as Red{nbsp}Hat build of OpenTelemetry or Prometheus.
+[role="_abstract"]
+Configure the `FlowCollector` resource to export enriched network flow data simultaneously to Kafka, IPFIX, or an OpenTelemetry endpoint for external consumption by tools like Splunk or Prometheus.
+
+For Kafka or IPFIX, any processor or storage that supports those inputs, such as Splunk, Elasticsearch, or Fluentd, can consume the enriched network flow data.
+
+For OpenTelemetry, network flow data and metrics can be exported to a compatible OpenTelemetry endpoint, such as {OTELName} or Prometheus.

.Prerequisites
* Your Kafka, IPFIX, or OpenTelemetry collector endpoints are available from Network Observability `flowlogs-pipeline` pods.
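A minimal sketch of what the corresponding `spec.exporters` stanza could look like; the addresses, topic, and port are assumptions for illustration, and the field names follow the FlowCollector v1beta2 API rather than anything shown in this hunk:

[source, yaml]
----
spec:
  exporters:
  - type: Kafka                  # send enriched flows to a Kafka topic
    kafka:
      address: "kafka-cluster-kafka-bootstrap.netobserv"
      topic: network-flows
  - type: OpenTelemetry          # send flows and metrics to an OTLP endpoint
    openTelemetry:
      targetHost: my-otel-collector.netobserv.svc
      targetPort: 4317
----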
@@ -6,7 +6,8 @@
[id="network-observability-filter-network-flows-at-ingestion_{context}"]
= Filter network flows at ingestion

-You can create filters to reduce the number of generated network flows. Filtering network flows can reduce the resource usage of the network observability components.
+[role="_abstract"]
+Create filters to reduce the number of generated network flows. Filtering network flows can reduce the resource usage of the network observability components.

You can configure two kinds of filters:
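As a sketch of one of those kinds, filtering at the eBPF agent, assuming the `spec.agent.ebpf.flowFilter` fields from the FlowCollector API; the CIDR and action values are illustrative, so check the FlowCollector API reference for the exact schema:

[source, yaml]
----
spec:
  agent:
    ebpf:
      flowFilter:
        enable: true
        action: Accept     # keep only flows matching the rule below
        cidr: 10.0.0.0/8   # match traffic in this address range
----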
modules/network-observability-flowcollector-example.adoc (new file, 80 lines)
@@ -0,0 +1,80 @@
// Module included in the following assemblies:

// * networking/network_observability/configuring-operators.adoc

:_mod-docs-content-type: REFERENCE
[id="network-observability-flowcollector-example_{context}"]
= Example of a FlowCollector resource

[role="_abstract"]
Review a comprehensive, annotated example of the `FlowCollector` custom resource that demonstrates configurations for `eBPF` sampling, conversation tracking, Loki integration, and console quick filters.

[id="network-observability-flowcollector-configuring-about-sample_{context}"]
== Sample `FlowCollector` resource
[source, yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  deploymentModel: Direct
  agent:
    type: eBPF <1>
    ebpf:
      sampling: 50 <2>
      logLevel: info
      privileged: false
      resources:
        requests:
          memory: 50Mi
          cpu: 100m
        limits:
          memory: 800Mi
  processor: <3>
    logLevel: info
    resources:
      requests:
        memory: 100Mi
        cpu: 100m
      limits:
        memory: 800Mi
    logTypes: Flows
    advanced:
      conversationEndTimeout: 10s
      conversationHeartbeatInterval: 30s
  loki: <4>
    mode: LokiStack <5>
  consolePlugin:
    register: true
    logLevel: info
    portNaming:
      enable: true
      portNames:
        "3100": loki
    quickFilters: <6>
    - name: Applications
      filter:
        src_namespace!: 'openshift-,netobserv'
        dst_namespace!: 'openshift-,netobserv'
      default: true
    - name: Infrastructure
      filter:
        src_namespace: 'openshift-,netobserv'
        dst_namespace: 'openshift-,netobserv'
    - name: Pods network
      filter:
        src_kind: 'Pod'
        dst_kind: 'Pod'
      default: true
    - name: Services network
      filter:
        dst_kind: 'Service'
----
<1> The Agent specification, `spec.agent.type`, must be `EBPF`. eBPF is the only supported option for {product-title}.
<2> You can set the Sampling specification, `spec.agent.ebpf.sampling`, to manage resources. By default, eBPF sampling is set to `50`, so a flow has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all flows are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
<3> The Processor specification, `spec.processor`, can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The `spec.processor.logTypes` values are `Flows`, `Conversations`, `EndedConversations`, or `All`; in this sample it is set to `Flows`. The `spec.processor.advanced` settings, such as `conversationEndTimeout`, tune conversation tracking. Storage requirements are highest for `All` and lowest for `EndedConversations`.
<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki installation paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your installation.
<5> The `LokiStack` mode automatically sets a few configurations: `querierUrl`, `ingesterUrl`, `statusUrl`, and `tenantID`, plus the corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki, and `authToken` is set to `Forward`. You can set these settings manually by using the `Manual` mode.
<6> The `spec.quickFilters` specification defines filters that appear in the web console. The `Application` filter keys, `src_namespace` and `dst_namespace`, are negated (`!`), so the `Application` filter shows all traffic that _does not_ originate from, or have a destination in, any `openshift-` or `netobserv` namespaces. For more information, see Configuring quick filters below.
@@ -4,9 +4,12 @@
:_mod-docs-content-type: PROCEDURE
[id="network-observability-flowcollector-kafka-config_{context}"]
-= Configuring the Flow Collector resource with Kafka
+= Configuring the FlowCollector resource with Kafka

-You can configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds. A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].
+[role="_abstract"]
+Configure the `FlowCollector` resource to use Kafka for high-throughput and low-latency data feeds.
+
+A Kafka instance needs to be running, and a Kafka topic dedicated to {product-title} Network Observability must be created in that instance. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.7/html/using_amq_streams_on_openshift/using-the-topic-operator-str[Kafka documentation with AMQ Streams].

.Prerequisites
* Kafka is installed. Red Hat supports Kafka with the AMQ Streams Operator.
@@ -19,7 +22,7 @@ You can configure the `FlowCollector` resource to use Kafka for high-throughput
. Select the cluster and then click the *YAML* tab.

. Modify the `FlowCollector` resource for {product-title} Network Observability Operator to use Kafka, as shown in the following sample YAML:
+
.Sample Kafka configuration in `FlowCollector` resource
[source, yaml]
----
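A minimal sketch of a Kafka-mode `FlowCollector` configuration; the bootstrap address and topic name are assumptions for illustration, not values taken from this commit:

[source, yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  deploymentModel: Kafka     # route flows through Kafka instead of Direct
  kafka:
    address: "kafka-cluster-kafka-bootstrap.netobserv"
    topic: network-flows
    tls:
      enable: false
----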
@@ -2,84 +2,14 @@
// * networking/network_observability/configuring-operators.adoc

-:_mod-docs-content-type: CONCEPT
+:_mod-docs-content-type: PROCEDURE
[id="network-observability-flowcollector-view_{context}"]
= View the FlowCollector resource

-The `FlowCollector` resource can be viewed and modified in the {product-title} web console through the integrated setup, advanced form, or by editing the YAML directly.
+[role="_abstract"]
+View and modify the `FlowCollector` resource in the {product-title} web console through the integrated setup, advanced form, or by editing the YAML directly to configure the Network Observability Operator.

.Procedure
. In the web console, navigate to *Ecosystem* -> *Installed Operators*.
. Under the *Provided APIs* heading for the *NetObserv Operator*, select *Flow Collector*.
. Select *cluster* and then select the *YAML* tab. There, you can modify the `FlowCollector` resource to configure the Network Observability Operator.

The following example shows a sample `FlowCollector` resource for {product-title} Network Observability Operator:
[id="network-observability-flowcollector-configuring-about-sample_{context}"]
.Sample `FlowCollector` resource
[source, yaml]
----
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  namespace: netobserv
  deploymentModel: Direct
  agent:
    type: eBPF <1>
    ebpf:
      sampling: 50 <2>
      logLevel: info
      privileged: false
      resources:
        requests:
          memory: 50Mi
          cpu: 100m
        limits:
          memory: 800Mi
  processor: <3>
    logLevel: info
    resources:
      requests:
        memory: 100Mi
        cpu: 100m
      limits:
        memory: 800Mi
    logTypes: Flows
    advanced:
      conversationEndTimeout: 10s
      conversationHeartbeatInterval: 30s
  loki: <4>
    mode: LokiStack <5>
  consolePlugin:
    register: true
    logLevel: info
    portNaming:
      enable: true
      portNames:
        "3100": loki
    quickFilters: <6>
    - name: Applications
      filter:
        src_namespace!: 'openshift-,netobserv'
        dst_namespace!: 'openshift-,netobserv'
      default: true
    - name: Infrastructure
      filter:
        src_namespace: 'openshift-,netobserv'
        dst_namespace: 'openshift-,netobserv'
    - name: Pods network
      filter:
        src_kind: 'Pod'
        dst_kind: 'Pod'
      default: true
    - name: Services network
      filter:
        dst_kind: 'Service'
----
<1> The Agent specification, `spec.agent.type`, must be `EBPF`. eBPF is the only supported option for {product-title}.
<2> You can set the Sampling specification, `spec.agent.ebpf.sampling`, to manage resources. By default, eBPF sampling is set to `50`, so a flow has a 1 in 50 chance of being sampled. A lower sampling interval value requires more computational, memory, and storage resources. A value of `0` or `1` means all flows are sampled. It is recommended to start with the default value and refine it empirically to determine the optimal setting for your cluster.
<3> The Processor specification, `spec.processor`, can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The `spec.processor.logTypes` values are `Flows`, `Conversations`, `EndedConversations`, or `All`; in this sample it is set to `Flows`. The `spec.processor.advanced` settings, such as `conversationEndTimeout`, tune conversation tracking. Storage requirements are highest for `All` and lowest for `EndedConversations`.
<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki installation paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your installation.
<5> The `LokiStack` mode automatically sets a few configurations: `querierUrl`, `ingesterUrl`, `statusUrl`, and `tenantID`, plus the corresponding TLS configuration. Cluster roles and a cluster role binding are created for reading and writing logs to Loki, and `authToken` is set to `Forward`. You can set these settings manually by using the `Manual` mode.
<6> The `spec.quickFilters` specification defines filters that appear in the web console. The `Application` filter keys, `src_namespace` and `dst_namespace`, are negated (`!`), so the `Application` filter shows all traffic that _does not_ originate from, or have a destination in, any `openshift-` or `netobserv` namespaces. For more information, see Configuring quick filters below.
@@ -5,6 +5,9 @@
[id="network-observability-resource-recommendations_{context}"]
= Resource management and performance considerations

+[role="_abstract"]
+Review the key configuration settings, including eBPF sampling, feature enablement, and resource limits, necessary to manage performance criteria and optimize resource consumption for network observability.

The amount of resources required by network observability depends on the size of your cluster and your requirements for the cluster to ingest and store observability data. To manage resources and set performance criteria for your cluster, consider configuring the following settings. Configuring these settings can help you meet your optimal setup and observability needs.

The following settings can help you manage resources and performance from the outset:
@@ -5,6 +5,9 @@
[id="network-observability-resources-table_{context}"]
= Resource considerations

+[role="_abstract"]
+Review the resource considerations table, which provides baseline examples for configuration settings, such as eBPF memory limits and LokiStack size, tailored to various cluster workload sizes.

The following table outlines examples of resource considerations for clusters with certain workload sizes.

[IMPORTANT]
@@ -5,6 +5,9 @@
[id="network-observability-total-resource-usage-table_{context}"]
= Total average memory and CPU usage

+[role="_abstract"]
+Review the table detailing the total average CPU and memory usage for network observability components under two distinct traffic scenarios (`Test 1` and `Test 2`) at different eBPF sampling values.

The following table outlines averages of total resource usage for clusters with a sampling value of `1` and `50` for two different tests: `Test 1` and `Test 2`. The tests differ in the following ways:

- `Test 1` takes into account high ingress traffic volume in addition to the total number of namespaces, pods, and services in an {product-title} cluster. It places load on the eBPF agent and represents use cases with a high number of workloads for a given cluster size. For example, `Test 1` consists of 76 namespaces, 5153 pods, and 2305 services with a network traffic scale of ~350 MB/s.
@@ -14,6 +14,8 @@ The `FlowCollector` is explicitly created during installation. Since this resour
include::modules/network-observability-flowcollector-view.adoc[leveloffset=+1]

+include::modules/network-observability-flowcollector-example.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* xref:../../observability/network_observability/flowcollector-api.adoc#network-observability-flowcollector-api-specifications_network_observability[FlowCollector API reference]
@@ -29,7 +31,7 @@ include::modules/network-observability-enriched-flows.adoc[leveloffset=+1]
include::modules/network-observability-configuring-FLP-sampling.adoc[leveloffset=+1]

-include::modules/network-observability-con_filter-network-flows-at-ingestion.adoc[leveloffset=+1]
+include::modules/network-observability-filter-network-flows-at-ingestion.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources