:_mod-docs-content-type: REFERENCE
[id="network-observability-flowcollector-api-specifications_{context}"]
= FlowCollector API specifications
Description::
+
--
`FlowCollector` is the schema for the network flows collection API, which pilots and configures the underlying deployments.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `apiVersion`
| `string`
| APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
| `kind`
| `string`
| Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
| `metadata`
| `object`
| Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
| `spec`
| `object`
| Defines the desired state of the FlowCollector resource.
+
+
*: the mention of "unsupported" or "deprecated" for a feature throughout this document means that this feature
is not officially supported by Red Hat. It might have been, for example, contributed by the community
and accepted without a formal agreement for maintenance. The product maintainers might provide some support
for these features as a best effort only.
|===
== .metadata
Description::
+
--
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
--
Type::
`object`
== .spec
Description::
+
--
Defines the desired state of the FlowCollector resource.
+
+
*: the mention of "unsupported" or "deprecated" for a feature throughout this document means that this feature
is not officially supported by Red Hat. It might have been, for example, contributed by the community
and accepted without a formal agreement for maintenance. The product maintainers might provide some support
for these features as a best effort only.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `agent`
| `object`
| Agent configuration for flows extraction.
| `consolePlugin`
| `object`
| `consolePlugin` defines the settings related to the {product-title} Console plugin, when available.
| `deploymentModel`
| `string`
| `deploymentModel` defines the desired type of deployment for flow processing. Possible values are: +
- `Direct` (default) to make the flow processor listen directly from the agents. +
- `Kafka` to make flows sent to a Kafka pipeline before consumption by the processor. +
Kafka can provide better scalability, resiliency, and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka).
| `exporters`
| `array`
| `exporters` define additional optional exporters for custom consumption or storage.
| `kafka`
| `object`
| Kafka configuration, allowing the use of Kafka as a broker as part of the flow collection pipeline. Available when the `spec.deploymentModel` is `Kafka`.
| `loki`
| `object`
| `loki`, the flow store, client settings.
| `namespace`
| `string`
| Namespace where Network Observability pods are deployed.
| `processor`
| `object`
| `processor` defines the settings of the component that receives the flows from the agent,
enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter.
| `prometheus`
| `object`
| `prometheus` defines Prometheus settings, such as querier configuration used to fetch metrics from the Console plugin.
|===
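
For illustration only, a hypothetical `FlowCollector` resource combining some of these top-level settings might look as follows. The API version, resource name, and all values shown are example assumptions, not defaults:

[source,yaml]
----
apiVersion: flows.netobserv.io/v1beta2  # assumed API version, for illustration
kind: FlowCollector
metadata:
  name: cluster                         # assumed resource name
spec:
  namespace: netobserv                  # example namespace for the Network Observability pods
  deploymentModel: Direct               # or Kafka, to send flows through a Kafka pipeline first
  loki:
    enable: true                        # store flows in Loki
  consolePlugin:
    enable: true                        # deploy the console plugin
----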
== .spec.agent
Description::
+
--
Agent configuration for flows extraction.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `ebpf`
| `object`
| `ebpf` describes the settings related to the eBPF-based flow reporter when `spec.agent.type`
is set to `eBPF`.
| `type`
| `string`
| `type` [deprecated (*)] selects the flows tracing agent. Previously, this field allowed selecting between `eBPF` and `IPFIX`.
Only `eBPF` is allowed now, so this field is deprecated and is planned for removal in a future version of the API.
|===
== .spec.agent.ebpf
Description::
+
--
`ebpf` describes the settings related to the eBPF-based flow reporter when `spec.agent.type`
is set to `eBPF`.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `advanced`
| `object`
| `advanced` allows setting some aspects of the internal configuration of the eBPF agent.
This section is mostly aimed at debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
| `cacheActiveTimeout`
| `string`
| `cacheActiveTimeout` is the max period during which the reporter aggregates flows before sending.
Increasing `cacheMaxFlows` and `cacheActiveTimeout` can decrease the network traffic overhead and the CPU load;
however, you can expect higher memory consumption and an increased latency in the flow collection.
| `cacheMaxFlows`
| `integer`
| `cacheMaxFlows` is the max number of flows in an aggregate; when reached, the reporter sends the flows.
Increasing `cacheMaxFlows` and `cacheActiveTimeout` can decrease the network traffic overhead and the CPU load;
however, you can expect higher memory consumption and an increased latency in the flow collection.
| `excludeInterfaces`
| `array (string)`
| `excludeInterfaces` contains the interface names that are excluded from flow tracing.
An entry enclosed by slashes, such as `/br-/`, is matched as a regular expression.
Otherwise it is matched as a case-sensitive string.
| `features`
| `array (string)`
| List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are: +
- `PacketDrop`: enable the packet drops logging feature. This feature requires mounting
the kernel debug filesystem, so the eBPF pod has to run as privileged.
If the `spec.agent.ebpf.privileged` parameter is not set, an error is reported. +
- `DNSTracking`: enable the DNS tracking feature. +
- `FlowRTT`: enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic. +
| `flowFilter`
| `object`
| `flowFilter` defines the eBPF agent configuration regarding flow filtering.
| `imagePullPolicy`
| `string`
| `imagePullPolicy` is the Kubernetes pull policy for the image defined above
| `interfaces`
| `array (string)`
| `interfaces` contains the interface names from where flows are collected. If empty, the agent
fetches all the interfaces in the system, except the ones listed in `excludeInterfaces`.
An entry enclosed by slashes, such as `/br-/`, is matched as a regular expression.
Otherwise it is matched as a case-sensitive string.
| `kafkaBatchSize`
| `integer`
| `kafkaBatchSize` limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.
| `logLevel`
| `string`
| `logLevel` defines the log level for the Network Observability eBPF Agent
| `metrics`
| `object`
| `metrics` defines the eBPF agent configuration regarding metrics.
| `privileged`
| `boolean`
| Privileged mode for the eBPF Agent container. When ignored or set to `false`, the operator sets
granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container.
If for some reason these capabilities cannot be set, such as when an old kernel version that does not support CAP_BPF
is in use, then you can turn on this mode for more global privileges.
Some agent features require the privileged mode, such as packet drops tracking (see `features`) and SR-IOV support.
| `resources`
| `object`
| `resources` are the compute resources required by this container.
For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
| `sampling`
| `integer`
| Sampling rate of the flow reporter. 100 means one flow out of 100 is sent. 0 or 1 means all flows are sampled.
|===
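
As an illustration, a hypothetical `spec.agent.ebpf` fragment tuning some of these fields might look like the following. The values, including the duration format and interface names, are examples rather than recommended defaults:

[source,yaml]
----
spec:
  agent:
    ebpf:
      sampling: 50                 # one flow out of 50 is sampled
      cacheActiveTimeout: 5s       # maximum aggregation period before sending flows
      cacheMaxFlows: 100000        # maximum number of aggregated flows before sending
      excludeInterfaces:
        - lo                       # matched as a plain, case-sensitive string
        - "/br-/"                  # slash-enclosed entries are matched as regular expressions
      privileged: true             # required by some features, such as PacketDrop
      features:
        - PacketDrop
        - DNSTracking
        - FlowRTT
----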
== .spec.agent.ebpf.advanced
Description::
+
--
`advanced` allows setting some aspects of the internal configuration of the eBPF agent.
This section is mostly aimed at debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `env`
| `object (string)`
| `env` allows passing custom environment variables to underlying components. Useful for passing
some very concrete performance-tuning options, such as `GOGC` and `GOMAXPROCS`, that should not be
publicly exposed as part of the FlowCollector descriptor, as they are only useful
in edge debug or support scenarios.
| `scheduling`
| `object`
| `scheduling` controls how the pods are scheduled on nodes.
|===
== .spec.agent.ebpf.advanced.scheduling
Description::
+
--
`scheduling` controls how the pods are scheduled on nodes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `affinity`
| `object`
| If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
| `nodeSelector`
| `object (string)`
| `nodeSelector` allows scheduling of pods only onto nodes that have each of the specified labels.
For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/.
| `priorityClassName`
| `string`
| If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption.
If not specified, default priority is used, or zero if there is no default.
| `tolerations`
| `array`
| `tolerations` is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
|===
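
The following hypothetical fragment shows how `spec.agent.ebpf.advanced` can carry both environment variables and scheduling constraints. The variable value, node label, taint key, and priority class are placeholders chosen for the example:

[source,yaml]
----
spec:
  agent:
    ebpf:
      advanced:
        env:
          GOGC: "400"                              # example Go garbage collection tuning
        scheduling:
          nodeSelector:
            node-role.kubernetes.io/worker: ""     # example node label
          tolerations:
            - key: example.com/dedicated           # placeholder taint key
              operator: Exists
              effect: NoSchedule
          priorityClassName: system-cluster-critical   # example priority class
----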
== .spec.agent.ebpf.advanced.scheduling.affinity
Description::
+
--
If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
--
Type::
`object`
== .spec.agent.ebpf.advanced.scheduling.tolerations
Description::
+
--
`tolerations` is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
--
Type::
`array`
== .spec.agent.ebpf.flowFilter
Description::
+
--
`flowFilter` defines the eBPF agent configuration regarding flow filtering.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `action`
| `string`
| `action` defines the action to perform on the flows that match the filter.
| `cidr`
| `string`
| `cidr` defines the IP CIDR to filter flows by.
Examples: `10.10.10.0/24` or `100:100:100:100::/64`
| `destPorts`
| `integer-or-string`
| `destPorts` defines the destination ports to filter flows by.
To filter a single port, set a single port as an integer value. For example: `destPorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example: `destPorts: "80-100"`.
| `direction`
| `string`
| `direction` defines the direction to filter flows by.
| `enable`
| `boolean`
| Set `enable` to `true` to enable the eBPF flow filtering feature.
| `icmpCode`
| `integer`
| `icmpCode`, for Internet Control Message Protocol (ICMP) traffic, defines the ICMP code to filter flows by.
| `icmpType`
| `integer`
| `icmpType`, for ICMP traffic, defines the ICMP type to filter flows by.
| `peerIP`
| `string`
| `peerIP` defines the IP address to filter flows by.
Example: `10.10.10.10`.
| `ports`
| `integer-or-string`
| `ports` defines the ports to filter flows by. It is used both for source and destination ports.
To filter a single port, set a single port as an integer value. For example: `ports: 80`.
To filter a range of ports, use a "start-end" range in string format. For example: `ports: "80-100"`.
| `protocol`
| `string`
| `protocol` defines the protocol to filter flows by.
| `sourcePorts`
| `integer-or-string`
| `sourcePorts` defines the source ports to filter flows by.
To filter a single port, set a single port as an integer value. For example: `sourcePorts: 80`.
To filter a range of ports, use a "start-end" range in string format. For example: `sourcePorts: "80-100"`.
|===
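
For example, a hypothetical `spec.agent.ebpf.flowFilter` fragment could combine these fields as follows. The `action` and `protocol` values shown are assumptions for illustration, as is the CIDR:

[source,yaml]
----
spec:
  agent:
    ebpf:
      flowFilter:
        enable: true
        action: Accept            # assumed action value, for illustration
        cidr: 10.10.10.0/24       # example CIDR
        protocol: TCP             # assumed protocol value
        ports: "80-100"           # a range as a string; a single port can be an integer, such as 80
----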
== .spec.agent.ebpf.metrics
Description::
+
--
`metrics` defines the eBPF agent configuration regarding metrics.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `disableAlerts`
| `array (string)`
| `disableAlerts` is a list of alerts that should be disabled.
Possible values are: +
`NetObservDroppedFlows`, which is triggered when the eBPF agent is dropping flows, such as when the BPF hashmap is full or the capacity limiter is being triggered. +
| `enable`
| `boolean`
| Set `enable` to `false` to disable eBPF agent metrics collection. It is enabled by default.
| `server`
| `object`
| Metrics server endpoint configuration for the Prometheus scraper.
|===
== .spec.agent.ebpf.metrics.server
Description::
+
--
Metrics server endpoint configuration for the Prometheus scraper.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `port`
| `integer`
| The metrics server HTTP port.
| `tls`
| `object`
| TLS configuration.
|===
== .spec.agent.ebpf.metrics.server.tls
Description::
+
--
TLS configuration.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the provided certificate.
If set to `true`, the `providedCaFile` field is ignored.
| `provided`
| `object`
| TLS configuration when `type` is set to `Provided`.
| `providedCaFile`
| `object`
| Reference to the CA file when `type` is set to `Provided`.
| `type`
| `string`
| Select the type of TLS configuration: +
- `Disabled` (default) to not configure TLS for the endpoint. +
- `Provided` to manually provide cert file and a key file. [Unsupported (*)]. +
- `Auto` to use {product-title} auto generated certificate using annotations.
|===
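
As a sketch, assuming a metrics port of 9400, the agent metrics server TLS can be configured as follows:

[source,yaml]
----
spec:
  agent:
    ebpf:
      metrics:
        enable: true
        server:
          port: 9400              # example metrics port
          tls:
            type: Auto            # use the auto-generated certificate
----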
== .spec.agent.ebpf.metrics.server.tls.provided
Description::
+
--
TLS configuration when `type` is set to `Provided`.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.agent.ebpf.metrics.server.tls.providedCaFile
Description::
+
--
Reference to the CA file when `type` is set to `Provided`.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `file`
| `string`
| File name within the config map or secret.
| `name`
| `string`
| Name of the config map or secret containing the file.
| `namespace`
| `string`
| Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the file reference: `configmap` or `secret`.
|===
== .spec.agent.ebpf.resources
Description::
+
--
`resources` are the compute resources required by this container.
For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `limits`
| `integer-or-string`
| Limits describes the maximum amount of compute resources allowed.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
| `requests`
| `integer-or-string`
| Requests describes the minimum amount of compute resources required.
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
otherwise to an implementation-defined value. Requests cannot exceed Limits.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
|===
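
A minimal, illustrative `spec.agent.ebpf.resources` fragment, with example values only, might look like this:

[source,yaml]
----
spec:
  agent:
    ebpf:
      resources:
        requests:
          cpu: 100m               # example CPU request
          memory: 50Mi            # example memory request
        limits:
          memory: 800Mi           # example memory limit
----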
== .spec.consolePlugin
Description::
+
--
`consolePlugin` defines the settings related to the {product-title} Console plugin, when available.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `advanced`
| `object`
| `advanced` allows setting some aspects of the internal configuration of the console plugin.
This section is mostly aimed at debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
| `autoscaler`
| `object`
| `autoscaler` spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2).
| `enable`
| `boolean`
| Enables the console plugin deployment.
| `imagePullPolicy`
| `string`
| `imagePullPolicy` is the Kubernetes pull policy for the image defined above
| `logLevel`
| `string`
| `logLevel` for the console plugin backend
| `portNaming`
| `object`
| `portNaming` defines the configuration of the port-to-service name translation
| `quickFilters`
| `array`
| `quickFilters` configures quick filter presets for the Console plugin
| `replicas`
| `integer`
| `replicas` defines the number of replicas (pods) to start.
| `resources`
| `object`
| `resources`, in terms of compute resources, required by this container.
For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
|===
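
The following hypothetical `spec.consolePlugin` fragment illustrates some of these fields together; the replica count and pull policy are example values:

[source,yaml]
----
spec:
  consolePlugin:
    enable: true
    replicas: 2                    # example replica count
    imagePullPolicy: IfNotPresent  # standard Kubernetes pull policy value
    portNaming:
      enable: true
      portNames:
        "3100": loki               # example from the portNames description below
----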
== .spec.consolePlugin.advanced
Description::
+
--
`advanced` allows setting some aspects of the internal configuration of the console plugin.
This section is mostly aimed at debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `args`
| `array (string)`
| `args` allows passing custom arguments to underlying components. Useful for overriding
some parameters, such as a URL or a configuration path, that should not be
publicly exposed as part of the FlowCollector descriptor, as they are only useful
in edge debug or support scenarios.
| `env`
| `object (string)`
| `env` allows passing custom environment variables to underlying components. Useful for passing
some very concrete performance-tuning options, such as `GOGC` and `GOMAXPROCS`, that should not be
publicly exposed as part of the FlowCollector descriptor, as they are only useful
in edge debug or support scenarios.
| `port`
| `integer`
| `port` is the plugin service port. Do not use 9002, which is reserved for metrics.
| `register`
| `boolean`
| When `register` is set to `true`, the provided console plugin is automatically registered with the {product-title} Console operator.
When set to `false`, you can still register it manually by editing `console.operator.openshift.io/cluster` with the following command:
`oc patch console.operator.openshift.io cluster --type='json' -p '[{"op": "add", "path": "/spec/plugins/-", "value": "netobserv-plugin"}]'`
| `scheduling`
| `object`
| `scheduling` controls how the pods are scheduled on nodes.
|===
== .spec.consolePlugin.advanced.scheduling
Description::
+
--
`scheduling` controls how the pods are scheduled on nodes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `affinity`
| `object`
| If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
| `nodeSelector`
| `object (string)`
| `nodeSelector` allows scheduling of pods only onto nodes that have each of the specified labels.
For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/.
| `priorityClassName`
| `string`
| If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption.
If not specified, default priority is used, or zero if there is no default.
| `tolerations`
| `array`
| `tolerations` is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
|===
== .spec.consolePlugin.advanced.scheduling.affinity
Description::
+
--
If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
--
Type::
`object`
== .spec.consolePlugin.advanced.scheduling.tolerations
Description::
+
--
`tolerations` is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
--
Type::
`array`
== .spec.consolePlugin.autoscaler
Description::
+
--
`autoscaler` spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2).
--
Type::
`object`
== .spec.consolePlugin.portNaming
Description::
+
--
`portNaming` defines the configuration of the port-to-service name translation
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `enable`
| `boolean`
| Enable the console plugin port-to-service name translation
| `portNames`
| `object (string)`
| `portNames` defines additional port names to use in the console,
for example, `portNames: {"3100": "loki"}`.
|===
== .spec.consolePlugin.quickFilters
Description::
+
--
`quickFilters` configures quick filter presets for the Console plugin
--
Type::
`array`
== .spec.consolePlugin.quickFilters[]
Description::
+
--
`QuickFilter` defines preset configuration for Console's quick filters
--
Type::
`object`
Required::
- `filter`
- `name`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `default`
| `boolean`
| `default` defines whether this filter should be active by default or not
| `filter`
| `object (string)`
| `filter` is a set of keys and values to be set when this filter is selected. Each key can relate to a list of values using a comma-separated string,
for example, `filter: {"src_namespace": "namespace1,namespace2"}`.
| `name`
| `string`
| Name of the filter, as displayed in the Console
|===
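
For illustration, a quick filter entry using the fields above might look as follows; the filter name and namespaces are example values, and the `filter` map reuses the comma-separated format described above:

[source,yaml]
----
spec:
  consolePlugin:
    quickFilters:
      - name: Namespaces of interest                 # name displayed in the Console
        default: true                                # this filter is active by default
        filter:
          src_namespace: "namespace1,namespace2"     # comma-separated list of values
----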
== .spec.consolePlugin.resources
Description::
+
--
`resources`, in terms of compute resources, required by this container.
For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `limits`
| `integer-or-string`
| Limits describes the maximum amount of compute resources allowed.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
| `requests`
| `integer-or-string`
| Requests describes the minimum amount of compute resources required.
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
otherwise to an implementation-defined value. Requests cannot exceed Limits.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
|===
== .spec.exporters
Description::
+
--
`exporters` define additional optional exporters for custom consumption or storage.
--
Type::
`array`
== .spec.exporters[]
Description::
+
--
`FlowCollectorExporter` defines an additional exporter to send enriched flows to.
--
Type::
`object`
Required::
- `type`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `ipfix`
| `object`
| IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to.
| `kafka`
| `object`
| Kafka configuration, such as the address and topic, to send enriched flows to.
| `type`
| `string`
| `type` selects the type of exporter. The available options are `Kafka` and `IPFIX`.
|===
== .spec.exporters[].ipfix
Description::
+
--
IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to.
--
Type::
`object`
Required::
- `targetHost`
- `targetPort`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `targetHost`
| `string`
| Address of the IPFIX external receiver
| `targetPort`
| `integer`
| Port for the IPFIX external receiver
| `transport`
| `string`
| Transport protocol (`TCP` or `UDP`) to be used for the IPFIX connection, defaults to `TCP`.
|===
== .spec.exporters[].kafka
Description::
+
--
Kafka configuration, such as the address and topic, to send enriched flows to.
--
Type::
`object`
Required::
- `address`
- `topic`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `address`
| `string`
| Address of the Kafka server
| `sasl`
| `object`
| SASL authentication configuration. [Unsupported (*)].
| `tls`
| `object`
| TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093.
| `topic`
| `string`
| Kafka topic to use. It must exist. Network Observability does not create it.
|===
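
Putting the exporter fields together, a hypothetical `spec.exporters` list with one Kafka and one IPFIX exporter might look like this. The addresses, topic, and port are example values:

[source,yaml]
----
spec:
  exporters:
    - type: Kafka
      kafka:
        address: kafka-cluster-kafka-bootstrap.netobserv   # example broker address
        topic: netobserv-flows-export                      # the topic must already exist
    - type: IPFIX
      ipfix:
        targetHost: ipfix-collector.example.com            # example receiver address
        targetPort: 4739                                   # example receiver port
        transport: TCP
----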
== .spec.exporters[].kafka.sasl
Description::
+
--
SASL authentication configuration. [Unsupported (*)].
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `clientIDReference`
| `object`
| Reference to the secret or config map containing the client ID
| `clientSecretReference`
| `object`
| Reference to the secret or config map containing the client secret
| `type`
| `string`
| Type of SASL authentication to use, or `Disabled` if SASL is not used
|===
== .spec.exporters[].kafka.sasl.clientIDReference
Description::
+
--
Reference to the secret or config map containing the client ID
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `file`
| `string`
| File name within the config map or secret.
| `name`
| `string`
| Name of the config map or secret containing the file.
| `namespace`
| `string`
| Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the file reference: `configmap` or `secret`.
|===
== .spec.exporters[].kafka.sasl.clientSecretReference
Description::
+
--
Reference to the secret or config map containing the client secret
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `file`
| `string`
| File name within the config map or secret.
| `name`
| `string`
| Name of the config map or secret containing the file.
| `namespace`
| `string`
| Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the file reference: `configmap` or `secret`.
|===
== .spec.exporters[].kafka.tls
Description::
+
--
TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.exporters[].kafka.tls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.exporters[].kafka.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.kafka
Description::
+
--
Kafka configuration, allowing the use of Kafka as a broker as part of the flow collection pipeline. Available when the `spec.deploymentModel` is `Kafka`.
--
Type::
`object`
Required::
- `address`
- `topic`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `address`
| `string`
| Address of the Kafka server
| `sasl`
| `object`
| SASL authentication configuration. [Unsupported (*)].
| `tls`
| `object`
| TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093.
| `topic`
| `string`
| Kafka topic to use. It must exist. Network Observability does not create it.
|===
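
As an illustrative sketch, a `Kafka` deployment model using these fields could look as follows; the broker address, topic, and secret name are assumptions made for the example:

[source,yaml]
----
spec:
  deploymentModel: Kafka
  kafka:
    address: kafka-cluster-kafka-bootstrap.netobserv:9093   # example address; 9093 is generally the TLS port
    topic: network-flows                                    # the topic must already exist
    tls:
      enable: true
      caCert:
        type: secret
        name: kafka-cluster-cluster-ca-cert                 # example secret name
        certFile: ca.crt                                    # example file name within the secret
----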
== .spec.kafka.sasl
Description::
+
--
SASL authentication configuration. [Unsupported (*)].
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `clientIDReference`
| `object`
| Reference to the secret or config map containing the client ID
| `clientSecretReference`
| `object`
| Reference to the secret or config map containing the client secret
| `type`
| `string`
| Type of SASL authentication to use, or `Disabled` if SASL is not used
|===
== .spec.kafka.sasl.clientIDReference
Description::
+
--
Reference to the secret or config map containing the client ID
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `file`
| `string`
| File name within the config map or secret.
| `name`
| `string`
| Name of the config map or secret containing the file.
| `namespace`
| `string`
| Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the file reference: `configmap` or `secret`.
|===
== .spec.kafka.sasl.clientSecretReference
Description::
+
--
Reference to the secret or config map containing the client secret
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `file`
| `string`
| File name within the config map or secret.
| `name`
| `string`
| Name of the config map or secret containing the file.
| `namespace`
| `string`
| Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the file reference: `configmap` or `secret`.
|===
== .spec.kafka.tls
Description::
+
--
TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.kafka.tls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.kafka.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki
Description::
+
--
`loki`, the flow store, client settings.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `advanced`
| `object`
| `advanced` allows setting some aspects of the internal configuration of the Loki clients.
This section is mostly aimed at debugging and fine-grained performance optimizations.
| `enable`
| `boolean`
| Set `enable` to `true` to store flows in Loki.
The Console plugin can use either Loki or Prometheus as a data source for metrics (see also `spec.prometheus.querier`), or both.
Not all queries are transposable from Loki to Prometheus. Hence, if Loki is disabled, some features of the plugin are disabled as well,
such as getting per-pod information or viewing raw flows.
If both Prometheus and Loki are enabled, Prometheus takes precedence and Loki is used as a fallback for queries that Prometheus cannot handle.
If they are both disabled, the Console plugin is not deployed.
| `lokiStack`
| `object`
| Loki configuration for `LokiStack` mode. This is useful for an easy Loki Operator configuration.
It is ignored for other modes.
| `manual`
| `object`
| Loki configuration for `Manual` mode. This is the most flexible configuration.
It is ignored for other modes.
| `microservices`
| `object`
| Loki configuration for `Microservices` mode.
Use this option when Loki is installed using the microservices deployment mode (https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode).
It is ignored for other modes.
| `mode`
| `string`
| `mode` must be set according to the installation mode of Loki: +
- Use `LokiStack` when Loki is managed using the Loki Operator +
- Use `Monolithic` when Loki is installed as a monolithic workload +
- Use `Microservices` when Loki is installed as microservices, but without Loki Operator +
- Use `Manual` if none of the options above match your setup +
| `monolithic`
| `object`
| Loki configuration for `Monolithic` mode.
Use this option when Loki is installed using the monolithic deployment mode (https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode).
It is ignored for other modes.
| `readTimeout`
| `string`
| `readTimeout` is the maximum total time limit for a console plugin Loki query.
A timeout of zero means no timeout.
| `writeBatchSize`
| `integer`
| `writeBatchSize` is the maximum batch size (in bytes) of Loki logs to accumulate before sending.
| `writeBatchWait`
| `string`
| `writeBatchWait` is the maximum time to wait before sending a Loki batch.
| `writeTimeout`
| `string`
| `writeTimeout` is the maximum Loki connection / request time limit.
A timeout of zero means no timeout.
|===
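
For example, a hypothetical `spec.loki` fragment using the `LokiStack` mode might look like the following; the `LokiStack` name and the timeout and batch values are example assumptions:

[source,yaml]
----
spec:
  loki:
    enable: true
    mode: LokiStack                 # Loki is managed by the Loki Operator
    lokiStack:
      name: loki                    # example name of an existing LokiStack resource
    readTimeout: 30s                # example query time limit
    writeBatchWait: 1s              # example maximum wait before sending a batch
    writeBatchSize: 10485760        # example maximum batch size in bytes (10 MiB)
----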
== .spec.loki.advanced
Description::
+
--
`advanced` allows setting some aspects of the internal configuration of the Loki clients.
This section is mostly aimed at debugging and fine-grained performance optimizations.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `staticLabels`
| `object (string)`
| `staticLabels` is a map of common labels to set on each flow in Loki storage.
| `writeMaxBackoff`
| `string`
| `writeMaxBackoff` is the maximum backoff time for Loki client connection between retries.
| `writeMaxRetries`
| `integer`
| `writeMaxRetries` is the maximum number of retries for Loki client connections.
| `writeMinBackoff`
| `string`
| `writeMinBackoff` is the initial backoff time for Loki client connection between retries.
|===
== .spec.loki.lokiStack
Description::
+
--
Loki configuration for `LokiStack` mode. This is useful for an easy Loki Operator configuration.
It is ignored for other modes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `name`
| `string`
| Name of an existing LokiStack resource to use.
| `namespace`
| `string`
| Namespace where this `LokiStack` resource is located. If omitted, it is assumed to be the same as `spec.namespace`.
|===
== .spec.loki.manual
Description::
+
--
Loki configuration for `Manual` mode. This is the most flexible configuration.
It is ignored for other modes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `authToken`
| `string`
| `authToken` describes the way to get a token to authenticate to Loki. +
- `Disabled` does not send any token with the request. +
- `Forward` forwards the user token for authorization. +
- `Host` [deprecated (*)] - uses the local pod service account to authenticate to Loki. +
When using the Loki Operator, this must be set to `Forward`.
| `ingesterUrl`
| `string`
| `ingesterUrl` is the address of an existing Loki ingester service to push the flows to. When using the Loki Operator,
set it to the Loki gateway service with the `network` tenant set in path, for example
https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network.
| `querierUrl`
| `string`
| `querierUrl` specifies the address of the Loki querier service.
When using the Loki Operator, set it to the Loki gateway service with the `network` tenant set in path, for example
https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network.
| `statusTls`
| `object`
| TLS client configuration for Loki status URL.
| `statusUrl`
| `string`
| `statusUrl` specifies the address of the Loki `/ready`, `/metrics` and `/config` endpoints, in case it is different from the
Loki querier URL. If empty, the `querierUrl` value is used.
This is useful to show error messages and some context in the frontend.
When using the Loki Operator, set it to the Loki HTTP query frontend service, for example
https://loki-query-frontend-http.netobserv.svc:3100/.
The `statusTls` configuration is used when `statusUrl` is set.
| `tenantID`
| `string`
| `tenantID` is the Loki `X-Scope-OrgID` header that identifies the tenant for each request.
When using the Loki Operator, set it to `network`, which corresponds to a special tenant mode.
| `tls`
| `object`
| TLS client configuration for Loki URL.
|===
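
The following sketch shows a hypothetical `Manual` mode configuration based on the Loki Operator URLs mentioned above; the TLS config map name and file name are example assumptions:

[source,yaml]
----
spec:
  loki:
    mode: Manual
    manual:
      authToken: Forward            # required when using the Loki Operator
      tenantID: network             # special tenant mode for the Loki Operator
      ingesterUrl: https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network
      querierUrl: https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network
      statusUrl: https://loki-query-frontend-http.netobserv.svc:3100/
      tls:
        enable: true
        caCert:
          type: configmap
          name: loki-gateway-ca-bundle   # example config map name
          certFile: service-ca.crt       # example file name within the config map
----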
== .spec.loki.manual.statusTls
Description::
+
--
TLS client configuration for Loki status URL.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.loki.manual.statusTls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.manual.statusTls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.manual.tls
Description::
+
--
TLS client configuration for Loki URL.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.loki.manual.tls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.manual.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.microservices
Description::
+
--
Loki configuration for `Microservices` mode.
Use this option when Loki is installed using the microservices deployment mode (https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode).
It is ignored for other modes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `ingesterUrl`
| `string`
| `ingesterUrl` is the address of an existing Loki ingester service to push the flows to.
| `querierUrl`
| `string`
| `querierUrl` specifies the address of the Loki querier service.
| `tenantID`
| `string`
| `tenantID` is the Loki `X-Scope-OrgID` header that identifies the tenant for each request.
| `tls`
| `object`
| TLS client configuration for Loki URL.
|===
== .spec.loki.microservices.tls
Description::
+
--
TLS client configuration for Loki URL.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.loki.microservices.tls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.microservices.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.monolithic
Description::
+
--
Loki configuration for `Monolithic` mode.
Use this option when Loki is installed using the monolithic deployment mode (https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode).
It is ignored for other modes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `tenantID`
| `string`
| `tenantID` is the Loki `X-Scope-OrgID` header that identifies the tenant for each request.
| `tls`
| `object`
| TLS client configuration for Loki URL.
| `url`
| `string`
| `url` is the unique address of an existing Loki service that points to both the ingester and the querier.
|===
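
A minimal, illustrative `Monolithic` mode fragment, assuming a Loki service named `loki` in the `netobserv` namespace, might look like this:

[source,yaml]
----
spec:
  loki:
    mode: Monolithic
    monolithic:
      url: http://loki.netobserv.svc:3100/   # example address pointing to both the ingester and the querier
      tenantID: netobserv                    # example tenant identifier
----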
== .spec.loki.monolithic.tls
Description::
+
--
TLS client configuration for Loki URL.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference of the certificate for the Certificate Authority
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.loki.monolithic.tls.caCert
Description::
+
--
`caCert` defines the reference of the certificate for the Certificate Authority
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.loki.monolithic.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.processor
Description::
+
--
`processor` defines the settings of the component that receives the flows from the agent,
enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `addZone`
| `boolean`
| `addZone` allows availability zone awareness by labelling flows with their source and destination zones.
This feature requires the "topology.kubernetes.io/zone" label to be set on nodes.
| `advanced`
| `object`
| `advanced` allows setting some aspects of the internal configuration of the flow processor.
This section is mostly aimed at debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
| `clusterName`
| `string`
| `clusterName` is the name of the cluster to appear in the flows data. This is useful in a multi-cluster context. When using {product-title}, leave it empty so that it is automatically determined.
| `imagePullPolicy`
| `string`
| `imagePullPolicy` is the Kubernetes pull policy for the image defined above
| `kafkaConsumerAutoscaler`
| `object`
| `kafkaConsumerAutoscaler` is the spec of a horizontal pod autoscaler to set up for `flowlogs-pipeline-transformer`, which consumes Kafka messages.
This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2).
| `kafkaConsumerBatchSize`
| `integer`
| `kafkaConsumerBatchSize` indicates to the broker the maximum batch size, in bytes, that the consumer accepts. Ignored when not using Kafka. Default: 10MB.
| `kafkaConsumerQueueCapacity`
| `integer`
| `kafkaConsumerQueueCapacity` defines the capacity of the internal message queue used in the Kafka consumer client. Ignored when not using Kafka.
| `kafkaConsumerReplicas`
| `integer`
| `kafkaConsumerReplicas` defines the number of replicas (pods) to start for `flowlogs-pipeline-transformer`, which consumes Kafka messages.
This setting is ignored when Kafka is disabled.
| `logLevel`
| `string`
| `logLevel` of the processor runtime
| `logTypes`
| `string`
| `logTypes` defines the desired record types to generate. Possible values are: +
- `Flows` (default) to export regular network flows +
- `Conversations` to generate events for started conversations, ended conversations as well as periodic "tick" updates +
- `EndedConversations` to generate only ended conversations events +
- `All` to generate both network flows and all conversations events +
| `metrics`
| `object`
| `metrics` defines the processor configuration regarding metrics
| `multiClusterDeployment`
| `boolean`
| Set `multiClusterDeployment` to `true` to enable the multi-cluster feature. This adds the `clusterName` label to flows data.
| `resources`
| `object`
| `resources` are the compute resources required by this container.
For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
| `subnetLabels`
| `object`
| `subnetLabels` allows defining custom labels on subnets and IPs, or enabling automatic labelling of recognized subnets in {product-title}, which is used to identify cluster-external traffic.
When a subnet matches the source or destination IP of a flow, a corresponding field is added: `SrcSubnetLabel` or `DstSubnetLabel`.
|===
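The following illustrative excerpt of a `FlowCollector` resource sets some of the processor fields described above. The values shown are examples only, not recommended settings.

[source,yaml]
----
spec:
  processor:
    logTypes: Conversations       # generate conversation events instead of plain flow records
    addZone: true                 # requires the "topology.kubernetes.io/zone" label on nodes
    logLevel: info
    kafkaConsumerReplicas: 3      # only used when spec.deploymentModel is Kafka
----
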
== .spec.processor.advanced
Description::
+
--
`advanced` allows setting some aspects of the internal configuration of the flow processor.
This section is aimed mostly for debugging and fine-grained performance optimizations,
such as `GOGC` and `GOMAXPROCS` env vars. Set these values at your own risk.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `conversationEndTimeout`
| `string`
| `conversationEndTimeout` is the time to wait after a network flow is received, to consider the conversation ended.
This delay is ignored when a FIN packet is collected for TCP flows (see `conversationTerminatingTimeout` instead).
| `conversationHeartbeatInterval`
| `string`
| `conversationHeartbeatInterval` is the time to wait between "tick" events of a conversation
| `conversationTerminatingTimeout`
| `string`
| `conversationTerminatingTimeout` is the time to wait from detected FIN flag to end a conversation. Only relevant for TCP flows.
| `dropUnusedFields`
| `boolean`
| `dropUnusedFields` [deprecated (*)]: this setting is not used anymore.
| `enableKubeProbes`
| `boolean`
| `enableKubeProbes` is a flag to enable or disable Kubernetes liveness and readiness probes
| `env`
| `object (string)`
| `env` allows passing custom environment variables to underlying components. Useful for passing
some very concrete performance-tuning options, such as `GOGC` and `GOMAXPROCS`, that should not be
publicly exposed as part of the FlowCollector descriptor, as they are only useful
in edge debug or support scenarios.
| `healthPort`
| `integer`
| `healthPort` is a collector HTTP port in the Pod that exposes the health check API
| `port`
| `integer`
| Port of the flow collector (host port).
By convention, some values are forbidden. It must be greater than 1024 and different from
4500, 4789 and 6081.
| `profilePort`
| `integer`
| `profilePort` allows setting up a Go pprof profiler listening to this port
| `scheduling`
| `object`
| `scheduling` controls how the pods are scheduled on nodes.
|===
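As an illustration of the advanced settings, the following excerpt tunes an environment variable, the conversation timers, and the collector port. All values are examples only, to be adjusted at your own risk.

[source,yaml]
----
spec:
  processor:
    advanced:
      env:
        GOGC: "400"                        # example performance-tuning variable
      conversationEndTimeout: 10s
      conversationHeartbeatInterval: 30s
      port: 2055                           # must be greater than 1024 and not 4500, 4789, or 6081
----
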
== .spec.processor.advanced.scheduling
Description::
+
--
`scheduling` controls how the pods are scheduled on nodes.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `affinity`
| `object`
| If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
| `nodeSelector`
| `object (string)`
| `nodeSelector` allows scheduling of pods only onto nodes that have each of the specified labels.
For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/.
| `priorityClassName`
| `string`
| If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption.
If not specified, default priority is used, or zero if there is no default.
| `tolerations`
| `array`
| `tolerations` is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
|===
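The following excerpt shows how scheduling constraints might be expressed. The node label, taint key, and priority class are hypothetical examples and must match objects that exist in your cluster.

[source,yaml]
----
spec:
  processor:
    advanced:
      scheduling:
        nodeSelector:
          node-role.kubernetes.io/worker: ""   # example node label
        tolerations:
        - key: example-taint                   # hypothetical taint key
          operator: Exists
          effect: NoSchedule
        priorityClassName: system-cluster-critical
----
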
== .spec.processor.advanced.scheduling.affinity
Description::
+
--
If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
--
Type::
`object`
== .spec.processor.advanced.scheduling.tolerations
Description::
+
--
`tolerations` is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling.
--
Type::
`array`
== .spec.processor.kafkaConsumerAutoscaler
Description::
+
--
`kafkaConsumerAutoscaler` is the spec of a horizontal pod autoscaler to set up for `flowlogs-pipeline-transformer`, which consumes Kafka messages.
This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2).
--
Type::
`object`
== .spec.processor.metrics
Description::
+
--
`metrics` defines the processor configuration regarding metrics.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `disableAlerts`
| `array (string)`
| `disableAlerts` is a list of alerts that should be disabled.
Possible values are: +
`NetObservNoFlows`, which is triggered when no flows are being observed for a certain period. +
`NetObservLokiError`, which is triggered when flows are being dropped due to Loki errors. +
| `includeList`
| `array (string)`
| `includeList` is a list of metric names to specify which ones to generate.
The names correspond to the names in Prometheus without the prefix. For example,
`namespace_egress_packets_total` shows up as `netobserv_namespace_egress_packets_total` in Prometheus.
Note that the more metrics you add, the bigger the impact on Prometheus workload resources.
Metrics enabled by default are:
`namespace_flows_total`, `node_ingress_bytes_total`, `workload_ingress_bytes_total`, `namespace_drop_packets_total` (when `PacketDrop` feature is enabled),
`namespace_rtt_seconds` (when `FlowRTT` feature is enabled), `namespace_dns_latency_seconds` (when `DNSTracking` feature is enabled).
More information, with full list of available metrics: https://github.com/netobserv/network-observability-operator/blob/main/docs/Metrics.md
| `server`
| `object`
| Metrics server endpoint configuration for Prometheus scraper
|===
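For example, the following excerpt restricts the generated metrics to a short include list and disables one of the alerts. The metric and alert names are taken from the lists documented above.

[source,yaml]
----
spec:
  processor:
    metrics:
      includeList:
        - namespace_flows_total
        - workload_ingress_bytes_total   # exposed as netobserv_workload_ingress_bytes_total in Prometheus
      disableAlerts:
        - NetObservLokiError
----
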
== .spec.processor.metrics.server
Description::
+
--
Metrics server endpoint configuration for Prometheus scraper
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `port`
| `integer`
| The metrics server HTTP port.
| `tls`
| `object`
| TLS configuration.
|===
== .spec.processor.metrics.server.tls
Description::
+
--
TLS configuration.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the provided certificate.
If set to `true`, the `providedCaFile` field is ignored.
| `provided`
| `object`
| TLS configuration when `type` is set to `Provided`.
| `providedCaFile`
| `object`
| Reference to the CA file when `type` is set to `Provided`.
| `type`
| `string`
| Select the type of TLS configuration: +
- `Disabled` (default) to not configure TLS for the endpoint. +
- `Provided` to manually provide a certificate file and a key file. [Unsupported (*)]. +
- `Auto` to use the {product-title} auto-generated certificate using annotations.
|===
== .spec.processor.metrics.server.tls.provided
Description::
+
--
TLS configuration when `type` is set to `Provided`.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.processor.metrics.server.tls.providedCaFile
Description::
+
--
Reference to the CA file when `type` is set to `Provided`.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `file`
| `string`
| File name within the config map or secret.
| `name`
| `string`
| Name of the config map or secret containing the file.
| `namespace`
| `string`
| Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the file reference: `configmap` or `secret`.
|===
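Putting the TLS settings together, the following excerpt configures the metrics server with manually provided certificates. The port, secret name, config map name, and file names are hypothetical and must match resources that you create.

[source,yaml]
----
spec:
  processor:
    metrics:
      server:
        port: 9401                       # example port
        tls:
          type: Provided                 # [Unsupported (*)]
          provided:
            type: secret
            name: metrics-server-tls     # hypothetical secret name
            certFile: tls.crt
            certKey: tls.key
          providedCaFile:
            type: configmap
            name: metrics-server-ca      # hypothetical config map name
            file: ca.crt
----
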
== .spec.processor.resources
Description::
+
--
`resources` are the compute resources required by this container.
For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `limits`
| `integer-or-string`
| Limits describes the maximum amount of compute resources allowed.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
| `requests`
| `integer-or-string`
| Requests describes the minimum amount of compute resources required.
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
otherwise to an implementation-defined value. Requests cannot exceed Limits.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
|===
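For example, the following excerpt requests modest compute resources for the processor. The amounts are illustrative only and should be sized for your actual traffic.

[source,yaml]
----
spec:
  processor:
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        memory: 800Mi
----
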
== .spec.processor.subnetLabels
Description::
+
--
`subnetLabels` allows defining custom labels on subnets and IPs, or enabling automatic labelling of recognized subnets in {product-title}, which is used to identify cluster-external traffic.
When a subnet matches the source or destination IP of a flow, a corresponding field is added: `SrcSubnetLabel` or `DstSubnetLabel`.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `customLabels`
| `array`
| `customLabels` allows customizing the labelling of subnets and IPs, such as to identify cluster-external workloads or web services.
If you enable `openShiftAutoDetect`, `customLabels` can override the detected subnets in case they overlap.
| `openShiftAutoDetect`
| `boolean`
| When set to `true`, `openShiftAutoDetect` automatically detects the machine, pod, and service subnets based on the
{product-title} install configuration and the Cluster Network Operator configuration. Indirectly, this is a way to accurately detect
external traffic: flows that are not labeled for those subnets are external to the cluster. Enabled by default on {product-title}.
|===
== .spec.processor.subnetLabels.customLabels
Description::
+
--
`customLabels` allows customizing the labelling of subnets and IPs, such as to identify cluster-external workloads or web services.
If you enable `openShiftAutoDetect`, `customLabels` can override the detected subnets in case they overlap.
--
Type::
`array`
== .spec.processor.subnetLabels.customLabels[]
Description::
+
--
`SubnetLabel` allows labelling subnets and IPs, such as to identify cluster-external workloads or web services.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `cidrs`
| `array (string)`
| List of CIDRs, such as `["1.2.3.4/32"]`.
| `name`
| `string`
| Label name, used to flag matching flows.
|===
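For example, the following excerpt keeps the automatic subnet detection and adds two custom labels. The label names and CIDRs are illustrative only.

[source,yaml]
----
spec:
  processor:
    subnetLabels:
      openShiftAutoDetect: true
      customLabels:
        - name: ExternalDNS        # hypothetical label, set as SrcSubnetLabel or DstSubnetLabel on matching flows
          cidrs:
            - "8.8.8.8/32"
        - name: CorporateNetwork   # hypothetical label
          cidrs:
            - "10.0.0.0/8"
----
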
== .spec.prometheus
Description::
+
--
`prometheus` defines Prometheus settings, such as querier configuration used to fetch metrics from the Console plugin.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `querier`
| `object`
| Prometheus querying configuration, such as client settings, used in the Console plugin.
|===
== .spec.prometheus.querier
Description::
+
--
Prometheus querying configuration, such as client settings, used in the Console plugin.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `enable`
| `boolean`
| When `enable` is `true`, the Console plugin queries flow metrics from Prometheus instead of Loki whenever possible.
It is enabled by default: set it to `false` to disable this feature.
The Console plugin can use either Loki or Prometheus as a data source for metrics (see also `spec.loki`), or both.
Not all queries are transposable from Loki to Prometheus. Hence, if Loki is disabled, some features of the plugin are disabled as well,
such as getting per-pod information or viewing raw flows.
If both Prometheus and Loki are enabled, Prometheus takes precedence and Loki is used as a fallback for queries that Prometheus cannot handle.
If they are both disabled, the Console plugin is not deployed.
| `manual`
| `object`
| Prometheus configuration for `Manual` mode.
| `mode`
| `string`
| `mode` must be set according to the type of Prometheus installation that stores Network Observability metrics: +
- Use `Auto` to try configuring automatically. In {product-title}, it uses the Thanos querier from {product-title} Cluster Monitoring +
- Use `Manual` for a manual setup +
| `timeout`
| `string`
| `timeout` is the read timeout for console plugin queries to Prometheus.
A timeout of zero means no timeout.
|===
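For example, the following excerpt keeps Prometheus as the preferred metrics source with automatic configuration and a custom read timeout. The values are examples only.

[source,yaml]
----
spec:
  prometheus:
    querier:
      enable: true
      mode: Auto      # tries to use the Thanos querier from cluster monitoring
      timeout: 30s
----
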
== .spec.prometheus.querier.manual
Description::
+
--
Prometheus configuration for `Manual` mode.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `forwardUserToken`
| `boolean`
| Set to `true` to forward the logged-in user token in queries to Prometheus.
| `tls`
| `object`
| TLS client configuration for Prometheus URL.
| `url`
| `string`
| `url` is the address of an existing Prometheus service to use for querying metrics.
|===
== .spec.prometheus.querier.manual.tls
Description::
+
--
TLS client configuration for Prometheus URL.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `caCert`
| `object`
| `caCert` defines the reference to the certificate of the Certificate Authority.
| `enable`
| `boolean`
| Enable TLS
| `insecureSkipVerify`
| `boolean`
| `insecureSkipVerify` allows skipping client-side verification of the server certificate.
If set to `true`, the `caCert` field is ignored.
| `userCert`
| `object`
| `userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
|===
== .spec.prometheus.querier.manual.tls.caCert
Description::
+
--
`caCert` defines the reference to the certificate of the Certificate Authority.
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
== .spec.prometheus.querier.manual.tls.userCert
Description::
+
--
`userCert` defines the user certificate reference and is used for mTLS (you can ignore it when using one-way TLS)
--
Type::
`object`
[cols="1,1,1",options="header"]
|===
| Property | Type | Description
| `certFile`
| `string`
| `certFile` defines the path to the certificate file name within the config map or secret.
| `certKey`
| `string`
| `certKey` defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.
| `name`
| `string`
| Name of the config map or secret containing certificates.
| `namespace`
| `string`
| Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed.
If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
| `type`
| `string`
| Type for the certificate reference: `configmap` or `secret`.
|===
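To illustrate the `Manual` mode, the following excerpt points the Console plugin to an existing Prometheus-compatible endpoint with TLS and user token forwarding. The URL and config map name are hypothetical and must be adapted to your environment.

[source,yaml]
----
spec:
  prometheus:
    querier:
      mode: Manual
      manual:
        url: https://thanos-querier.openshift-monitoring.svc:9091   # example URL
        forwardUserToken: true
        tls:
          enable: true
          caCert:
            type: configmap
            name: prometheus-ca        # hypothetical config map name
            certFile: service-ca.crt
----
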