// Automatically generated by 'openshift-apidocs-gen'. Do not edit.
:_mod-docs-content-type: ASSEMBLY
[id="monitoring-apis"]
= Monitoring APIs
:toc: macro
:toc-title:
toc::[]
== Alertmanager [monitoring.coreos.com/v1]
Description::
+
--
The `Alertmanager` custom resource definition (CRD) defines a desired link:https://prometheus.io/docs/alerting[Alertmanager] setup to run in a Kubernetes cluster. It allows you to specify options such as the number of replicas and persistent storage.
For each `Alertmanager` resource, the Operator deploys a `StatefulSet` in the same namespace. When there are two or more configured replicas, the Operator runs the Alertmanager instances in high-availability mode.
The resource uses label and namespace selectors to define which `AlertmanagerConfig` objects are associated with the deployed Alertmanager instances.
--
Type::
`object`
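
The following is a minimal, illustrative `Alertmanager` manifest, not a prescriptive configuration; the name, namespace, label, and field values are placeholders, and only a small subset of the available `spec` fields is shown.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
  namespace: example-namespace
spec:
  replicas: 3                    # two or more replicas run in high-availability mode
  alertmanagerConfigSelector:    # selects AlertmanagerConfig objects by label
    matchLabels:
      alertmanagerConfig: example
----
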
== AlertmanagerConfig [monitoring.coreos.com/v1beta1]
Description::
+
--
The `AlertmanagerConfig` custom resource definition (CRD) defines how `Alertmanager` objects process Prometheus alerts. It allows you to specify alert grouping and routing, notification receivers, and inhibition rules.
`Alertmanager` objects select `AlertmanagerConfig` objects using label and namespace selectors.
--
Type::
`object`
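
A minimal `AlertmanagerConfig` sketch follows; the label, receiver name, and webhook URL are illustrative placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example
  namespace: example-namespace
  labels:
    alertmanagerConfig: example  # matched by an Alertmanager alertmanagerConfigSelector
spec:
  route:
    receiver: example-webhook    # default receiver for matching alerts
    groupBy: ['alertname']
  receivers:
  - name: example-webhook
    webhookConfigs:
    - url: 'https://webhook.example.com/'
----
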
== AlertRelabelConfig [monitoring.openshift.io/v1]
Description::
+
--
AlertRelabelConfig defines a set of relabel configs for alerts.
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer).
--
Type::
`object`
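
For illustration, the following sketch rewrites the severity of a hypothetical alert; the source labels, regex, and replacement values are assumptions chosen for the example.

[source,yaml]
----
apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: example-relabel
  namespace: openshift-monitoring
spec:
  configs:
  - sourceLabels: [alertname, severity]  # label values are joined and matched against the regex
    regex: "ExampleAlert;warning"        # illustrative match expression
    targetLabel: severity                # label to rewrite when the regex matches
    replacement: critical
    action: Replace
----
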
== AlertingRule [monitoring.openshift.io/v1]
Description::
+
--
AlertingRule represents a set of user-defined Prometheus rule groups containing
alerting rules. This resource is the supported method for cluster administrators to
create alerts based on metrics recorded by the platform monitoring stack in
OpenShift, that is, the Prometheus instance deployed to the `openshift-monitoring`
namespace. You might use this to create custom alerting rules not shipped with
OpenShift based on metrics from components such as `node_exporter`, which
provides machine-level metrics such as CPU usage, or `kube-state-metrics`, which
provides metrics on Kubernetes usage.
The API is mostly compatible with the upstream PrometheusRule type from the
prometheus-operator. The primary difference is that recording rules are not
allowed here; only alerting rules are permitted. For each AlertingRule resource
created, a corresponding PrometheusRule object is created in the `openshift-monitoring`
namespace. OpenShift requires administrators to use the AlertingRule resource rather
than the upstream type in order to allow better OpenShift-specific defaulting
and validation, while not modifying the upstream APIs directly.
You can find upstream API documentation for PrometheusRule resources here:
https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer).
--
Type::
`object`
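
A minimal user-defined alerting rule might look like the following sketch; the group name, alert name, expression, and labels are placeholders.

[source,yaml]
----
apiVersion: monitoring.openshift.io/v1
kind: AlertingRule
metadata:
  name: example
  namespace: openshift-monitoring
spec:
  groups:
  - name: example-rules
    rules:
    - alert: ExampleAlert
      expr: vector(1)    # PromQL expression; the alert fires while it returns a result
      for: 1m
      labels:
        severity: warning
      annotations:
        message: This is an example alert.
----
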
== DataGather [insights.openshift.io/v1alpha2]
Description::
+
--
DataGather provides data gathering configuration options and status for a particular Insights data gathering run.
Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support.
--
Type::
`object`
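
Because the gathering options are version-specific, the following sketch shows only a bare on-demand request with an assumed name and omits all `spec` fields; consult the full DataGather schema for the available options.

[source,yaml]
----
apiVersion: insights.openshift.io/v1alpha2
kind: DataGather
metadata:
  name: on-demand-gather-example  # placeholder name; spec fields omitted
----
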
== PodMonitor [monitoring.coreos.com/v1]
Description::
+
--
The `PodMonitor` custom resource definition (CRD) defines how `Prometheus` and `PrometheusAgent` can scrape metrics from a group of pods.
Among other things, it allows you to specify:
* The pods to scrape via label selectors.
* The container ports to scrape.
* Authentication credentials to use.
* Target and metric relabeling.
`Prometheus` and `PrometheusAgent` objects select `PodMonitor` objects using label and namespace selectors.
--
Type::
`object`
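
A minimal `PodMonitor` sketch follows; the pod label and port name are placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example
  namespace: example-namespace
spec:
  selector:
    matchLabels:
      app: example-app   # pods to scrape, selected by label
  podMetricsEndpoints:
  - port: metrics        # name of the container port to scrape
    interval: 30s
----
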
== Probe [monitoring.coreos.com/v1]
Description::
+
--
The `Probe` custom resource definition (CRD) defines how to scrape metrics from prober exporters such as the link:https://github.com/prometheus/blackbox_exporter[blackbox exporter].
The `Probe` resource requires two pieces of information:
* The list of probed addresses, which can be defined statically or by discovering Kubernetes Ingress objects.
* The prober, which exposes the availability of the probed endpoints (over various protocols such as HTTP, TCP, or ICMP) as Prometheus metrics.
`Prometheus` and `PrometheusAgent` objects select `Probe` objects using label and namespace selectors.
--
Type::
`object`
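
The following sketch probes a single static target through an assumed blackbox exporter service; the module name, prober address, and target URL are placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: example
  namespace: example-namespace
spec:
  module: http_2xx                                      # module defined in the prober configuration
  prober:
    url: blackbox-exporter.example-namespace.svc:9115   # address of the prober service
  targets:
    staticConfig:
      static:
      - https://www.example.com                         # statically defined probed address
----
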
== Prometheus [monitoring.coreos.com/v1]
Description::
+
--
The `Prometheus` custom resource definition (CRD) defines a desired link:https://prometheus.io/docs/prometheus[Prometheus] setup to run in a Kubernetes cluster. It allows you to specify options such as the number of replicas, persistent storage, and the Alertmanagers to which firing alerts are sent.
For each `Prometheus` resource, the Operator deploys one or several `StatefulSet` objects in the same namespace. The number of `StatefulSet` objects is equal to the number of shards, which is 1 by default.
The resource uses label and namespace selectors to define which `ServiceMonitor`, `PodMonitor`, `Probe`, and `PrometheusRule` objects are associated with the deployed Prometheus instances.
The Operator continuously reconciles the scrape and rules configuration, and a sidecar container running in the Prometheus pods triggers a reload of the configuration when needed.
--
Type::
`object`
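
A minimal `Prometheus` sketch follows; the selector label and Alertmanager service details are placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: example-namespace
spec:
  replicas: 2
  serviceMonitorSelector:           # selects ServiceMonitor objects by label
    matchLabels:
      team: example
  alerting:
    alertmanagers:
    - namespace: example-namespace  # where firing alerts are sent
      name: alertmanager-operated   # Service in front of the Alertmanager pods
      port: web
----
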
== PrometheusRule [monitoring.coreos.com/v1]
Description::
+
--
The `PrometheusRule` custom resource definition (CRD) defines link:https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/[alerting] and link:https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/[recording] rules to be evaluated by `Prometheus` or `ThanosRuler` objects.
`Prometheus` and `ThanosRuler` objects select `PrometheusRule` objects using label and namespace selectors.
--
Type::
`object`
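
The following sketch combines one recording rule and one alerting rule; the metric names, expressions, and threshold are placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example
  namespace: example-namespace
spec:
  groups:
  - name: example.rules
    rules:
    - record: job:http_requests:rate5m   # recording rule
      expr: sum(rate(http_requests_total[5m])) by (job)
    - alert: HighRequestRate             # alerting rule
      expr: job:http_requests:rate5m > 100
      for: 10m
      labels:
        severity: warning
----
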
== ServiceMonitor [monitoring.coreos.com/v1]
Description::
+
--
The `ServiceMonitor` custom resource definition (CRD) defines how `Prometheus` and `PrometheusAgent` can scrape metrics from a group of services.
Among other things, it allows you to specify:
* The services to scrape via label selectors.
* The container ports to scrape.
* Authentication credentials to use.
* Target and metric relabeling.
`Prometheus` and `PrometheusAgent` objects select `ServiceMonitor` objects using label and namespace selectors.
--
Type::
`object`
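
A minimal `ServiceMonitor` sketch follows; the service label and port name are placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example
  namespace: example-namespace
spec:
  selector:
    matchLabels:
      app: example-app   # services to scrape, selected by label
  endpoints:
  - port: metrics        # named port to scrape
    interval: 30s
----
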
== ThanosRuler [monitoring.coreos.com/v1]
Description::
+
--
The `ThanosRuler` custom resource definition (CRD) defines a desired link:https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md[Thanos Ruler] setup to run in a Kubernetes cluster.
A `ThanosRuler` instance requires at least one compatible Prometheus API endpoint (either Thanos Querier or Prometheus services).
The resource uses label and namespace selectors to define which `PrometheusRule` objects are associated with the deployed Thanos Ruler instances.
--
Type::
`object`
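
The following sketch points a Thanos Ruler at an assumed query endpoint and selects rules by label; the endpoint address and label are placeholders.

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example
  namespace: example-namespace
spec:
  queryEndpoints:
  - dnssrv+_http._tcp.thanos-querier.example-namespace.svc  # Prometheus-compatible query API
  ruleSelector:                    # selects PrometheusRule objects by label
    matchLabels:
      role: example-rules
----
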
== NodeMetrics [metrics.k8s.io/v1beta1]
Description::
+
--
NodeMetrics reports the resource usage metrics of a node.
--
Type::
`object`
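
NodeMetrics objects are served read-only by the metrics API rather than created by users; the following shows the approximate shape of a returned object, with placeholder values.

[source,yaml]
----
apiVersion: metrics.k8s.io/v1beta1
kind: NodeMetrics
metadata:
  name: example-node
timestamp: "2024-01-01T00:00:00Z"  # time at which the metrics were collected
window: 10s                        # duration over which usage was averaged
usage:
  cpu: 250m
  memory: 1024Mi
----
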
== PodMetrics [metrics.k8s.io/v1beta1]
Description::
+
--
PodMetrics reports the resource usage metrics of a pod.
--
Type::
`object`
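
Like NodeMetrics, PodMetrics objects are read-only; the following shows the approximate shape of a returned object, with placeholder values.

[source,yaml]
----
apiVersion: metrics.k8s.io/v1beta1
kind: PodMetrics
metadata:
  name: example-pod
  namespace: example-namespace
timestamp: "2024-01-01T00:00:00Z"  # time at which the metrics were collected
window: 10s                        # duration over which usage was averaged
containers:
- name: example-container
  usage:
    cpu: 100m
    memory: 256Mi
----
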