Mirror of https://github.com/openshift/openshift-docs.git
OBSDOCS-432: Documentation updates for changes to topology spread constraints for monitoring
Commit 289974da4a (parent bdbdba9022), committed by openshift-cherrypick-robot
@@ -0,0 +1,180 @@
// Module included in the following assemblies:
//
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
[id="configuring-pod-topology-spread-constraints_{context}"]
= Configuring pod topology spread constraints

You can configure pod topology spread constraints for
ifndef::openshift-dedicated,openshift-rosa[]
all the pods deployed by the Cluster Monitoring Operator
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
all the pods for user-defined monitoring
endif::openshift-dedicated,openshift-rosa[]
to control how pod replicas are scheduled to nodes across zones.
This helps ensure that the pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You can configure pod topology spread constraints for monitoring pods by using
ifndef::openshift-dedicated,openshift-rosa[]
the `cluster-monitoring-config` or
endif::openshift-dedicated,openshift-rosa[]
the `user-workload-monitoring-config` config map.
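
Before you edit a config map, you can confirm that it exists. This quick check is a sketch that is not part of the original procedure; run only the command that matches the stack you are configuring:

[source,terminal]
----
$ oc -n openshift-monitoring get configmap cluster-monitoring-config
$ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
----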

.Prerequisites

ifndef::openshift-dedicated,openshift-rosa[]
* *If you are configuring pods for core {product-title} monitoring:*
** You have access to the cluster as a user with the `cluster-admin` cluster role.
** You have created the `cluster-monitoring-config` `ConfigMap` object. A minimal creation sketch follows this prerequisites list.
* *If you are configuring pods for user-defined monitoring:*
** A cluster administrator has enabled monitoring for user-defined projects.
** You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
* You have access to the cluster as a user with the `dedicated-admin` role.
* The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created.
endif::openshift-dedicated,openshift-rosa[]

* You have installed the OpenShift CLI (`oc`).
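
If the `cluster-monitoring-config` `ConfigMap` object does not exist yet, the following is a minimal sketch of how you might create it, assuming an empty starting configuration; it is not part of the original procedure:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
----

Save the sketch to a file, for example `cluster-monitoring-config.yaml`, and apply it with `oc apply -f cluster-monitoring-config.yaml`.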

.Procedure

ifndef::openshift-dedicated,openshift-rosa[]
* *To configure pod topology spread constraints for core {product-title} monitoring:*

. Edit the `cluster-monitoring-config` config map in the `openshift-monitoring` project:
+
[source,terminal]
----
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
----

. Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    <component>: # <1>
      topologySpreadConstraints:
      - maxSkew: <n> # <2>
        topologyKey: <key> # <3>
        whenUnsatisfiable: <value> # <4>
        labelSelector: # <5>
          <match_option>
----
<1> Specify the name of the component for which you want to set up pod topology spread constraints.
<2> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
<3> Specify a key of node labels for `topologyKey`.
Nodes that have a label with this key and identical values are considered to be in the same topology.
The scheduler tries to put a balanced number of pods into each domain.
<4> Specify a value for `whenUnsatisfiable`.
Available options are `DoNotSchedule` and `ScheduleAnyway`.
Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum.
For example, if `maxSkew` is `1` and matching pods are spread across two zones that currently run two pods and one pod, a new pod can be placed only in the zone with one pod, because placing it in the other zone would increase the difference to two.
Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<5> Specify `labelSelector` to find matching pods.
Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
+
.Example configuration for Prometheus
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: monitoring
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: prometheus
----
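+
The `monitoring` value used for `topologyKey` in this example is an assumption: the constraint only takes effect if your nodes actually carry a `monitoring` label. As a hedged sketch, you could inspect existing node labels and apply a placeholder label as follows, where `worker-0` and `zone-a` are illustrative values:
+
[source,terminal]
----
$ oc get nodes --show-labels
$ oc label node worker-0 monitoring=zone-a
----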

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `cluster-monitoring-config` config map, the pods and other resources in the `openshift-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
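+
As an optional check that is not part of the original procedure, you can verify where the affected pods landed. The label selector below assumes the Prometheus example above:
+
[source,terminal]
----
$ oc -n openshift-monitoring get pods -l app.kubernetes.io/name=prometheus -o wide
----
+
The `-o wide` output includes a `NODE` column, so you can see whether the replicas were spread across the expected nodes.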

* *To configure pod topology spread constraints for user-defined monitoring:*
endif::openshift-dedicated,openshift-rosa[]

. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` project:
+
[source,terminal]
----
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
----

. Add the following settings under the `data/config.yaml` field to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    <component>: # <1>
      topologySpreadConstraints:
      - maxSkew: <n> # <2>
        topologyKey: <key> # <3>
        whenUnsatisfiable: <value> # <4>
        labelSelector: # <5>
          <match_option>
----
<1> Specify the name of the component for which you want to set up pod topology spread constraints.
<2> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
<3> Specify a key of node labels for `topologyKey`.
Nodes that have a label with this key and identical values are considered to be in the same topology.
The scheduler tries to put a balanced number of pods into each domain.
<4> Specify a value for `whenUnsatisfiable`.
Available options are `DoNotSchedule` and `ScheduleAnyway`.
Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum.
Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<5> Specify `labelSelector` to find matching pods.
Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.
+
.Example configuration for Thanos Ruler
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: monitoring
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: thanos-ruler
----

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `user-workload-monitoring-config` config map, the pods and other resources in the `openshift-user-workload-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
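+
As another optional, hedged check, you can confirm that the constraint was propagated into the generated pods. The label selector assumes the Thanos Ruler example above:
+
[source,terminal]
----
$ oc -n openshift-user-workload-monitoring get pods -l app.kubernetes.io/name=thanos-ruler -o yaml
----
+
Look for a `topologySpreadConstraints` stanza under `spec` in the output.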
@@ -1,69 +0,0 @@
// Module included in the following assemblies:
//
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
[id="setting-up-pod-topology-spread-constraints-for-alertmanager_{context}"]
= Setting up pod topology spread constraints for Alertmanager

For core {product-title} platform monitoring, you can set up pod topology spread constraints for Alertmanager to fine-tune how pod replicas are scheduled to nodes across zones.
Doing so helps ensure that Alertmanager pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You configure pod topology spread constraints for Alertmanager in the `cluster-monitoring-config` config map.

.Prerequisites

* You have access to the cluster as a user with the `cluster-admin` cluster role.
* You have created the `cluster-monitoring-config` `ConfigMap` object.
* You have installed the OpenShift CLI (`oc`).

.Procedure

. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` namespace:
+
[source,terminal]
----
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
----

. Add values for the following settings under `data/config.yaml/alertmanagerMain` to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      topologySpreadConstraints:
      - maxSkew: 1 <1>
        topologyKey: monitoring <2>
        whenUnsatisfiable: DoNotSchedule <3>
        labelSelector:
          matchLabels: <4>
            app.kubernetes.io/name: alertmanager
----
<1> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
This field is required, and the value must be greater than zero.
The value specified has a different effect depending on what value you specify for `whenUnsatisfiable`.
<2> Specify a key of node labels for `topologyKey`.
This field is required.
Nodes that have a label with this key and identical values are considered to be in the same topology.
The scheduler will try to put a balanced number of pods into each domain.
<3> Specify a value for `whenUnsatisfiable`.
This field is required.
Available options are `DoNotSchedule` and `ScheduleAnyway`.
Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum.
Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<4> Specify a value for `matchLabels`. This value is used to identify the set of matching pods to which to apply the constraints.

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `cluster-monitoring-config` config map, the pods and other resources in the `openshift-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
@@ -1,69 +0,0 @@
// Module included in the following assemblies:
//
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
[id="setting-up-pod-topology-spread-constraints-for-prometheus_{context}"]
= Setting up pod topology spread constraints for Prometheus

For core {product-title} platform monitoring, you can set up pod topology spread constraints for Prometheus to fine-tune how pod replicas are scheduled to nodes across zones.
Doing so helps ensure that Prometheus pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You configure pod topology spread constraints for Prometheus in the `cluster-monitoring-config` config map.

.Prerequisites

* You have access to the cluster as a user with the `cluster-admin` cluster role.
* You have created the `cluster-monitoring-config` `ConfigMap` object.
* You have installed the OpenShift CLI (`oc`).

.Procedure

. Edit the `cluster-monitoring-config` `ConfigMap` object in the `openshift-monitoring` namespace:
+
[source,terminal]
----
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
----

. Add values for the following settings under `data/config.yaml/prometheusK8s` to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      topologySpreadConstraints:
      - maxSkew: 1 <1>
        topologyKey: monitoring <2>
        whenUnsatisfiable: DoNotSchedule <3>
        labelSelector:
          matchLabels: <4>
            app.kubernetes.io/name: prometheus
----
<1> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed.
This field is required, and the value must be greater than zero.
The value specified has a different effect depending on what value you specify for `whenUnsatisfiable`.
<2> Specify a key of node labels for `topologyKey`.
This field is required.
Nodes that have a label with this key and identical values are considered to be in the same topology.
The scheduler will try to put a balanced number of pods into each domain.
<3> Specify a value for `whenUnsatisfiable`.
This field is required.
Available options are `DoNotSchedule` and `ScheduleAnyway`.
Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum.
Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<4> Specify a value for `matchLabels`. This value is used to identify the set of matching pods to which to apply the constraints.

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `cluster-monitoring-config` config map, the pods and other resources in the `openshift-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
@@ -1,67 +0,0 @@
// Module included in the following assemblies:
//
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: PROCEDURE
[id="setting-up-pod-topology-spread-constraints-for-thanos-ruler_{context}"]
= Setting up pod topology spread constraints for Thanos Ruler

For user-defined monitoring, you can set up pod topology spread constraints for Thanos Ruler to fine-tune how pod replicas are scheduled to nodes across zones.
Doing so helps ensure that Thanos Ruler pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones.

You configure pod topology spread constraints for Thanos Ruler in the `user-workload-monitoring-config` config map.

.Prerequisites

ifndef::openshift-dedicated,openshift-rosa[]
* A cluster administrator has enabled monitoring for user-defined projects.
* You have access to the cluster as a user with the `cluster-admin` cluster role, or as a user with the `user-workload-monitoring-config-edit` role in the `openshift-user-workload-monitoring` project.
* You have created the `user-workload-monitoring-config` `ConfigMap` object.
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
* You have access to the cluster as a user with the `dedicated-admin` role.
* The `user-workload-monitoring-config` `ConfigMap` object exists. This object is created by default when the cluster is created.
endif::openshift-dedicated,openshift-rosa[]
* You have installed the OpenShift CLI (`oc`).

.Procedure

. Edit the `user-workload-monitoring-config` config map in the `openshift-user-workload-monitoring` namespace:
+
[source,terminal]
----
$ oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config
----

. Add values for the following settings under `data/config.yaml/thanosRuler` to configure pod topology spread constraints:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    thanosRuler:
      topologySpreadConstraints:
      - maxSkew: 1 <1>
        topologyKey: monitoring <2>
        whenUnsatisfiable: ScheduleAnyway <3>
        labelSelector:
          matchLabels: <4>
            app.kubernetes.io/name: thanos-ruler
----
<1> Specify a numeric value for `maxSkew`, which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for `whenUnsatisfiable`.
<2> Specify a key of node labels for `topologyKey`. This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain.
<3> Specify a value for `whenUnsatisfiable`. This field is required. Available options are `DoNotSchedule` and `ScheduleAnyway`. Specify `DoNotSchedule` if you want the `maxSkew` value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify `ScheduleAnyway` if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew.
<4> Specify a value for `matchLabels`. This value is used to identify the set of matching pods to which to apply the constraints.

. Save the file to apply the changes automatically.
+
[WARNING]
====
When you save changes to the `user-workload-monitoring-config` config map, the pods and other resources in the `openshift-user-workload-monitoring` project might be redeployed.
The running monitoring processes in that project might also restart.
====
@@ -3,17 +3,17 @@
// * observability/monitoring/configuring-the-monitoring-stack.adoc

:_mod-docs-content-type: CONCEPT
[id="configuring_pod_topology_spread_constraintsfor_monitoring_{context}"]
= Configuring pod topology spread constraints for monitoring
[id="using-pod-topology-spread-constraints-for-monitoring_{context}"]
= Using pod topology spread constraints for monitoring

You can use pod topology spread constraints to control how
ifndef::openshift-dedicated,openshift-rosa[]
Prometheus, Thanos Ruler, and Alertmanager
the monitoring pods
endif::openshift-dedicated,openshift-rosa[]
ifdef::openshift-dedicated,openshift-rosa[]
Thanos Ruler
the pods for user-defined monitoring
endif::openshift-dedicated,openshift-rosa[]
pods are spread across a network topology when {product-title} pods are deployed in multiple availability zones.
are spread across a network topology when {product-title} pods are deployed in multiple availability zones.

Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions.
Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios.
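
The following is an illustrative sketch, not part of this module, of what a single topology spread constraint looks like on a pod spec. It assumes the standard well-known node label `topology.kubernetes.io/zone` and a hypothetical `app: example` pod label:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: example
  containers:
  - name: example
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
----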
@@ -101,7 +101,7 @@ ifndef::openshift-dedicated,openshift-rosa[]
* xref:../../nodes/scheduling/nodes-scheduler-pod-affinity.adoc[Placing pods relative to other pods using affinity and anti-affinity rules]
* xref:../../nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.adoc[Controlling pod placement by using pod topology spread constraints]
endif::openshift-dedicated,openshift-rosa[]
* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#configuring_pod_topology_spread_constraintsfor_monitoring_configuring-the-monitoring-stack[Configuring pod topology spread constraints for monitoring]
* xref:../../observability/monitoring/configuring-the-monitoring-stack.adoc#using-pod-topology-spread-constraints-for-monitoring_configuring-the-monitoring-stack[Using pod topology spread constraints for monitoring]
* link:https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector[Kubernetes documentation about node selectors]

include::modules/monitoring-moving-monitoring-components-to-different-nodes.adoc[leveloffset=+2]
@@ -256,8 +256,8 @@ ifndef::openshift-dedicated,openshift-rosa[]
* xref:../../observability/monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
endif::openshift-dedicated,openshift-rosa[]

// Configuring topology spread constraints for monitoring components
include::modules/monitoring-configuring-pod-topology-spread-constraints-for-monitoring.adoc[leveloffset=+1]
// Using topology spread constraints for monitoring components
include::modules/monitoring-using-pod-topology-spread-constraints-for-monitoring.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
@@ -268,12 +268,8 @@ ifndef::openshift-dedicated,openshift-rosa[]
endif::openshift-dedicated,openshift-rosa[]
* link:https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/[Kubernetes Pod Topology Spread Constraints documentation]

ifndef::openshift-dedicated,openshift-rosa[]
include::modules/monitoring-setting-up-pod-topology-spread-constraints-for-prometheus.adoc[leveloffset=+2]
include::modules/monitoring-setting-up-pod-topology-spread-constraints-for-alertmanager.adoc[leveloffset=+2]
endif::openshift-dedicated,openshift-rosa[]

include::modules/monitoring-setting-up-pod-topology-spread-constraints-for-thanos-ruler.adoc[leveloffset=+2]
// Configuring pod topology spread constraints
include::modules/monitoring-configuring-pod-topology-spread-constraints.adoc[leveloffset=+2]

// Setting log levels for monitoring components
include::modules/monitoring-setting-log-levels-for-monitoring-components.adoc[leveloffset=+1]