//Module included in the following assemblies:
//
// * observability/cluster_observability_operator/cluster-observability-operator-overview.adoc

:_mod-docs-content-type: PROCEDURE
[id="server-side-apply_{context}"]
= Using Server-Side Apply to customize Prometheus resources

Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites.

Compared to Client-Side Apply, Server-Side Apply is more declarative and tracks field management instead of the last applied state.

Server-Side Apply:: Enables declarative configuration management by updating a resource's state without needing to delete and recreate it.

Field management:: Users can specify which fields of a resource they want to update, without affecting the other fields.

Managed fields:: Kubernetes stores metadata about who manages each field of an object in the `managedFields` field within the object metadata.

Conflicts:: If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite the value, relinquish control, or share management.

Merge strategy:: Server-Side Apply merges fields based on the actor who manages them.
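
For example, the following command performs a Server-Side Apply and records a field manager name of your choice for the fields in the manifest. The manifest filename and the manager name are placeholders for illustration, not values used later in this procedure:

[source,terminal]
----
$ oc apply -f <manifest>.yaml --server-side --field-manager=<manager_name>
----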

.Procedure

. Add a `MonitoringStack` resource using the following configuration:
+
.Example `MonitoringStack` object
[source,yaml]
----
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  labels:
    coo: example
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  logLevel: debug
  retention: 1d
  resourceSelector:
    matchLabels:
      app: demo
----
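+
Save the configuration to a file and apply it by running the following command. The filename `sample-monitoring-stack.yaml` is an example; use the name of your file:
+
[source,terminal]
----
$ oc apply -f sample-monitoring-stack.yaml
----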

. A Prometheus resource named `sample-monitoring-stack` is generated in the `coo-demo` namespace. Retrieve the managed fields of the generated Prometheus resource by running the following command:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields
----
+
.Example output
[source,yaml]
----
managedFields:
- apiVersion: monitoring.rhobs/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:
      f:labels:
        f:app.kubernetes.io/managed-by: {}
        f:app.kubernetes.io/name: {}
        f:app.kubernetes.io/part-of: {}
      f:ownerReferences:
        k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {}
    f:spec:
      f:additionalScrapeConfigs: {}
      f:affinity:
        f:podAntiAffinity:
          f:requiredDuringSchedulingIgnoredDuringExecution: {}
      f:alerting:
        f:alertmanagers: {}
      f:arbitraryFSAccessThroughSMs: {}
      f:logLevel: {}
      f:podMetadata:
        f:labels:
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/part-of: {}
      f:podMonitorSelector: {}
      f:replicas: {}
      f:resources:
        f:limits:
          f:cpu: {}
          f:memory: {}
        f:requests:
          f:cpu: {}
          f:memory: {}
      f:retention: {}
      f:ruleSelector: {}
      f:rules:
        f:alert: {}
      f:securityContext:
        f:fsGroup: {}
        f:runAsNonRoot: {}
        f:runAsUser: {}
      f:serviceAccountName: {}
      f:serviceMonitorSelector: {}
      f:thanos:
        f:baseImage: {}
        f:resources: {}
        f:version: {}
      f:tsdb: {}
  manager: observability-operator
  operation: Apply
- apiVersion: monitoring.rhobs/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:status:
      .: {}
      f:availableReplicas: {}
      f:conditions:
        .: {}
        k:{"type":"Available"}:
          .: {}
          f:lastTransitionTime: {}
          f:observedGeneration: {}
          f:status: {}
          f:type: {}
        k:{"type":"Reconciled"}:
          .: {}
          f:lastTransitionTime: {}
          f:observedGeneration: {}
          f:status: {}
          f:type: {}
      f:paused: {}
      f:replicas: {}
      f:shardStatuses:
        .: {}
        k:{"shardID":"0"}:
          .: {}
          f:availableReplicas: {}
          f:replicas: {}
          f:shardID: {}
          f:unavailableReplicas: {}
          f:updatedReplicas: {}
      f:unavailableReplicas: {}
      f:updatedReplicas: {}
  manager: PrometheusOperator
  operation: Update
  subresource: status
----

. Check the `metadata.managedFields` values, and observe that some fields in `metadata` and `spec` are managed by the `MonitoringStack` resource.
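+
For example, the following command summarizes which field managers own parts of the object. The `jsonpath` expression is a minimal sketch for this inspection, and the exact output depends on your cluster:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs sample-monitoring-stack --show-managed-fields -o jsonpath='{range .metadata.managedFields[*]}{.manager}{"\t"}{.operation}{"\n"}{end}'
----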

. Modify a field that is not controlled by the `MonitoringStack` resource:

.. Change `spec.enforcedSampleLimit`, which is a field not set by the `MonitoringStack` resource. Create the file `prom-spec-edited.yaml`:
+
.`prom-spec-edited.yaml`
[source,yaml]
----
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  enforcedSampleLimit: 1000
----
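+
Optionally, preview the change without persisting it by running a server-side dry run. The `--dry-run=server` flag is standard `oc apply` behavior rather than a required step in this procedure:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side --dry-run=server
----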

.. Apply the YAML by running the following command:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side
----
+
[NOTE]
====
You must use the `--server-side` flag.
====

.. Get the changed `Prometheus` object, and note that the `managedFields` section now contains one more entry, which includes `spec.enforcedSampleLimit`:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields
----
+
.Example output
[source,yaml]
----
managedFields: <1>
- apiVersion: monitoring.rhobs/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:
      f:labels:
        f:app.kubernetes.io/managed-by: {}
        f:app.kubernetes.io/name: {}
        f:app.kubernetes.io/part-of: {}
    f:spec:
      f:enforcedSampleLimit: {} <2>
  manager: kubectl
  operation: Apply
----
<1> A new entry for the default `kubectl` field manager is added to `managedFields`.
<2> The new entry manages the `spec.enforcedSampleLimit` field.

. Modify a field that is managed by the `MonitoringStack` resource:

.. Change `spec.logLevel`, which is a field managed by the `MonitoringStack` resource, by editing `prom-spec-edited.yaml` to use the following YAML configuration:
+
[source,yaml]
----
# changing the logLevel from debug to info
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  enforcedSampleLimit: 1000 # keep the field that this manifest already manages
  logLevel: info <1>
----
<1> `spec.logLevel` has been added

.. Apply the YAML by running the following command:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side
----
+
.Example output
[source,terminal]
----
error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
----

.. Notice that the field `spec.logLevel` cannot be changed by using Server-Side Apply, because it is already managed by `observability-operator`.
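+
As the error message suggests, one way to resolve the conflict without forcing it is to remove the conflicting field from your manifest and keep only the fields that you intend to manage. For example, the following variant of `prom-spec-edited.yaml`, shown here only as an illustration, sets `spec.enforcedSampleLimit` but not `spec.logLevel`, and therefore applies without conflict:
+
[source,yaml]
----
apiVersion: monitoring.rhobs/v1
kind: Prometheus
metadata:
  name: sample-monitoring-stack
  namespace: coo-demo
spec:
  enforcedSampleLimit: 1000
----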

.. Use the `--force-conflicts` flag to force the change:
+
[source,terminal]
----
$ oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts
----
+
.Example output
[source,terminal]
----
prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied
----
+
With the `--force-conflicts` flag, the field can be forced to change. However, because the same field is also managed by the `MonitoringStack` resource, the Observability Operator detects the change and reverts it to the value set by the `MonitoringStack` resource.
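+
To verify the revert, query the log level again after the Operator has had time to reconcile the resource. The expected value `debug` comes from the `MonitoringStack` spec defined earlier in this procedure:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'
----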
+
[NOTE]
====
Some Prometheus fields generated by the `MonitoringStack` resource are influenced by the fields in the `MonitoringStack` `spec` stanza, for example, `logLevel`. These can be changed by changing the `MonitoringStack` `spec`.
====

.. To change the `logLevel` in the `Prometheus` object, apply the following YAML to change the `MonitoringStack` resource:
+
[source,yaml]
----
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: sample-monitoring-stack
  labels:
    coo: example
spec:
  logLevel: info
----

.. To confirm that the change has taken place, query for the log level by running the following command:
+
[source,terminal]
----
$ oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'
----
+
.Example output
[source,terminal]
----
info
----

[NOTE]
====
. If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden.
+
For example, suppose that you are managing the field `enforcedSampleLimit`, which is not generated by the `MonitoringStack` resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for `enforcedSampleLimit`, this overrides the value that you have previously set.

. The `Prometheus` object generated by the `MonitoringStack` resource might contain some fields that are not explicitly set by the monitoring stack. These fields appear because they have default values.
====