// Module included in the following assemblies:
//
// * machine_management/creating-infrastructure-machinesets.adoc
// * logging/cluster-logging-moving.adoc
[id="infrastructure-moving-logging_{context}"]
= Moving the cluster logging resources
You can configure the Cluster Logging Operator to deploy the pods
for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator to different nodes.
You cannot move the Cluster Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of
high CPU, memory, and disk requirements.
[NOTE]
====
You should set your MachineSet to use at least 6 replicas.
====
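
Before you begin, you can confirm the replica count of your infrastructure machine sets. This is a minimal check, assuming the machine sets live in the default `openshift-machine-api` namespace:

[source,terminal]
----
$ oc get machinesets -n openshift-machine-api
----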

.Prerequisites

* Cluster logging and Elasticsearch must be installed. These features are not installed by default.

.Procedure

. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
+
[source,terminal]
----
$ oc edit ClusterLogging instance
----
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector: <1>
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector: <1>
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector: <1>
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
....
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use `<key>: <value>` pairs, based on the value specified for the node.

.Verification steps

To verify that a component has moved, you can use the `oc get pod -o wide` command. For example:

* You want to move the Kibana pod from the `ip-10-0-147-79.us-east-2.compute.internal` node:
+
[source,terminal]
----
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
----
+
.Example output
[source,terminal]
----
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
----
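+
If you only need the node name, you can extract it with a JSONPath template instead of reading the wide output:
+
[source,terminal]
----
$ oc get pod kibana-5b8bdf44f9-ccpq9 -o jsonpath='{.spec.nodeName}'
----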
* You want to move the Kibana pod to the `ip-10-0-139-48.us-east-2.compute.internal` node, a dedicated infrastructure node:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.19.0
ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.19.0
ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.19.0
ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.19.0
ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.19.0
ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.19.0
ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.19.0
+
Note that the node has a `node-role.kubernetes.io/infra: ''` label:
+
[source,terminal]
----
$ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
----
+
.Example output
[source,yaml]
----
kind: Node
apiVersion: v1
metadata:
  name: ip-10-0-139-48.us-east-2.compute.internal
  selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
  uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
  resourceVersion: '39083'
  creationTimestamp: '2020-04-13T19:07:55Z'
  labels:
    node-role.kubernetes.io/infra: ''
....
----
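+
If your cluster has many nodes, you can list only the nodes that carry this label by using a label selector:
+
[source,terminal]
----
$ oc get nodes -l node-role.kubernetes.io/infra
----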
* To move the Kibana pod, edit the Cluster Logging CR to add a node selector:
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
spec:
....
  visualization:
    kibana:
      nodeSelector: <1>
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
----
<1> Add a node selector to match the label in the node specification.
* After you save the CR, the current Kibana pod is terminated and a new pod is deployed:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME                                            READY   STATUS        RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
fluentd-42dzz                                   1/1     Running       0          28m
fluentd-d74rq                                   1/1     Running       0          28m
fluentd-m5vr9                                   1/1     Running       0          28m
fluentd-nkxl7                                   1/1     Running       0          28m
fluentd-pdvqb                                   1/1     Running       0          28m
fluentd-tflh6                                   1/1     Running       0          28m
kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
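+
To follow the replacement as it happens, you can watch the pod list until the old pod is removed:
+
[source,terminal]
----
$ oc get pods -w
----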
* The new pod is on the `ip-10-0-139-48.us-east-2.compute.internal` node:
+
[source,terminal]
----
$ oc get pod kibana-7d85dcffc8-bfpfp -o wide
----
+
.Example output
[source,terminal]
----
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
----
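+
To confirm the overall placement, you can also list every pod in the project that is scheduled on the infrastructure node by using the standard `--field-selector` flag:
+
[source,terminal]
----
$ oc get pods -o wide --field-selector spec.nodeName=ip-10-0-139-48.us-east-2.compute.internal
----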
* After a few moments, the original Kibana pod is removed.
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
fluentd-42dzz                                   1/1     Running   0          29m
fluentd-d74rq                                   1/1     Running   0          29m
fluentd-m5vr9                                   1/1     Running   0          29m
fluentd-nkxl7                                   1/1     Running   0          29m
fluentd-pdvqb                                   1/1     Running   0          29m
fluentd-tflh6                                   1/1     Running   0          29m
kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
----