CLO - ClusterLogging Status messages when cluster logging is not functional
@@ -747,6 +747,9 @@ Topics:
    - Name: Viewing Elasticsearch status
      File: efk-logging-elasticsearch-status
      Distros: openshift-enterprise,openshift-origin
    - Name: Viewing cluster logging status
      File: efk-logging-cluster-status
      Distros: openshift-enterprise,openshift-origin
    - Name: Manually rolling out Elasticsearch
      File: efk-logging-manual-rollout
      Distros: openshift-enterprise,openshift-origin
@@ -49,3 +49,4 @@ modules/efk-logging-elasticsearch-persistent-storage-persistent-dynamic.adoc[lev
modules/efk-logging-elasticsearch-persistent-storage-local.adoc[leveloffset=+2]
////
@@ -1,5 +1,5 @@
-:context: efk-logging-elasticsearch
-[id="efk-logging-clo-status"]
+:context: efk-logging-cluster-status
+[id="efk-logging-cluster-status"]
 = Viewing cluster logging status
 include::modules/common-attributes.adoc[]

modules/efk-logging-clo-status-comp.adoc (new file, 86 lines)
@@ -0,0 +1,86 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-cluster-status.adoc

[id="efk-logging-clo-status-example-{context}"]
= Viewing the status of cluster logging components

You can view the status for a number of cluster logging components.

.Prerequisites

* Cluster logging and Elasticsearch must be installed.

.Procedure

. Change to the `openshift-logging` project.
+
----
$ oc project openshift-logging
----
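+
If you prefer not to switch projects, you can instead pass the namespace explicitly on each command with the standard `-n` flag. This is a minimal sketch under that assumption, not part of the original procedure; the remaining steps omit the flag because they assume the current project is `openshift-logging`:
+
----
$ oc get deployment cluster-logging-operator -n openshift-logging
----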

. View the status of the cluster logging deployment:
+
----
$ oc describe deployment cluster-logging-operator
----
+
The output includes the following status information:
+
----
Name:                   cluster-logging-operator

....

Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable

....

Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  62m   deployment-controller  Scaled up replica set cluster-logging-operator-574b8987df to 1
----
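+
If you only need the deployment conditions rather than the full description, a JSONPath query is a compact alternative. This is a minimal sketch, not part of the original procedure; it assumes the same `cluster-logging-operator` deployment name shown above:
+
----
$ oc get deployment cluster-logging-operator \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'
----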

. View the status of the cluster logging ReplicaSet:

.. Get the name of a ReplicaSet:
+
----
$ oc get replicaset

NAME                                      DESIRED   CURRENT   READY   AGE
cluster-logging-operator-574b8987df       1         1         1       159m
elasticsearch-cdm-uhr537yu-1-6869694fb    1         1         1       157m
elasticsearch-cdm-uhr537yu-2-857b6d676f   1         1         1       156m
elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1         1         1       155m
kibana-5bd5544f87                         1         1         1       157m
----
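+
To cross-check that the pods backing these ReplicaSets are running, and to see which nodes they were scheduled on, you can list the pods with wide output. This is a minimal sketch, not part of the original procedure; pod names will differ in your cluster:
+
----
$ oc get pods -o wide
----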

.. Get the status of the ReplicaSet:
+
----
$ oc describe replicaset cluster-logging-operator-574b8987df
----
+
The output includes the following status information:
+
----
Name:           cluster-logging-operator-574b8987df

....

Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed

....

Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  66m   replicaset-controller  Created pod: cluster-logging-operator-574b8987df-qjhqv
----
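+
To drill down one level further, you can describe the pod that the ReplicaSet created. This is a minimal sketch, not part of the original procedure; it uses the pod name from the example events above, so substitute the name reported in your own cluster:
+
----
$ oc describe pod cluster-logging-operator-574b8987df-qjhqv
----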

modules/efk-logging-clo-status.adoc (new file, 256 lines)
@@ -0,0 +1,256 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-cluster-status.adoc

[id="efk-logging-clo-status_{context}"]
= Viewing the status of the Cluster Logging Operator

You can view the status of your Cluster Logging Operator.

.Prerequisites

* Cluster logging and Elasticsearch must be installed.

.Procedure

. Change to the `openshift-logging` project.
+
----
$ oc project openshift-logging
----

. To view the cluster logging status:

.. Get the cluster logging status:
+
----
$ oc get clusterlogging instance -o yaml
----
+
The output includes information similar to the following:
+
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging

....

status:  <1>
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd  <2>
        nodes:
          fluentd-2rhqp: ip-10-0-169-13.ec2.internal
          fluentd-6fgjh: ip-10-0-165-244.ec2.internal
          fluentd-6l2ff: ip-10-0-128-218.ec2.internal
          fluentd-54nx5: ip-10-0-139-30.ec2.internal
          fluentd-flpnn: ip-10-0-147-228.ec2.internal
          fluentd-n2frh: ip-10-0-157-45.ec2.internal
        pods:
          failed: []
          notReady: []
          ready:
          - fluentd-2rhqp
          - fluentd-54nx5
          - fluentd-6fgjh
          - fluentd-6l2ff
          - fluentd-flpnn
          - fluentd-n2frh
      rsyslogStatus:
        Nodes: null
        daemonSet: ""
        pods: null
  curation:  <3>
    curatorStatus:
    - cronJobs: curator
      schedules: 30 3 * * *
      suspended: false
  logstore:  <4>
    elasticsearchStatus:
    - ShardAllocationEnabled: all
      cluster:
        activePrimaryShards: 5
        activeShards: 5
        initializingShards: 0
        numDataNodes: 1
        numNodes: 1
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-mkkdys93-1:
      nodeCount: 1
      pods:
        client:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
        data:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
        master:
          failed:
          notReady:
          ready:
          - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
  visualization:  <5>
    kibanaStatus:
    - deployment: kibana
      pods:
        failed: []
        notReady: []
        ready:
        - kibana-7fb4fd4cc9-f2nls
      replicaSets:
      - kibana-7fb4fd4cc9
      replicas: 1
----
<1> In the output, the cluster status fields appear in the `status` stanza.
<2> Information on the Fluentd pods.
<3> Information on the Curator pods.
<4> Information on the Elasticsearch pods, including the Elasticsearch cluster health: `green`, `yellow`, or `red`.
<5> Information on the Kibana pods.
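
If you only want the Elasticsearch cluster health reported in this status, you can query the field directly. This is a minimal sketch, not part of the original procedure; the field path follows the example output above and might differ in other releases:

----
$ oc get clusterlogging instance \
    -o jsonpath='{.status.logstore.elasticsearchStatus[0].cluster.status}'
----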

[id="efk-logging-clo-status-message_{context}"]
== Example condition messages

The following are examples of some condition messages from the `Status.Nodes` section of the cluster logging instance.

// https://github.com/openshift/elasticsearch-operator/pull/92

A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node:

----
nodes:
- conditions:
  - lastTransitionTime: 2019-03-15T15:57:22Z
    message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not
      be allocated on this node.
    reason: Disk Watermark Low
    status: "True"
    type: NodeStorage
  deploymentName: example-elasticsearch-clientdatamaster-0-1
  upgradeStatus: {}
----

A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes:

----
nodes:
- conditions:
  - lastTransitionTime: 2019-03-15T16:04:45Z
    message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated
      from this node.
    reason: Disk Watermark High
    status: "True"
    type: NodeStorage
  deploymentName: cluster-logging-operator
  upgradeStatus: {}
----
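
When you see either watermark condition, checking actual disk usage inside an Elasticsearch pod can confirm the problem. This is a minimal sketch, not taken from the original module; it assumes the container is named `elasticsearch` and that the data volume is mounted at `/elasticsearch/persistent`, and you must substitute one of your own Elasticsearch pod names:

----
$ oc exec elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c \
    -c elasticsearch -- df -h /elasticsearch/persistent
----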

A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster:

----
Elasticsearch Status:
  Shard Allocation Enabled:  shard allocation unknown
  Cluster:
    Active Primary Shards:  0
    Active Shards:          0
    Initializing Shards:    0
    Num Data Nodes:         0
    Num Nodes:              0
    Pending Tasks:          0
    Relocating Shards:      0
    Status:                 cluster health unknown
    Unassigned Shards:      0
  Cluster Name:             elasticsearch
  Node Conditions:
    elasticsearch-cdm-mkkdys93-1:
      Last Transition Time:  2019-06-26T03:37:32Z
      Message:               0/5 nodes are available: 5 node(s) didn't match node selector.
      Reason:                Unschedulable
      Status:                True
      Type:                  Unschedulable
    elasticsearch-cdm-mkkdys93-2:
  Node Count:  2
  Pods:
    Client:
      Failed:
      Not Ready:
        elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
        elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
      Ready:
    Data:
      Failed:
      Not Ready:
        elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
        elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
      Ready:
    Master:
      Failed:
      Not Ready:
        elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49
        elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl
      Ready:
----
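
To see which labels your nodes actually carry, and therefore why a node selector matched nothing, you can list the nodes with their labels. This is a minimal sketch, not part of the original module:

----
$ oc get nodes --show-labels
----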

A status message similar to the following indicates that the requested PVC could not bind to a PV:

----
Node Conditions:
  elasticsearch-cdm-mkkdys93-1:
    Last Transition Time:  2019-06-26T03:37:32Z
    Message:               pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
    Reason:                Unschedulable
    Status:                True
    Type:                  Unschedulable
----
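
An unbound PVC usually means that no matching PersistentVolume or storage class is available. A quick way to inspect the claims and the storage classes in the cluster is shown below; this is a minimal sketch, not part of the original module:

----
$ oc get pvc -n openshift-logging
$ oc get storageclass
----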

A status message similar to the following indicates that the Curator pod cannot be scheduled because the node selector did not match any nodes:

----
Curation:
  Curator Status:
    Cluster Condition:
      curator-1561518900-cjx8d:
        Last Transition Time:  2019-06-26T03:20:08Z
        Reason:                Completed
        Status:                True
        Type:                  ContainerTerminated
      curator-1561519200-zqxxj:
        Last Transition Time:  2019-06-26T03:20:01Z
        Message:               0/5 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match node selector.
        Reason:                Unschedulable
        Status:                True
        Type:                  Unschedulable
    Cron Jobs:  curator
    Schedules:  */5 * * * *
    Suspended:  false
----
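
Because Curator runs from a cron job (named `curator` in the status above), you can inspect the cron job and its recent jobs to see how often the pods fail to schedule. This is a minimal sketch, not part of the original module:

----
$ oc get cronjob curator -n openshift-logging
$ oc get jobs -n openshift-logging
----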

A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:

----
Status:
  Collection:
    Logs:
      Fluentd Status:
        Daemon Set:  fluentd
        Nodes:
        Pods:
          Failed:
          Not Ready:
          Ready:
      Rsyslog Status:
        Nodes:       <nil>
        Daemon Set:
        Pods:        <nil>
----
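
Because Fluentd runs as the `fluentd` daemon set shown in the status, describing that daemon set shows how many pods are desired versus scheduled and surfaces the same scheduling events. This is a minimal sketch, not part of the original module:

----
$ oc get daemonset fluentd -n openshift-logging
$ oc describe daemonset fluentd -n openshift-logging
----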