// Module included in the following assemblies:
//
// * metering/metering-troubleshooting-debugging.adoc

[id="metering-debugging_{context}"]
= Debugging metering

Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect to and query Presto and Hive, as well as how to view the dashboards of the Presto and HDFS components.

[NOTE]
====
All of the commands in this section assume you have installed metering through OperatorHub in the `openshift-metering` namespace.
====

== Get reporting operator logs

The following command follows the logs of the `reporting-operator`:

[source,bash]
----
$ kubectl -n openshift-metering logs -f "$(kubectl -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator
----
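
If the `reporting-operator` Pod has restarted, the logs from the previous container instance are often more useful than the current ones. As a minimal sketch, this is the same command with the standard `--previous` flag added and `-f` removed, since a terminated container cannot be followed:

[source,bash]
----
$ kubectl -n openshift-metering logs --previous "$(kubectl -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator
----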

== Query Presto using presto-cli

The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the Pod. If this occurs, you should increase the memory request and limits of the Presto Pod.
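
Before increasing those values, it can help to confirm what the Presto coordinator Pod currently requests. This is a minimal sketch using a standard `kubectl` JSONPath query against the same label selector used elsewhere in this section:

[source,bash]
----
$ kubectl -n openshift-metering get pods -l app=presto,presto=coordinator -o jsonpath='{.items[*].spec.containers[*].resources}'
----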

By default, Presto is configured to communicate using TLS. You must run the following command to run Presto queries:

[source,bash]
----
$ kubectl -n openshift-metering exec -it "$(kubectl -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" -- /usr/local/bin/presto-cli --server https://presto:8080 --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem
----

Once you run this command, a prompt appears where you can run queries. Use the `show tables from metering;` query to view the list of tables:

[source,bash]
----
$ presto:default> show tables from metering;
                                 Table

 datasource_your_namespace_cluster_cpu_capacity_raw
 datasource_your_namespace_cluster_cpu_usage_raw
 datasource_your_namespace_cluster_memory_capacity_raw
 datasource_your_namespace_cluster_memory_usage_raw
 datasource_your_namespace_node_allocatable_cpu_cores
 datasource_your_namespace_node_allocatable_memory_bytes
 datasource_your_namespace_node_capacity_cpu_cores
 datasource_your_namespace_node_capacity_memory_bytes
 datasource_your_namespace_node_cpu_allocatable_raw
 datasource_your_namespace_node_cpu_capacity_raw
 datasource_your_namespace_node_memory_allocatable_raw
 datasource_your_namespace_node_memory_capacity_raw
 datasource_your_namespace_persistentvolumeclaim_capacity_bytes
 datasource_your_namespace_persistentvolumeclaim_capacity_raw
 datasource_your_namespace_persistentvolumeclaim_phase
 datasource_your_namespace_persistentvolumeclaim_phase_raw
 datasource_your_namespace_persistentvolumeclaim_request_bytes
 datasource_your_namespace_persistentvolumeclaim_request_raw
 datasource_your_namespace_persistentvolumeclaim_usage_bytes
 datasource_your_namespace_persistentvolumeclaim_usage_raw
 datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw
 datasource_your_namespace_pod_cpu_request_raw
 datasource_your_namespace_pod_cpu_usage_raw
 datasource_your_namespace_pod_limit_cpu_cores
 datasource_your_namespace_pod_limit_memory_bytes
 datasource_your_namespace_pod_memory_request_raw
 datasource_your_namespace_pod_memory_usage_raw
 datasource_your_namespace_pod_persistentvolumeclaim_request_info
 datasource_your_namespace_pod_request_cpu_cores
 datasource_your_namespace_pod_request_memory_bytes
 datasource_your_namespace_pod_usage_cpu_cores
 datasource_your_namespace_pod_usage_memory_bytes
(32 rows)

Query 20190503_175727_00107_3venm, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:02 [32 rows, 2.23KB] [19 rows/s, 1.37KB/s]

presto:default>
----
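
From the same prompt, you can query any of the listed tables directly. As an illustrative example only, the following returns a handful of rows from one of the raw datasource tables shown above (replace `your_namespace` with your actual namespace):

[source,bash]
----
presto:default> select * from metering.datasource_your_namespace_pod_memory_usage_raw limit 5;
----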

== Query Hive using beeline

The following command opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the Pod. If this occurs, you should increase the memory request and limits of the Hive Pod.

[source,bash]
----
$ kubectl -n openshift-metering exec -it $(kubectl -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) -c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl'
----

Once you run this command, a prompt appears where you can run queries. Use the `show tables from metering;` query to view the list of tables:

[source,bash]
----
$ 0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering;
+----------------------------------------------------+
|                      tab_name                      |
+----------------------------------------------------+
| datasource_your_namespace_cluster_cpu_capacity_raw |
| datasource_your_namespace_cluster_cpu_usage_raw |
| datasource_your_namespace_cluster_memory_capacity_raw |
| datasource_your_namespace_cluster_memory_usage_raw |
| datasource_your_namespace_node_allocatable_cpu_cores |
| datasource_your_namespace_node_allocatable_memory_bytes |
| datasource_your_namespace_node_capacity_cpu_cores |
| datasource_your_namespace_node_capacity_memory_bytes |
| datasource_your_namespace_node_cpu_allocatable_raw |
| datasource_your_namespace_node_cpu_capacity_raw |
| datasource_your_namespace_node_memory_allocatable_raw |
| datasource_your_namespace_node_memory_capacity_raw |
| datasource_your_namespace_persistentvolumeclaim_capacity_bytes |
| datasource_your_namespace_persistentvolumeclaim_capacity_raw |
| datasource_your_namespace_persistentvolumeclaim_phase |
| datasource_your_namespace_persistentvolumeclaim_phase_raw |
| datasource_your_namespace_persistentvolumeclaim_request_bytes |
| datasource_your_namespace_persistentvolumeclaim_request_raw |
| datasource_your_namespace_persistentvolumeclaim_usage_bytes |
| datasource_your_namespace_persistentvolumeclaim_usage_raw |
| datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw |
| datasource_your_namespace_pod_cpu_request_raw |
| datasource_your_namespace_pod_cpu_usage_raw |
| datasource_your_namespace_pod_limit_cpu_cores |
| datasource_your_namespace_pod_limit_memory_bytes |
| datasource_your_namespace_pod_memory_request_raw |
| datasource_your_namespace_pod_memory_usage_raw |
| datasource_your_namespace_pod_persistentvolumeclaim_request_info |
| datasource_your_namespace_pod_request_cpu_cores |
| datasource_your_namespace_pod_request_memory_bytes |
| datasource_your_namespace_pod_usage_cpu_cores |
| datasource_your_namespace_pod_usage_memory_bytes |
+----------------------------------------------------+
32 rows selected (13.101 seconds)
0: jdbc:hive2://127.0.0.1:10000/default>
----
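
From the beeline prompt you can also inspect the schema of an individual table. As an illustrative example against one of the tables listed above (again, replace `your_namespace` with your actual namespace):

[source,bash]
----
0: jdbc:hive2://127.0.0.1:10000/default> describe metering.datasource_your_namespace_pod_memory_usage_raw;
----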

== Port-forward to Hive web UI

Run the following command:

[source,bash]
----
$ kubectl -n openshift-metering port-forward hive-server-0 10002
----

You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface.

== Port-forward to HDFS

To port-forward to the HDFS namenode:

[source,bash]
----
$ kubectl -n openshift-metering port-forward hdfs-namenode-0 9870
----

You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface.

To port-forward to the first HDFS datanode:

[source,bash]
----
$ kubectl -n openshift-metering port-forward hdfs-datanode-0 9864
----

To check other datanodes, run the above command, replacing `hdfs-datanode-0` with the Pod you want to view information on.
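
To see which datanode Pods exist in the namespace, a simple filter on the Pod name prefix used above is enough:

[source,bash]
----
$ kubectl -n openshift-metering get pods -o name | grep hdfs-datanode
----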

== Metering Ansible Operator

Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. When debugging a failed metering installation, it can be helpful to view the Ansible logs or the status of your `MeteringConfig` custom resource.

=== Accessing Ansible logs

In the default installation, the metering Operator is deployed as a Pod. In this case, you can check the logs of the `ansible` container within this Pod:

[source,bash]
----
$ kubectl -n openshift-metering logs $(kubectl -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible
----

Alternatively, you can view the logs of the Operator container (replace `-c ansible` with `-c operator`) for condensed output.
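
For example, the same command with only the container name swapped:

[source,bash]
----
$ kubectl -n openshift-metering logs $(kubectl -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c operator
----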

=== Checking the MeteringConfig Status

It can be helpful to view the `.status` field of your `MeteringConfig` custom resource to debug any recent failures. You can do this with the following command:

[source,bash]
----
$ kubectl -n openshift-metering get meteringconfig operator-metering -o json | jq '.status'
----
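
If `jq` is not available, the same status information can be viewed, in a less targeted form, with a standard `kubectl describe`:

[source,bash]
----
$ kubectl -n openshift-metering describe meteringconfig operator-metering
----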