
Remove deprecated assemblies - OSDOCS-15979

This commit is contained in:
Max Bridges
2025-10-07 10:54:32 -04:00
parent a84c093fa4
commit 26b125c170
15 changed files with 0 additions and 781 deletions


@@ -1,20 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-about-configuring"]
= About configuring metering
include::_attributes/common-attributes.adoc[]
:context: metering-about-configuring
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
The `MeteringConfig` custom resource specifies all the configuration details for your metering installation. When you first install the metering stack, a default `MeteringConfig` custom resource is generated. Use the examples in the documentation to modify this default file. Keep in mind the following key points:
* At a minimum, you need to xref:../../metering/configuring_metering/metering-configure-persistent-storage.adoc#metering-configure-persistent-storage[configure persistent storage] and xref:../../metering/configuring_metering/metering-configure-hive-metastore.adoc#metering-configure-hive-metastore[configure the Hive metastore].
* Most default configuration settings work, but larger deployments or highly customized deployments should review all configuration options carefully.
* Some configuration options cannot be modified after installation.
For configuration options that can be modified after installation, make the changes in your `MeteringConfig` custom resource and reapply the file.
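For example, a minimal sketch of reapplying a modified configuration, assuming the edited resource is saved locally as `metering-config.yaml` (the file name is illustrative):
[source,terminal]
----
$ oc apply -f metering-config.yaml -n openshift-metering
----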


@@ -1,175 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-common-config-options"]
= Common configuration options
include::_attributes/common-attributes.adoc[]
:context: metering-common-config-options
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
== Resource requests and limits
You can adjust the CPU, memory, or storage resource requests and limits for pods and volumes. The `default-resource-limits.yaml` file below provides an example of setting resource requests and limits for each component.
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      resources:
        limits:
          cpu: 1
          memory: 500Mi
        requests:
          cpu: 500m
          memory: 100Mi
  presto:
    spec:
      coordinator:
        resources:
          limits:
            cpu: 4
            memory: 4Gi
          requests:
            cpu: 2
            memory: 2Gi
      worker:
        replicas: 0
        resources:
          limits:
            cpu: 8
            memory: 8Gi
          requests:
            cpu: 4
            memory: 2Gi
  hive:
    spec:
      metastore:
        resources:
          limits:
            cpu: 4
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 650Mi
        storage:
          class: null
          create: true
          size: 5Gi
      server:
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 500Mi
----
== Node selectors
You can run the metering components on specific sets of nodes. Set the `nodeSelector` on a metering component to control where the component is scheduled. The `node-selectors.yaml` file below provides an example of setting node selectors for each component.
[NOTE]
====
Add the `openshift.io/node-selector: ""` namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. Specify `""` as the annotation value.
====
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  reporting-operator:
    spec:
      nodeSelector:
        "node-role.kubernetes.io/infra": "" <1>
  presto:
    spec:
      coordinator:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
      worker:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
  hive:
    spec:
      metastore:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
      server:
        nodeSelector:
          "node-role.kubernetes.io/infra": "" <1>
----
<1> Add a `nodeSelector` parameter with the appropriate value to the component you want to move. You can use a `nodeSelector` in the format shown or use key-value pairs, based on the value specified for the node.
[NOTE]
====
Add the `openshift.io/node-selector: ""` namespace annotation to the metering namespace YAML file before configuring specific node selectors for the operand pods. When the `openshift.io/node-selector` annotation is set on the project, the value is used in preference to the value of the `spec.defaultNodeSelector` field in the cluster-wide `Scheduler` object.
====
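The following sketch shows the annotation described in the notes on this page, assuming the metering namespace is `openshift-metering`:
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-metering
  annotations:
    openshift.io/node-selector: ""
----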
.Verification
You can verify the metering node selectors by performing any of the following checks:
* Verify that all pods for metering are correctly scheduled on the nodes that are configured in the `MeteringConfig` custom resource:
+
--
. Check all pods in the `openshift-metering` namespace:
+
[source,terminal]
----
$ oc --namespace openshift-metering get pods -o wide
----
+
The output shows the `NODE` and corresponding `IP` for each pod running in the `openshift-metering` namespace.
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hive-metastore-0 1/2 Running 0 4m33s 10.129.2.26 ip-10-0-210-167.us-east-2.compute.internal <none> <none>
hive-server-0 2/3 Running 0 4m21s 10.128.2.26 ip-10-0-150-175.us-east-2.compute.internal <none> <none>
metering-operator-964b4fb55-4p699 2/2 Running 0 7h30m 10.131.0.33 ip-10-0-189-6.us-east-2.compute.internal <none> <none>
nfs-server 1/1 Running 0 7h30m 10.129.2.24 ip-10-0-210-167.us-east-2.compute.internal <none> <none>
presto-coordinator-0 2/2 Running 0 4m8s 10.131.0.35 ip-10-0-189-6.us-east-2.compute.internal <none> <none>
reporting-operator-869b854c78-8g2x5 1/2 Running 0 7h27m 10.128.2.25 ip-10-0-150-175.us-east-2.compute.internal <none> <none>
----
+
. Compare the node names from the previous output to each node `NAME` in your cluster:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
ip-10-0-147-106.us-east-2.compute.internal Ready master 14h v1.33.4
ip-10-0-150-175.us-east-2.compute.internal Ready worker 14h v1.33.4
ip-10-0-175-23.us-east-2.compute.internal Ready master 14h v1.33.4
ip-10-0-189-6.us-east-2.compute.internal Ready worker 14h v1.33.4
ip-10-0-205-158.us-east-2.compute.internal Ready master 14h v1.33.4
ip-10-0-210-167.us-east-2.compute.internal Ready worker 14h v1.33.4
----
--
* Verify that the node selector configuration in the `MeteringConfig` custom resource does not conflict with the cluster-wide node selector configuration in a way that prevents metering operand pods from being scheduled.
** Check the cluster-wide `Scheduler` object for the `spec.defaultNodeSelector` field, which shows where pods are scheduled by default:
+
[source,terminal]
----
$ oc get schedulers.config.openshift.io cluster -o yaml
----
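+
For reference, a trimmed sketch of what the relevant part of the output might contain; the `defaultNodeSelector` value shown is illustrative:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: node-role.kubernetes.io/infra=
----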


@@ -1,116 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-aws-billing-correlation"]
= Configure AWS billing correlation
include::_attributes/common-attributes.adoc[]
:context: metering-configure-aws-billing-correlation
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Metering can correlate cluster usage information with https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-costusage.html[AWS detailed billing information], attaching a dollar amount to resource usage. For clusters running in EC2, you can enable this by modifying the example `aws-billing.yaml` file below.
To enable AWS billing correlation, first ensure that the AWS Cost and Usage Reports are enabled. For more information, see https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-gettingstarted-turnonreports.html[Turning on the AWS Cost and Usage Report] in the AWS documentation.
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  openshift-reporting:
    spec:
      awsBillingReportDataSource:
        enabled: true
        # Replace these with where your AWS billing reports are
        # stored in S3.
        bucket: "<your-aws-cost-report-bucket>" <1>
        prefix: "<path/to/report>"
        region: "<your-buckets-region>"
  reporting-operator:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" <2>
  presto:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" <2>
  hive:
    spec:
      config:
        aws:
          secretName: "<your-aws-secret>" <2>
----
<1> Update the bucket, prefix, and region to the location of your AWS detailed billing report.
<2> All `secretName` fields should be set to the name of a secret in the metering namespace containing AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example secret file below for more details.
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <your-aws-secret>
data:
  aws-access-key-id: "dGVzdAo="
  aws-secret-access-key: "c2VjcmV0Cg=="
----
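As an alternative to writing the secret YAML by hand, you can create it from literal values; a sketch, assuming the `openshift-metering` namespace and placeholder credentials:
[source,terminal]
----
$ oc -n openshift-metering create secret generic <your-aws-secret> \
    --from-literal=aws-access-key-id=<access-key-id> \
    --from-literal=aws-secret-access-key=<secret-access-key>
----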
To store data in S3, the `aws-access-key-id` and `aws-secret-access-key` credentials must have read and write access to the bucket. For an example of an IAM policy granting the required permissions, see the `aws/read-write.json` file below.
[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:HeadBucket",
        "s3:ListBucket",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::operator-metering-data/*", <1>
        "arn:aws:s3:::operator-metering-data" <1>
      ]
    }
  ]
}
----
<1> Replace `operator-metering-data` with the name of your bucket.
You can enable AWS billing correlation either before or after installation. Disabling it after installation can cause errors in the Reporting Operator.


@@ -1,18 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-hive-metastore"]
= Configuring the Hive metastore
include::_attributes/common-attributes.adoc[]
:context: metering-configure-hive-metastore
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
The Hive metastore is responsible for storing all the metadata about the database tables created in Presto and Hive. By default, the metastore stores this information in a local embedded Derby database in a persistent volume attached to the pod.
The default configuration of the Hive metastore generally works for small clusters, but you might want to improve performance or move storage requirements out of the cluster by using a dedicated SQL database for storing the Hive metastore data.
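For illustration, a sketch of pointing the metastore at an external MySQL database instead of the embedded Derby database; treat the field names as assumptions to verify against the modules included below, and the connection values as placeholders:
[source,yaml]
----
spec:
  hive:
    spec:
      metastore:
        storage:
          create: false
      config:
        db:
          url: "<mysql-connection-url>"
          driver: "com.mysql.jdbc.Driver"
          username: "<db-user>"
          password: "<db-password>"
----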
include::modules/metering-configure-persistentvolumes.adoc[leveloffset=+1]
include::modules/metering-use-mysql-or-postgresql-for-hive.adoc[leveloffset=+1]


@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-persistent-storage"]
= Configuring persistent storage
include::_attributes/common-attributes.adoc[]
:context: metering-configure-persistent-storage
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.
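For orientation, a minimal sketch of an S3-backed storage configuration; treat the field layout as an assumption to verify against the S3 module included below, and the bucket, region, and secret values as placeholders:
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: MeteringConfig
metadata:
  name: "operator-metering"
spec:
  storage:
    type: hive
    hive:
      type: s3
      s3:
        bucket: "<bucket-name>/<path>"
        region: "<region>"
        secretName: "<your-aws-secret>"
        createBucket: true
----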
include::modules/metering-store-data-in-s3.adoc[leveloffset=+1]
include::modules/metering-store-data-in-s3-compatible.adoc[leveloffset=+1]
include::modules/metering-store-data-in-azure.adoc[leveloffset=+1]
include::modules/metering-store-data-in-gcp.adoc[leveloffset=+1]
include::modules/metering-store-data-in-shared-volumes.adoc[leveloffset=+1]


@@ -1,16 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-configure-reporting-operator"]
= Configuring the Reporting Operator
include::_attributes/common-attributes.adoc[]
:context: metering-configure-reporting-operator
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
The Reporting Operator is responsible for collecting data from Prometheus, storing the metrics in Presto, running report queries against Presto, and exposing the results through an HTTP API. You configure the Reporting Operator primarily in your `MeteringConfig` custom resource.
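For example, a sketch of overriding the Prometheus connection URL in the `MeteringConfig` resource; treat the exact field names as assumptions to verify against the Prometheus connection module included below:
[source,yaml]
----
spec:
  reporting-operator:
    spec:
      config:
        prometheus:
          url: "https://prometheus-k8s.openshift-monitoring.svc:9091"
----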
include::modules/metering-prometheus-connection.adoc[leveloffset=+1]
include::modules/metering-exposing-the-reporting-api.adoc[leveloffset=+1]


@@ -1,12 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-metering"]
= About metering
include::_attributes/common-attributes.adoc[]
:context: about-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
include::modules/metering-overview.adoc[leveloffset=+1]


@@ -1,62 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="installing-metering"]
= Installing metering
include::_attributes/common-attributes.adoc[]
:context: installing-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Review the following sections before installing metering into your cluster.
To get started installing metering, first install the Metering Operator from the software catalog. Next, configure your instance of metering by creating a `MeteringConfig` custom resource (CR). Installing the Metering Operator creates a default `MeteringConfig` resource that you can modify using the examples in the documentation. After creating your `MeteringConfig` resource, install the metering stack. Finally, verify your installation.
include::modules/metering-install-prerequisites.adoc[leveloffset=+1]
include::modules/metering-install-operator.adoc[leveloffset=+1]
// Including this content directly in the assembly because the workflow requires linking off to the config docs, and we don't currently link
// inside of modules - klamenzo 2019-09-23
[id="metering-install-metering-stack_{context}"]
== Installing the metering stack
After adding the Metering Operator to your cluster you can install the components of metering by installing the metering stack.
== Prerequisites
* Review the xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[configuration options].
* Create a `MeteringConfig` resource. You can begin the following process to generate a default `MeteringConfig` resource, then use the examples in the documentation to modify this default file for your specific installation. Review the following topics to create your `MeteringConfig` resource:
** For configuration options, review xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[About configuring metering].
** At a minimum, you need to xref:../metering/configuring_metering/metering-configure-persistent-storage.adoc#metering-configure-persistent-storage[configure persistent storage] and xref:../metering/configuring_metering/metering-configure-hive-metastore.adoc#metering-configure-hive-metastore[configure the Hive metastore].
[IMPORTANT]
====
There can only be one `MeteringConfig` resource in the `openshift-metering` namespace. Any other configuration is not supported.
====
.Procedure
. From the web console, ensure you are on the *Operator Details* page for the Metering Operator in the `openshift-metering` project. You can navigate to this page by clicking *Ecosystem* -> *Installed Operators*, then selecting the Metering Operator.
. Under *Provided APIs*, click *Create Instance* on the Metering Configuration card. This opens a YAML editor with the default `MeteringConfig` resource file where you can define your configuration.
+
[NOTE]
====
For example configuration files and all supported configuration options, review the xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[configuring metering documentation].
====
. Enter your `MeteringConfig` resource into the YAML editor and click *Create*.
The `MeteringConfig` resource begins to create the necessary resources for your metering stack. You can now move on to verifying your installation.
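While the stack is being created, you can watch the pods come up; a quick check, assuming the default `openshift-metering` namespace:
[source,terminal]
----
$ oc -n openshift-metering get pods -w
----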
include::modules/metering-install-verify.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="metering-install-additional-resources_{context}"]
== Additional resources
* For more information on configuration steps and available storage platforms, see xref:../metering/configuring_metering/metering-configure-persistent-storage.adoc#metering-configure-persistent-storage[Configuring persistent storage].
* For the steps to configure Hive, see xref:../metering/configuring_metering/metering-configure-hive-metastore.adoc#metering-configure-hive-metastore[Configuring the Hive metastore].


@@ -1,21 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-troubleshooting-debugging"]
= Troubleshooting and debugging metering
include::_attributes/common-attributes.adoc[]
:context: metering-troubleshooting-debugging
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Use the following sections to help troubleshoot and debug specific issues with metering.
In addition to the information in this section, be sure to review the following topics:
* xref:../metering/metering-installing-metering.adoc#metering-install-prerequisites_installing-metering[Prerequisites for installing metering]
* xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[About configuring metering]
include::modules/metering-troubleshooting.adoc[leveloffset=+1]
include::modules/metering-debugging.adoc[leveloffset=+1]


@@ -1,31 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-uninstall"]
= Uninstalling metering
include::_attributes/common-attributes.adoc[]
:context: metering-uninstall
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
You can remove metering from your {product-title} cluster.
[NOTE]
====
Metering does not manage or delete Amazon S3 bucket data. After uninstalling metering, you must manually clean up S3 buckets that were used to store metering data.
====
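For example, a sketch of removing leftover report data with the AWS CLI; the bucket name is a placeholder, and you should confirm the bucket contents before deleting anything:
[source,terminal]
----
$ aws s3 rm s3://<your-metering-bucket> --recursive
----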
[id="metering-remove"]
== Removing the Metering Operator from your cluster
Remove the Metering Operator from your cluster by following the documentation on xref:../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[deleting Operators from a cluster].
[NOTE]
====
Removing the Metering Operator from your cluster does not remove its custom resource definitions or managed resources. See the following sections on xref:../metering/metering-uninstall.adoc#metering-uninstall_metering-uninstall[Uninstalling a metering namespace] and xref:../metering/metering-uninstall.adoc#metering-uninstall-crds_metering-uninstall[Uninstalling metering custom resource definitions] for steps to remove any remaining metering components.
====
include::modules/metering-uninstall.adoc[leveloffset=+1]
include::modules/metering-uninstall-crds.adoc[leveloffset=+1]


@@ -1,148 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="upgrading-metering"]
= Upgrading metering
include::_attributes/common-attributes.adoc[]
:context: upgrading-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
You can upgrade metering to {product-version} by updating the Metering Operator subscription.
== Prerequisites
* The cluster is updated to {product-version}.
* The xref:../metering/metering-installing-metering.adoc#metering-install-operator_installing-metering[Metering Operator] is installed from the software catalog.
+
[NOTE]
====
You must upgrade the Metering Operator to {product-version} manually. Metering does not upgrade automatically, even if you selected the *Automatic* approval strategy in a previous installation.
====
* The xref:../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[MeteringConfig custom resource] is configured.
* The xref:../metering/metering-installing-metering.adoc#metering-install-metering-stack_installing-metering[metering stack] is installed.
* Ensure that metering status is healthy by checking that all pods are ready.
[IMPORTANT]
====
Potential data loss can occur if you modify your metering storage configuration after installing or upgrading metering.
====
.Procedure
. Click *Ecosystem* -> *Installed Operators* from the web console.
. Select the `openshift-metering` project.
. Click *Metering Operator*.
. Click *Subscription* -> *Channel*.
. In the *Change Subscription Update Channel* window, select *{product-version}* and click *Save*.
+
[NOTE]
====
Wait several seconds to allow the subscription to update before proceeding to the next step.
====
. Click *Ecosystem* -> *Installed Operators*.
+
The Metering Operator is shown as 4.9. For example:
+
----
Metering
4.9.0-202107012112.p0 provided by Red Hat, Inc
----
.Verification
You can verify the metering upgrade by performing any of the following checks:
* Check the Metering Operator cluster service version (CSV) for the new metering version. This can be done through either the web console or CLI.
+
--
.Procedure (UI)
. Navigate to *Ecosystem* -> *Installed Operators* in the metering namespace.
. Click *Metering Operator*.
. Click *Subscription* for *Subscription Details*.
. Check the *Installed Version* for the upgraded metering version. The *Starting Version* shows the metering version prior to upgrading.
.Procedure (CLI)
* Check the Metering Operator CSV:
+
[source,terminal]
----
$ oc get csv | grep metering
----
+
.Example output for metering upgrade from 4.8 to 4.9
[source,terminal]
----
NAME DISPLAY VERSION REPLACES PHASE
metering-operator.4.9.0-202107012112.p0 Metering 4.9.0-202107012112.p0 metering-operator.4.8.0-202007012112.p0 Succeeded
----
--
* Check that all required pods in the `openshift-metering` namespace are created. This can be done through either the web console or CLI.
+
--
[NOTE]
====
Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator upgrade.
====
.Procedure (UI)
* Navigate to *Workloads* -> *Pods* in the metering namespace and verify that pods are being created. This can take several minutes after upgrading the metering stack.
.Procedure (CLI)
* Check that all required pods in the `openshift-metering` namespace are created:
+
[source,terminal]
----
$ oc -n openshift-metering get pods
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
hive-metastore-0 2/2 Running 0 3m28s
hive-server-0 3/3 Running 0 3m28s
metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s
presto-coordinator-0 2/2 Running 0 3m9s
reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s
----
--
* Verify that the `ReportDataSource` resources are importing new data, indicated by a valid timestamp in the `NEWEST METRIC` column. This might take several minutes. Filter out the "-raw" `ReportDataSource` resources, which do not import data:
+
[source,terminal]
----
$ oc get reportdatasources -n openshift-metering | grep -v raw
----
+
Timestamps in the `NEWEST METRIC` column indicate that `ReportDataSource` resources are beginning to import new data.
+
.Example output
[source,terminal]
----
NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE
node-allocatable-cpu-cores 2021-07-01T21:10:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:56:44Z 23h
node-allocatable-memory-bytes 2021-07-01T21:10:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:52:07Z 23h
node-capacity-cpu-cores 2021-07-01T21:10:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:56:52Z 23h
node-capacity-memory-bytes 2021-07-01T21:10:00Z 2021-07-02T19:57:00Z 2021-07-01T19:10:00Z 2021-07-02T19:57:00Z 2021-07-02T19:57:03Z 23h
persistentvolumeclaim-capacity-bytes 2021-07-01T21:09:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:56:46Z 23h
persistentvolumeclaim-phase 2021-07-01T21:10:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:52:36Z 23h
persistentvolumeclaim-request-bytes 2021-07-01T21:10:00Z 2021-07-02T19:57:00Z 2021-07-01T19:10:00Z 2021-07-02T19:57:00Z 2021-07-02T19:57:03Z 23h
persistentvolumeclaim-usage-bytes 2021-07-01T21:09:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:52:02Z 23h
pod-limit-cpu-cores 2021-07-01T21:10:00Z 2021-07-02T19:57:00Z 2021-07-01T19:10:00Z 2021-07-02T19:57:00Z 2021-07-02T19:57:02Z 23h
pod-limit-memory-bytes 2021-07-01T21:10:00Z 2021-07-02T19:58:00Z 2021-07-01T19:11:00Z 2021-07-02T19:58:00Z 2021-07-02T19:59:06Z 23h
pod-persistentvolumeclaim-request-info 2021-07-01T21:10:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:52:07Z 23h
pod-request-cpu-cores 2021-07-01T21:10:00Z 2021-07-02T19:58:00Z 2021-07-01T19:11:00Z 2021-07-02T19:58:00Z 2021-07-02T19:58:57Z 23h
pod-request-memory-bytes 2021-07-01T21:10:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:55:32Z 23h
pod-usage-cpu-cores 2021-07-01T21:09:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:54:55Z 23h
pod-usage-memory-bytes 2021-07-01T21:08:00Z 2021-07-02T19:52:00Z 2021-07-01T19:11:00Z 2021-07-02T19:52:00Z 2021-07-02T19:55:00Z 23h
report-ns-pvc-usage 5h36m
report-ns-pvc-usage-hourly
----
After all pods are ready and you have verified that new data is being imported, metering continues to collect data and report on your cluster. Review a previously xref:../metering/reports/metering-about-reports.adoc#metering-example-report-with-schedule_metering-about-reports[scheduled report] or create a xref:../metering/reports/metering-about-reports.adoc#metering-example-report-without-schedule_metering-about-reports[run-once metering report] to confirm the metering upgrade.


@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-usage-examples"]
= Examples of using metering
include::_attributes/common-attributes.adoc[]
:context: metering-usage-examples
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
Use the following example reports to get started measuring capacity, usage, and utilization in your cluster. These examples showcase the various types of reports metering offers, along with a selection of the predefined queries.
== Prerequisites
* xref:../metering/metering-installing-metering.adoc#metering-install-operator_installing-metering[Install metering]
* Review the details about xref:../metering/metering-using-metering.adoc#using-metering[writing and viewing reports].
include::modules/metering-cluster-capacity-examples.adoc[leveloffset=+1]
include::modules/metering-cluster-usage-examples.adoc[leveloffset=+1]
include::modules/metering-cluster-utilization-examples.adoc[leveloffset=+1]


@@ -1,19 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="using-metering"]
= Using metering
include::_attributes/common-attributes.adoc[]
:context: using-metering
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
== Prerequisites
* xref:../metering/metering-installing-metering.adoc#metering-install-operator_installing-metering[Install metering]
* Review the available configuration options for a xref:../metering/reports/metering-about-reports.adoc#metering-about-reports[report] and how they function.
include::modules/metering-writing-reports.adoc[leveloffset=+1]
include::modules/metering-viewing-report-results.adoc[leveloffset=+1]


@@ -1,16 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-about-reports"]
= About reports
include::_attributes/common-attributes.adoc[]
:context: metering-about-reports
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
A `Report` custom resource provides a method to manage periodic extract, transform, and load (ETL) jobs using SQL queries. Reports are composed from other metering resources, such as `ReportQuery` resources that provide the actual SQL query to run, and `ReportDataSource` resources that define the data available to the `ReportQuery` and `Report` resources.
Many use cases are addressed by the predefined `ReportQuery` and `ReportDataSource` resources that come installed with metering. Therefore, you do not need to define your own unless you have a use case that is not covered by these predefined resources.
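For a concrete picture, a sketch of a run-once `Report` resource that uses one of the predefined queries; the name, query, and time range are illustrative:
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: Report
metadata:
  name: namespace-cpu-request-once
  namespace: openshift-metering
spec:
  query: namespace-cpu-request
  reportingStart: "2021-07-01T00:00:00Z"
  reportingEnd: "2021-07-31T00:00:00Z"
  runImmediately: true
----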
include::modules/metering-reports.adoc[leveloffset=+1]


@@ -1,83 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="metering-storage-locations"]
= Storage locations
include::_attributes/common-attributes.adoc[]
:context: metering-storage-locations
toc::[]
:FeatureName: Metering
include::modules/deprecated-feature.adoc[leveloffset=+1]
A `StorageLocation` custom resource configures where the Reporting Operator stores data. This includes the data collected from Prometheus and the results produced by generating a `Report` custom resource.
You only need to configure a `StorageLocation` custom resource if you want to store data in multiple locations, like multiple S3 buckets or both S3 and HDFS, or if you want to access a database in Hive and Presto that was not created by metering. For most users this is not a requirement, and the xref:../../metering/configuring_metering/metering-about-configuring.adoc#metering-about-configuring[documentation on configuring metering] is sufficient to configure all necessary storage components.
== Storage location examples
The following example shows the built-in local storage option, and is configured to use Hive. By default, data is stored wherever Hive is configured to use storage, such as HDFS, S3, or a `ReadWriteMany` persistent volume claim (PVC).
.Local storage example
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: hive
  labels:
    operator-metering: "true"
spec:
  hive: <1>
    databaseName: metering <2>
    unmanagedDatabase: false <3>
----
<1> If the `hive` section is present, the `StorageLocation` resource is configured to store data in Presto by creating the table using the Hive server. Only `databaseName` and `unmanagedDatabase` are required fields.
<2> The name of the database within Hive.
<3> If `true`, the `StorageLocation` resource is not actively managed, and the `databaseName` is expected to already exist in Hive. If `false`, the Reporting Operator creates the database in Hive.
The following example uses an AWS S3 bucket for storage. The prefix is appended to the bucket name when constructing the path to use.
.Remote storage example
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket" <1>
----
<1> Optional: The filesystem URL for Presto and Hive to use for the database. This can be an `hdfs://` or `s3a://` filesystem URL.
There are additional optional fields that can be specified in the `hive` section:
* `defaultTableProperties`: Contains configuration options for creating tables using Hive.
* `fileFormat`: The file format used for storing files in the filesystem. See the link:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-StorageFormatsStorageFormatsRowFormat,StorageFormat,andSerDe[Hive Documentation on File Storage Format] for a list of options and more details.
* `rowFormat`: Controls the link:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats&SerDe[Hive row format], which determines how Hive serializes and deserializes rows. See the link:https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-RowFormats&SerDe[Hive Documentation on Row Formats and SerDe] for more details.
== Default storage location
If an annotation `storagelocation.metering.openshift.io/is-default` exists and is set to `true` on a `StorageLocation` resource, then that resource becomes the default storage resource. Any components with a storage configuration option where the storage location is not specified will use the default storage resource. There can be only one default storage resource. If more than one resource with the annotation exists, an error is logged because the Reporting Operator cannot determine the default.
.Default storage example
[source,yaml]
----
apiVersion: metering.openshift.io/v1
kind: StorageLocation
metadata:
  name: example-s3-storage
  labels:
    operator-metering: "true"
  annotations:
    storagelocation.metering.openshift.io/is-default: "true"
spec:
  hive:
    databaseName: example_s3_storage
    unmanagedDatabase: false
    location: "s3a://bucket-name/path/within/bucket"
----