// Module included in the following assemblies:
//
// * otel/otel-installing.adoc

:_content-type: PROCEDURE
[id="installing-otel-by-using-the-cli_{context}"]
= Installing the {OTELShortName} by using the CLI

You can install the {OTELShortName} from the command line.

.Prerequisites
* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====

.Procedure
. Install the {OTELOperator}:
.. Create a project for the {OTELOperator} by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-opentelemetry-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-opentelemetry-operator
EOF
----
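+
If the project does not already exist, the output confirms its creation and is similar to the following:
+
.Example output
[source,terminal]
----
project.project.openshift.io/openshift-opentelemetry-operator created
----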
.. Create an Operator group by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-opentelemetry-operator
  namespace: openshift-opentelemetry-operator
spec:
  upgradeStrategy: Default
EOF
----
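+
If the Operator group does not already exist, the output confirms its creation and is similar to the following:
+
.Example output
[source,terminal]
----
operatorgroup.operators.coreos.com/openshift-opentelemetry-operator created
----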
.. Create a subscription by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product
  namespace: openshift-opentelemetry-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
----
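+
If the subscription does not already exist, the output confirms its creation and is similar to the following:
+
.Example output
[source,terminal]
----
subscription.operators.coreos.com/opentelemetry-product created
----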
.. Check the Operator status by running the following command:
+
[source,terminal]
----
$ oc get csv -n openshift-opentelemetry-operator
----
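+
In a successful installation, the output lists the cluster service version (CSV) of the Operator with the `PHASE` column showing `Succeeded`. The following output is an illustration only; the CSV name, display name, and version vary by release:
+
.Example output
[source,terminal]
----
NAME                          DISPLAY   VERSION   PHASE
opentelemetry-operator.v...   ...       ...       Succeeded
----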
. Create a project of your choice for the OpenTelemetry Collector instance that you will create in a subsequent step:
** To create a project without metadata, run the following command:
+
[source,terminal]
----
$ oc new-project <project_of_opentelemetry_collector_instance>
----
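+
The command switches your session to the new project. The output is similar to the following, with your project name and cluster API URL in place of the placeholders:
+
.Example output
[source,terminal]
----
Now using project "<project_of_opentelemetry_collector_instance>" on server "https://<api_server_url>:6443".
----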
** To create a project with metadata, run the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_opentelemetry_collector_instance>
EOF
----
. Create an OpenTelemetry Collector instance in the project that you created for it.
+
[NOTE]
====
You can create multiple OpenTelemetry Collector instances in separate projects on the same cluster.
====
+
.. Customize the `OpenTelemetryCollector` custom resource (CR) with the OTLP, Jaeger, and Zipkin receivers and the debug exporter:
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <project_of_opentelemetry_collector_instance>
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      zipkin:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp,jaeger,zipkin]
          processors: [memory_limiter,batch]
          exporters: [debug]
----
.. Apply the customized CR by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
<OpenTelemetryCollector_custom_resource>
EOF
----
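+
Alternatively, if you saved the customized CR to a file, for example `otel-collector.yaml` (the file name here is arbitrary), you can apply it by running the following command:
+
[source,terminal]
----
$ oc apply -f otel-collector.yaml
----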

.Verification
. Verify that the `status.phase` of the OpenTelemetry Collector pod is `Running` and the `conditions` are `type: Ready` by running the following command:
+
[source,terminal]
----
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
----
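+
Because the command prints the full pod resource, check the pod's `status` stanza in the output. The following is an abridged illustration of a healthy Collector pod:
+
.Example output
[source,yaml]
----
# ...
status:
  # ...
  conditions:
  - # ...
    status: "True"
    type: Ready
  # ...
  phase: Running
----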
. Get the OpenTelemetry Collector service by running the following command:
+
[source,terminal]
----
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
----
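+
The Operator creates services for the Collector instance. The following output is an illustration for a Collector named `otel`; the service names, ports, and IP addresses on your cluster depend on your configuration:
+
.Example output
[source,terminal]
----
NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
otel-collector               ClusterIP   ...          <none>        ...       ...
otel-collector-headless      ClusterIP   None         <none>        ...       ...
otel-collector-monitoring    ClusterIP   ...          <none>        ...       ...
----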