mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

Fix GitHub issue #32713: update the new logging operator name

This commit is contained in:
Bo Liu
2021-05-23 10:33:04 +08:00
committed by Rolfe Dlugy-Hegwer
parent 5c092f7ee1
commit 9f560427e7
33 changed files with 70 additions and 70 deletions

View File

@@ -7,15 +7,15 @@ toc::[]
You can install OpenShift Logging by deploying
-the OpenShift Elasticsearch and Cluster Logging Operators. The OpenShift Elasticsearch Operator
+the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator
creates and manages the Elasticsearch cluster used by OpenShift Logging.
-The Cluster Logging Operator creates and manages the components of the logging stack.
+The Red Hat OpenShift Logging Operator creates and manages the components of the logging stack.
The process for deploying OpenShift Logging to {product-title} involves:
* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations].
-* Installing the OpenShift Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].
+* Installing the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference

View File

@@ -14,7 +14,7 @@ To send logs to other log aggregators, you use the {product-title} Cluster Log F
To send audit logs to the internal log store, use the Cluster Log Forwarder as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
====
-When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
+When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
Alternatively, you can create a config map to use the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-fluentd_cluster-logging-external[Fluentd *forward* protocol] or the xref:../logging/cluster-logging-external.html#cluster-logging-collector-legacy-syslog_cluster-logging-external[syslog protocol] to send logs to external systems. However, these methods for forwarding logs are deprecated in {product-title} and will be removed in a future release.

View File

@@ -8,7 +8,7 @@ toc::[]
OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR) deployed
in the `openshift-logging` project.
-The Cluster Logging Operator watches for changes to `ClusterLogging` CR,
+The Red Hat OpenShift Logging Operator watches for changes to `ClusterLogging` CR,
creates any missing logging components, and adjusts the logging environment accordingly.
The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete OpenShift Logging environment
@@ -77,5 +77,5 @@ The Rsyslog log collector is currently a Technology Preview feature.
[IMPORTANT]
====
-The logging routes are managed by the Cluster Logging Operator and cannot be modified by the user.
+The logging routes are managed by the Red Hat OpenShift Logging Operator and cannot be modified by the user.
====

View File

@@ -1,6 +1,6 @@
:context: dedicated-cluster-deploying
[id="dedicated-cluster-deploying"]
-= Installing the Cluster Logging Operator and OpenShift Elasticsearch Operator
+= Installing the Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator
include::modules/common-attributes.adoc[]
toc::[]

View File

@@ -15,7 +15,7 @@ for two days.
OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR)
deployed in the `openshift-logging` project namespace.
-The Cluster Logging Operator watches for changes to `ClusterLogging` CR, creates
+The Red Hat OpenShift Logging Operator watches for changes to `ClusterLogging` CR, creates
any missing logging components, and adjusts the logging environment accordingly.
The `ClusterLogging` CR is based on the `ClusterLogging` custom resource

View File

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
toc::[]
-You can view the status of the Cluster Logging Operator and for a number of OpenShift Logging components.
+You can view the status of the Red Hat OpenShift Logging Operator and a number of OpenShift Logging components.
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference

View File

@@ -128,7 +128,7 @@ spec:
----
<1> The CR name must be `instance`.
<2> The CR must be installed to the `openshift-logging` namespace.
-<3> The Cluster Logging Operator management state. When set to `unmanaged` the operator is in an unsupported state and will not get updates.
+<3> The Red Hat OpenShift Logging Operator management state. When set to `unmanaged` the operator is in an unsupported state and will not get updates.
<4> Settings for the log store, including retention policy, the number of nodes, the resource requests and limits, and the storage class.
<5> Settings for the visualizer, including the resource requests and limits, and the number of pod replicas.
<6> Settings for curation, including the resource requests and limits, and curation schedule.

View File

@@ -16,7 +16,7 @@ Elasticsearch organizes the log data from Fluentd into datastores, or _indices_,
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
====
-The Cluster Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume.
+The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume.
You can use a `ClusterLogging` custom resource (CR) to increase the number of Elasticsearch nodes, as needed.
Refer to the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html[Elasticsearch documentation] for considerations involved in configuring storage.

View File

@@ -14,14 +14,14 @@
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
{product-title} cluster administrators can deploy OpenShift Logging using
the {product-title} web console or CLI to install the OpenShift Elasticsearch
-Operator and Cluster Logging Operator. When the operators are installed, you create
+Operator and Red Hat OpenShift Logging Operator. When the operators are installed, you create
a `ClusterLogging` custom resource (CR) to schedule OpenShift Logging pods and
other resources necessary to support OpenShift Logging. The operators are
responsible for deploying, upgrading, and maintaining OpenShift Logging.
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
ifdef::openshift-dedicated[]
-{product-title} administrators can deploy the Cluster Logging Operator and the
+{product-title} administrators can deploy the Red Hat OpenShift Logging Operator and the
OpenShift Elasticsearch Operator by using the {product-title} web console and can configure logging in the
`openshift-logging` namespace. Configuring logging will deploy Elasticsearch,
Fluentd, and Kibana in the `openshift-logging` namespace. The operators are
@@ -29,7 +29,7 @@ responsible for deploying, upgrading, and maintaining OpenShift Logging.
endif::openshift-dedicated[]
The `ClusterLogging` CR defines a complete OpenShift Logging environment that includes all the components
-of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the OpenShift Logging
+of the logging stack to collect, store and visualize logs. The Red Hat OpenShift Logging Operator watches the OpenShift Logging
CR and adjusts the logging deployment accordingly.
Administrators and application developers can view the logs of the projects for which they have view access.

View File

@@ -3,9 +3,9 @@
// * logging/cluster-logging-cluster-status.adoc
[id="cluster-logging-clo-status_{context}"]
-= Viewing the status of the Cluster Logging Operator
+= Viewing the status of the Red Hat OpenShift Logging Operator
-You can view the status of your Cluster Logging Operator.
+You can view the status of your Red Hat OpenShift Logging Operator.
.Prerequisites

View File

@@ -111,7 +111,7 @@ To use Mutual TLS (mTLS) authentication, see the link:https://docs.fluentd.org/o
$ oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging
----
-The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
+The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.
[source,terminal]
----

View File

@@ -109,7 +109,7 @@ rfc 3164 <5>
$ oc create configmap syslog --from-file=syslog.conf -n openshift-logging
----
-The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
pods to force them to redeploy.
[source,terminal]

View File

@@ -9,7 +9,7 @@ You can optionally forward logs to an external Elasticsearch instance in additio
To configure log forwarding to an external Elasticsearch instance, create a `ClusterLogForwarder` custom resource (CR) with an output to that instance and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection.
-To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Cluster Logging Operator.
+To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the `default` output to forward logs to the internal instance. You do not need to create a `default` output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Red Hat OpenShift Logging Operator.
[NOTE]
====
@@ -79,7 +79,7 @@ spec:
$ oc create -f <file-name>.yaml
----
-The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
pods to force them to redeploy.
[source,terminal]

View File

@@ -77,7 +77,7 @@ spec:
$ oc create -f <file-name>.yaml
----
-The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
pods to force them to redeploy.
[source,terminal]

View File

@@ -106,7 +106,7 @@ spec:
$ oc create -f <file-name>.yaml
----
-The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
pods to force them to redeploy.
[source,terminal]

View File

@@ -92,7 +92,7 @@ spec:
$ oc create -f <file-name>.yaml
----
-The Cluster Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
+The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd
pods to force them to redeploy.
[source,terminal]

View File

@@ -18,7 +18,7 @@ Forwarding cluster logs to external third-party systems requires a combination o
* `kafka`. A Kafka broker. The `kafka` output can use a TCP or TLS connection.
-* `default`. The internal {product-title} Elasticsearch instance. You are not required to configure the default output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Cluster Logging Operator.
+* `default`. The internal {product-title} Elasticsearch instance. You are not required to configure the default output. If you do configure a `default` output, you receive an error message because the `default` output is reserved for the Red Hat OpenShift Logging Operator.
--
+
If the output URL scheme requires TLS (HTTPS, TLS, or UDPS), then TLS server-side authentication is enabled. To also enable client authentication, the output must name a secret in the `openshift-logging` project. The secret must have keys of: *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent.
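For orientation, a secret of the shape the paragraph above describes might look as follows. This is a sketch, not part of the commit: the name `pipeline-secret` is hypothetical, and only the three key names (*tls.crt*, *tls.key*, *ca-bundle.crt*) come from the text.

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: pipeline-secret        # hypothetical name; use whatever name the output references
  namespace: openshift-logging # the secret must be in the openshift-logging project
type: Opaque
data:
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded client key>
  ca-bundle.crt: <base64-encoded CA bundle>
----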

View File

@@ -5,7 +5,7 @@
[id="cluster-logging-deploy-cli_{context}"]
= Installing OpenShift Logging using the CLI
-You can use the {product-title} CLI to install the OpenShift Elasticsearch and Cluster Logging Operators.
+You can use the {product-title} CLI to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators.
.Prerequisites
@@ -27,7 +27,7 @@ endif::[]
.Procedure
-To install the OpenShift Elasticsearch Operator and Cluster Logging Operator using the CLI:
+To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the CLI:
. Create a Namespace for the OpenShift Elasticsearch Operator.
@@ -68,9 +68,9 @@ For example:
$ oc create -f eo-namespace.yaml
----
-. Create a Namespace for the Cluster Logging Operator:
+. Create a Namespace for the Red Hat OpenShift Logging Operator:
-.. Create a Namespace object YAML file (for example, `clo-namespace.yaml`) for the Cluster Logging Operator:
+.. Create a Namespace object YAML file (for example, `olo-namespace.yaml`) for the Red Hat OpenShift Logging Operator:
+
[source,yaml]
----
@@ -95,7 +95,7 @@ For example:
+
[source,terminal]
----
-$ oc create -f clo-namespace.yaml
+$ oc create -f olo-namespace.yaml
----
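The body of the Namespace object is elided by the diff context above. As a sketch (the annotation and label are assumptions drawn from similar OpenShift Logging namespace objects, not from this commit), it is typically along these lines:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  annotations:
    openshift.io/node-selector: ""
  labels:
    openshift.io/cluster-monitoring: "true"
----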
. Install the OpenShift Elasticsearch Operator by creating the following objects:
@@ -190,9 +190,9 @@ openshift-authentication elasticsearch-operator.5
+
There should be an OpenShift Elasticsearch Operator in each Namespace. The version number might be different than shown.
-. Install the Cluster Logging Operator by creating the following objects:
+. Install the Red Hat OpenShift Logging Operator by creating the following objects:
-.. Create an OperatorGroup object YAML file (for example, `clo-og.yaml`) for the Cluster Logging Operator:
+.. Create an Operator Group object YAML file (for example, `olo-og.yaml`) for the Red Hat OpenShift Logging Operator:
+
[source,yaml]
----
@@ -218,11 +218,11 @@ For example:
+
[source,terminal]
----
-$ oc create -f clo-og.yaml
+$ oc create -f olo-og.yaml
----
-.. Create a Subscription object YAML file (for example, `clo-sub.yaml`) to
-subscribe a Namespace to the Cluster Logging Operator.
+.. Create a Subscription object YAML file (for example, `olo-sub.yaml`) to
+subscribe a Namespace to the Red Hat OpenShift Logging Operator.
+
[source,yaml]
----
@@ -250,14 +250,14 @@ For example:
+
[source,terminal]
----
-$ oc create -f clo-sub.yaml
+$ oc create -f olo-sub.yaml
----
+
-The Cluster Logging Operator is installed to the `openshift-logging` Namespace.
+The Red Hat OpenShift Logging Operator is installed to the `openshift-logging` Namespace.
.. Verify the Operator installation.
+
-There should be a Cluster Logging Operator in the `openshift-logging` Namespace. The Version number might be different than shown.
+There should be a Red Hat OpenShift Logging Operator in the `openshift-logging` Namespace. The Version number might be different than shown.
+
[source,terminal]
----
@@ -275,7 +275,7 @@ openshift-logging clusterlogging.5.0.0-202
. Create an OpenShift Logging instance:
-.. Create an instance object YAML file (for example, `clo-instance.yaml`) for the Cluster Logging Operator:
+.. Create an instance object YAML file (for example, `olo-instance.yaml`) for the Red Hat OpenShift Logging Operator:
+
[NOTE]
====
@@ -430,7 +430,7 @@ For example:
+
[source,terminal]
----
-$ oc create -f clo-instance.yaml
+$ oc create -f olo-instance.yaml
----
+
This creates the OpenShift Logging components, the `Elasticsearch` custom resource and components, and the Kibana interface.
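As a minimal, hedged sketch of the `ClusterLogging` instance this procedure creates (the full example is elided by the diff context; field values are illustrative and assume the standard `logging.openshift.io/v1` API group):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance               # the CR name must be `instance`
  namespace: openshift-logging # the CR must be created in the `openshift-logging` namespace
spec:
  managementState: Managed     # `Unmanaged` puts the operator in an unsupported state
  logStore:
    type: elasticsearch        # settings for the log store
  visualization:
    type: kibana               # settings for the visualizer
  collection:
    logs:
      type: fluentd            # settings for the log collector
----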

View File

@@ -5,7 +5,7 @@
[id="cluster-logging-deploy-console_{context}"]
= Installing OpenShift Logging using the web console
-You can use the {product-title} web console to install the OpenShift Elasticsearch and Cluster Logging Operators.
+You can use the {product-title} web console to install the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators.
.Prerequisites
@@ -27,7 +27,7 @@ endif::[]
.Procedure
-To install the OpenShift Elasticsearch Operator and Cluster Logging Operator using the {product-title} web console:
+To install the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the {product-title} web console:
. Install the OpenShift Elasticsearch Operator:
@@ -64,11 +64,11 @@ scrapes the `openshift-operators-redhat` namespace.
.. Ensure that *OpenShift Elasticsearch Operator* is listed in all projects with a *Status* of *Succeeded*.
-. Install the Cluster Logging Operator:
+. Install the Red Hat OpenShift Logging Operator:
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
-.. Choose *Cluster Logging* from the list of available Operators, and click *Install*.
+.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.
.. Ensure that the *A specific namespace on the cluster* is selected under *Installation Mode*.
@@ -90,9 +90,9 @@ scrapes the `openshift-logging` namespace.
.. Click *Install*.
-.. Verify that the Cluster Logging Operator installed by switching to the *Operators* → *Installed Operators* page.
+.. Verify that the Red Hat OpenShift Logging Operator installed by switching to the *Operators* → *Installed Operators* page.
-.. Ensure that *Cluster Logging* is listed in the *openshift-logging* project with a *Status* of *Succeeded*.
+.. Ensure that *Red Hat OpenShift Logging* is listed in the *openshift-logging* project with a *Status* of *Succeeded*.
+
If the Operator does not appear as installed, to troubleshoot further:
+
@@ -107,7 +107,7 @@ the *Status* column for any errors or failures.
.. On the *Custom Resource Definitions* page, click *ClusterLogging*.
-.. On the *Custom Resource Definition Overview* page, select *View Instances* from the *Actions* menu.
+.. On the *Custom Resource Definition details* page, select *View Instances* from the *Actions* menu.
.. On the *ClusterLoggings* page, click *Create ClusterLogging*.
+

View File

@@ -7,11 +7,11 @@
If you are deploying OpenShift Logging into a cluster that uses multitenant isolation mode, projects are isolated from other projects. As a result, network traffic is not allowed between pods or services in different projects.
-Because the OpenShift Elasticsearch Operator and the Cluster Logging Operator are installed in different projects, you must explicitly allow access between the `openshift-operators-redhat` and `openshift-logging` projects. How you allow this access depends on how you configured multitenant isolation mode.
+Because the OpenShift Elasticsearch Operator and the Red Hat OpenShift Logging Operator are installed in different projects, you must explicitly allow access between the `openshift-operators-redhat` and `openshift-logging` projects. How you allow this access depends on how you configured multitenant isolation mode.
.Procedure
-To allow traffic between the OpenShift Elasticsearch Operator and the Cluster Logging Operator, perform one of the following:
+To allow traffic between the OpenShift Elasticsearch Operator and the Red Hat OpenShift Logging Operator, perform one of the following:
* If you configured multitenant isolation mode with the OpenShift SDN CNI plug-in set to the *Multitenant* mode, use the following command to join the two projects:
+

View File

@@ -71,7 +71,7 @@ spec:
----
Elasticsearch storage::
-You can configure a persistent storage class and size for the Elasticsearch cluster using the `storageClass` `name` and `size` parameters. The Cluster Logging Operator creates a persistent volume claim (PVC) for each data node in the Elasticsearch cluster based on these parameters.
+You can configure a persistent storage class and size for the Elasticsearch cluster using the `storageClass` `name` and `size` parameters. The Red Hat OpenShift Logging Operator creates a persistent volume claim (PVC) for each data node in the Elasticsearch cluster based on these parameters.
----
spec:

View File

@@ -66,7 +66,7 @@ By default, logs are retained for seven days.
. You can verify the settings in the `Elasticsearch` custom resource (CR).
+
-For example, the Cluster Logging Operator updated the following
+For example, the Red Hat OpenShift Logging Operator updated the following
`Elasticsearch` CR to configure a retention policy that includes settings
to roll over active indices for the infrastructure logs every eight hours and
the rolled-over indices are deleted seven days after rollover. {product-title} checks

View File

@@ -5,9 +5,9 @@
[id="cluster-logging-maintenance-support-about_{context}"]
= About unsupported configurations
-The supported way of configuring OpenShift Logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across {product-title} releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Cluster Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.
+The supported way of configuring OpenShift Logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across {product-title} releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design.
[NOTE]
====
-If you _must_ perform configurations not described in the {product-title} documentation, you _must_ set your Cluster Logging Operator or OpenShift Elasticsearch Operator to *Unmanaged*. An unmanaged OpenShift Logging environment is _not supported_ and does not receive updates until you return OpenShift Logging to *Managed*.
+If you _must_ perform configurations not described in the {product-title} documentation, you _must_ set your Red Hat OpenShift Logging Operator or OpenShift Elasticsearch Operator to *Unmanaged*. An unmanaged OpenShift Logging environment is _not supported_ and does not receive updates until you return OpenShift Logging to *Managed*.
====

View File

@@ -5,7 +5,7 @@
[id="cluster-logging-maintenance-support-list_{context}"]
= Unsupported configurations
-You must set the Cluster Logging Operator to the unmanaged state to modify the following components:
+You must set the Red Hat OpenShift Logging Operator to the unmanaged state to modify the following components:
* the Curator cron job

View File

@@ -38,11 +38,11 @@ To remove OpenShift Logging:
.. Click the Options menu {kebab} next to *Elasticsearch* and select *Delete Custom Resource Definition*.
-. Optional: Remove the Cluster Logging Operator and OpenShift Elasticsearch Operator:
+. Optional: Remove the Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator:
.. Switch to the *Operators* -> *Installed Operators* page.
-.. Click the Options menu {kebab} next to the Cluster Logging Operator and select *Uninstall Operator*.
+.. Click the Options menu {kebab} next to the Red Hat OpenShift Logging Operator and select *Uninstall Operator*.
.. Click the Options menu {kebab} next to the OpenShift Elasticsearch Operator and select *Uninstall Operator*.

View File

@@ -66,7 +66,7 @@ Wait for the *Status* field to report *Succeeded*.
.. Wait for a few seconds, then click *Operators* -> *Installed Operators*.
+
-Verify that the Cluster Logging Operator version is 5.0.x.
+Verify that the Red Hat OpenShift Logging Operator version is 5.0.x.
+
Wait for the *Status* field to report *Succeeded*.

View File

@@ -7,15 +7,15 @@
= Installing OpenShift Logging and OpenShift Elasticsearch Operators
You can use the {product-title} console to install OpenShift Logging by deploying instances of
-the OpenShift Logging and OpenShift Elasticsearch Operators. The Cluster Logging Operator
+the OpenShift Logging and OpenShift Elasticsearch Operators. The Red Hat OpenShift Logging Operator
creates and manages the components of the logging stack. The OpenShift Elasticsearch Operator
creates and manages the Elasticsearch cluster used by OpenShift Logging.
[NOTE]
====
The OpenShift Logging solution requires that you install both the
-Cluster Logging Operator and OpenShift Elasticsearch Operator. When you deploy an instance
-of the Cluster Logging Operator, it also deploys an instance of the OpenShift Elasticsearch
+Red Hat OpenShift Logging Operator and OpenShift Elasticsearch Operator. When you deploy an instance
+of the Red Hat OpenShift Logging Operator, it also deploys an instance of the OpenShift Elasticsearch
Operator.
====
@@ -33,16 +33,16 @@ production deployments.
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
-.. Choose *Elasticsearch* from the list of available Operators, and click *Install*.
+.. Choose *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*.
.. On the *Install Operator* page, under *A specific namespace on the cluster* select *openshift-logging*.
Then, click *Install*.
-. Install the Cluster Logging Operator from the OperatorHub:
+. Install the Red Hat OpenShift Logging Operator from the OperatorHub:
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.
-.. Choose *Cluster Logging* from the list of available Operators, and click *Install*.
+.. Choose *Red Hat OpenShift Logging* from the list of available Operators, and click *Install*.
.. On the *Install Operator* page, under *A specific namespace on the cluster* select *openshift-logging*.
Then, click *Install*.
@@ -51,7 +51,7 @@ Then, click *Install*.
.. Switch to the *Operators* → *Installed Operators* page.
-.. Ensure that *Cluster Logging* and *Elasticsearch* Operators are listed in the
+.. Ensure that *Red Hat OpenShift Logging* and *OpenShift Elasticsearch* Operators are listed in the
*openshift-logging* project with a *Status* of *InstallSucceeded*.
+
[NOTE]
@@ -71,7 +71,7 @@ the *Status* column for any errors or failures.
.. Switch to the *Operators* → *Installed Operators* page.
-.. Click the installed *Cluster Logging* Operator.
+.. Click the installed *Red Hat OpenShift Logging* Operator.
.. Under the *Overview* tab, click *Create Instance* . Paste the following YAML
definition into the window that displays.

View File

@@ -107,7 +107,7 @@ $ oc adm must-gather \
<1> The default {product-title} `must-gather` image
<2> The must-gather image for {VirtProductName}
+
-You can use the `must-gather` tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Cluster Logging Operator in your cluster. For OpenShift Logging, run the following command:
+You can use the `must-gather` tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command:
+
[source,terminal]
----

View File

@@ -6,7 +6,7 @@
[id="infrastructure-moving-logging_{context}"]
= Moving OpenShift Logging resources
-You can configure the Cluster Logging Operator to deploy the pods for any or all of the OpenShift Logging components, Elasticsearch, Kibana, and Curator to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.
+You can configure the Red Hat OpenShift Logging Operator to deploy the pods for any or all of the OpenShift Logging components, Elasticsearch, Kibana, and Curator to different nodes. You cannot move the Red Hat OpenShift Logging Operator pod from its installed location.
For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

View File

@@ -5,4 +5,4 @@
[id="nodes-cluster-resource-override-deploy_{context}"]
= Installing the Cluster Resource Override Operator
-You can use the {product-title} console or CLI to install the Cluster Logging Operator.
+You can use the {product-title} console or CLI to install the Red Hat OpenShift Logging Operator.

View File

@@ -72,7 +72,7 @@ ifdef::olm-admin[]
*** *All namespaces on the cluster (default)* installs the Operator in the default `openshift-operators` namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
*** *A specific namespace on the cluster* allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
ifdef::openshift-dedicated[]
-If you are installing the Cluster Logging Operator, choose this option to select the `openshift-logging` namespace.
+If you are installing the Red Hat OpenShift Logging Operator, choose this option to select the `openshift-logging` namespace.
endif::[]
endif::[]
ifdef::olm-user[]

View File

@@ -15,4 +15,4 @@ access to logs:
To save your logs for further audit and analysis, you can enable the `cluster-logging` add-on
feature to collect, manage, and view system, container, and audit logs.
You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator
-and Cluster Logging Operator.
+and Red Hat OpenShift Logging Operator.

View File

@@ -21,7 +21,7 @@ An Operator can be set to an unmanaged state using the following methods:
+
Individual Operators have a `managementState` parameter in their configuration.
This can be accessed in different ways, depending on the Operator. For example,
-the Cluster Logging Operator accomplishes this by modifying a custom resource
+the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource
(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide
configuration resource.
+