mirror of https://github.com/openshift/openshift-docs.git
synced 2026-02-05 12:46:18 +01:00

Removing prereq to get name of CL CR

committed by openshift-cherrypick-robot
parent d6720eacb5
commit 8d801d8182
@@ -7,16 +7,6 @@
 
 Each component specification allows the component to target a specific node.
 
-.Prerequisite
-
-If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
@@ -8,26 +8,20 @@
 You can specify the schedule for Curator using the cluster logging Custom Resource
 created by the cluster logging installation.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-To configure the Curator schedule, edit the Cluster Logging Custom Resource:
+To configure the Curator schedule:
 
+. Edit the Cluster Logging Custom Resource in the `openshift-logging` project:
++
+----
+$ oc edit clusterlogging instance
+----
++
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
@@ -42,9 +36,8 @@ metadata:
     schedule: 30 3 * * * <1>
     type: curator
 ----
 
 <1> Specify the schedule for Curator in link:https://en.wikipedia.org/wiki/Cron[cron format].
-
++
 [NOTE]
 ====
 The time zone is set based on the host node where the Curator pod runs.
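The Curator `schedule` field shown in this hunk (`30 3 * * *`) is a standard five-field cron expression: minute, hour, day of month, month, day of week. A minimal sketch of how such an expression splits into fields (the function and field names are illustrative, not part of the Curator operator):

```python
# Split a five-field cron expression such as the Curator schedule
# "30 3 * * *" into named fields. "30 3 * * *" means minute 30,
# hour 3, every day -> Curator runs daily at 03:30 node-local time.
def parse_cron(expr):
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("expected 5 cron fields, got %d" % len(fields))
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, fields))

schedule = parse_cron("30 3 * * *")
```

Note the diff's own caveat: the operator interprets this schedule in the time zone of the node hosting the Curator pod.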
@@ -14,22 +14,18 @@ For example, if you want to increase redundancy, and use the `FullRedundancy` or
 The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Elasticsearch Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 . To scale up the cluster, edit the Elasticsearch Custom Resource (CR) to add a number of nodes of a specific type:
 +
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
@@ -50,7 +46,11 @@ metadata:
 <1> Specify the number of Elasticsearch nodes. This example adds two nodes to the default 3. The new nodes will be Data-only nodes.
 
 ////
-. To scale down, edit the Cluster Logging Custom Resource (CR) to reduce the number of nodes of a specific type:
+. To scale down, edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project to reduce the number of nodes of a specific type:
++
+----
+oc edit clusterlogging instance
+----
 +
 [source,yaml]
 ----
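The role split described in this hunk — at most three master-eligible nodes, any additional nodes data-only — can be sketched as a small helper (the function name and return shape are mine, not operator API):

```python
# Sketch of the Elasticsearch role assignment described above: with
# nodeCount > 3, the first three nodes get the master, client, and
# data roles (master-eligible); the rest get only client and data
# (data-only). Illustrative only; the operator does this internally.
def node_roles(node_count):
    roles = []
    for i in range(node_count):
        if i < 3:
            roles.append({"master", "client", "data"})
        else:
            roles.append({"client", "data"})
    return roles

# The diff's example scales to 5 nodes: 3 master-eligible, 2 data-only.
cluster = node_roles(5)
```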
@@ -21,22 +21,18 @@ Use this mode when you are more concerned with performance than safety, or have
 implemented your own disk/PVC backup/restore strategy.
 
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+oc edit clusterlogging instance
+----
++
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
@@ -9,26 +9,20 @@ Each component specification allows for adjustments to both the CPU and memory l
 You should not have to manually adjust these values as the Elasticsearch
 Operator sets values sufficient for your environment.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
-[source,yaml]
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+
+----
++
+[source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -46,6 +40,5 @@ spec:
         cpu: "100m"
         memory: "1Gi"
 ----
-
 <1> Specify the CPU and memory limits as needed. If you leave these values blank,
 the Elasticsearch Operator sets default values that should be sufficient for most deployments.
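The limits in this hunk use Kubernetes-style quantity strings: `cpu: "100m"` is millicores (0.1 cores) and `memory: "1Gi"` is a binary-suffixed byte count. A minimal sketch of reading the suffixes that appear in this diff (helper names are mine; real clusters accept more suffixes such as `k`, `M`, `G`, `Ti`):

```python
# Convert the quantity strings used in the diff's resource limits.
def cpu_to_cores(q):
    # "100m" -> 0.1 cores; a plain number like "2" -> 2.0 cores
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def memory_to_bytes(q):
    # Binary suffixes only: Ki, Mi, Gi (1Gi = 1024**3 bytes)
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)
```

So the example limits above request one tenth of a CPU core and one gibibyte of memory per Elasticsearch node.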
@@ -13,18 +13,10 @@ deployment in which all of a pod's data is lost upon restart.
 When using emptyDir, you will lose data if Elasticsearch is restarted or redeployed.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 . Edit the Cluster Logging CR to specify emptyDir:
@@ -11,14 +11,6 @@ Elasticsearch requires persistent storage. The faster the storage, the faster t
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 . Edit the Cluster Logging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. This example requests 200G of General Purpose SSD (gp2) storage.
@@ -25,26 +25,20 @@ https://access.redhat.com/support/offerings/techpreview/.
 endif::[]
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Set cluster logging to the unmanaged state. In managed state, the Cluster Logging Operator reverts changes made to the `logging-curator` configuration map.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
-[source,yaml]
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+
+----
++
+[source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -59,6 +53,5 @@ nodeSpec:
     logs:
       type: "fluentd" <1>
 ----
-
 <1> Set the log collector to `fluentd`, the default, or `rsyslog`.
 
@@ -7,20 +7,14 @@
 
 Each component specification allows for adjustments to both the CPU and memory limits.
 
-.Prerequisite
-
-If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -44,5 +38,4 @@ spec:
       cpu: 250m
       memory: 1Gi
 ----
-
 <1> Specify the CPU and memory limits as needed. The values shown are the default values.
@@ -7,24 +7,16 @@
 
 Each component specification allows for adjustments to both the CPU and memory limits.
 
-.Prerequisite
-
-* If needed, get the name of the Cluster Logging Custom Resource from the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
-[source,yaml]
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+
+----
++
+[source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -52,6 +44,5 @@ spec:
         memory: 100Mi
 
 ----
-
 <1> Specify the CPU and memory limits to allocate for each node.
 <2> Specify the CPU and memory limits to allocate to the Kibana proxy.
@@ -7,20 +7,14 @@
 
 You can scale the Kibana deployment for redundancy.
 
-.Prerequisite
-
-If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
-.Procedure
-
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+..Procedure
+
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -38,6 +32,5 @@ spec:
   kibana:
     replicas: 1 <1>
 ----
-
 <1> Specify the number of Kibana nodes.
 
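The `replicas` field in this hunk is what provides the Kibana redundancy the file describes: a rough sketch of the relationship, assuming each replica is an independent Kibana pod (the helper is illustrative, not operator API):

```python
# Sketch: to keep Kibana reachable through f simultaneous pod
# failures you need at least f + 1 replicas. The diff's default,
# replicas: 1, therefore tolerates no pod failure.
def replicas_for_redundancy(failures_tolerated):
    if failures_tolerated < 0:
        raise ValueError("failures_tolerated must be >= 0")
    return failures_tolerated + 1
```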
@@ -21,23 +21,18 @@ If you make changes to these components in managed state, the Cluster Logging Op
 An unmanaged cluster logging environment does not receive updates until you return the Cluster Logging Operator to Managed state.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * The Cluster Logging Operator must be installed.
 
-* Have the name of the custom logging CR, in the `openshift-logging` project:
-+
-----
-$ oc get -n openshift-logging ClusterLogging
-
-NAME       AGE
-instance   48m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -52,5 +47,4 @@ metadata:
 spec:
   managementState: "Managed" <1>
 ----
-
 <1> Specify the management state as `Managed` or `Unmanaged`.
@@ -7,18 +7,10 @@
 
 You can remove cluster logging from your cluster.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 To remove cluster logging:
@@ -21,17 +21,13 @@ You should set your MachineSet to use at least 6 replicas.
 
 * Cluster logging and Elasticsearch must be installed. These features are not installed by default.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the openshift-logging project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-. Edit the Cluster Logging Custom Resource:
+. Edit the Cluster Logging Custom Resource in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
 +
 ----
 apiVersion: logging.openshift.io/v1
 