diff --git a/modules/efk-logging-configuring-node-selector.adoc b/modules/efk-logging-configuring-node-selector.adoc
index cd9c051472..ea0acc6639 100644
--- a/modules/efk-logging-configuring-node-selector.adoc
+++ b/modules/efk-logging-configuring-node-selector.adoc
@@ -7,16 +7,6 @@
 Each component specification allows the component to target a specific node.
 
-.Prerequisite
-
-If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
 
diff --git a/modules/efk-logging-curator-schedule.adoc b/modules/efk-logging-curator-schedule.adoc
index 396a2c8919..bfeca3f870 100644
--- a/modules/efk-logging-curator-schedule.adoc
+++ b/modules/efk-logging-curator-schedule.adoc
@@ -8,26 +8,20 @@
 You can specify the schedule for Curator using the cluster logging Custom Resource created by the cluster logging installation.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-To configure the Curator schedule, edit the Cluster Logging Custom Resource:
+To configure the Curator schedule:
 
+. Edit the Cluster Logging Custom Resource in the `openshift-logging` project:
++
 ----
 $ oc edit clusterlogging instance
 ----
-
++
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
@@ -42,9 +36,8 @@ metadata:
       schedule: 30 3 * * * <1>
     type: curator
 ----
-
 <1> Specify the schedule for Curator in link:https://en.wikipedia.org/wiki/Cron[cron format].
-
++
 [NOTE]
 ====
 The time zone is set based on the host node where the Curator pod runs.
diff --git a/modules/efk-logging-elasticsearch-add-remove.adoc b/modules/efk-logging-elasticsearch-add-remove.adoc
index 40e14dbed4..10f8d79de1 100644
--- a/modules/efk-logging-elasticsearch-add-remove.adoc
+++ b/modules/efk-logging-elasticsearch-add-remove.adoc
@@ -14,22 +14,18 @@ For example, if you want to increase redundancy, and use the `FullRedundancy` or
 The maximum number of Elasticsearch master nodes is three. If you specify a `nodeCount` greater than `3`, {product-title} creates three Elasticsearch nodes that are Master-eligible nodes, with the master, client, and data roles. The additional Elasticsearch nodes are created as Data-only nodes, using client and data roles. Master nodes perform cluster-wide actions such as creating or deleting an index, shard allocation, and tracking nodes. Data nodes hold the shards and perform data-related operations such as CRUD, search, and aggregations. Data-related operations are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and to add more Data nodes if the current nodes are overloaded.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Elasticsearch Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 . To scale up the cluster, edit the Elasticsearch Custom Resource (CR) to add a number of nodes of a specific type:
 +
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
@@ -50,7 +46,11 @@ metadata:
 <1> Specify the number of Elasticsearch nodes. This example adds two nodes to the default 3.
 The new nodes will be Data-only nodes.
 ////
-. To scale down, edit the Cluster Logging Custom Resource (CR) to reduce the number of nodes of a specific type:
+. To scale down, edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project to reduce the number of nodes of a specific type:
++
+----
+$ oc edit clusterlogging instance
+----
 +
 [source,yaml]
 ----
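
After changing `nodeCount` as the module above describes, it is worth confirming that the operator actually created the additional nodes. A minimal sketch of such a check, assuming the Elasticsearch pods carry the `component=elasticsearch` label (an assumption, not taken from this patch):

----
# List the generated Elasticsearch resource and its pods
$ oc -n openshift-logging get elasticsearch
$ oc -n openshift-logging get pods -l component=elasticsearch
----

The number of running Elasticsearch pods should track the requested node count.
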
diff --git a/modules/efk-logging-elasticsearch-ha.adoc b/modules/efk-logging-elasticsearch-ha.adoc
index da19a78439..6588a02fec 100644
--- a/modules/efk-logging-elasticsearch-ha.adoc
+++ b/modules/efk-logging-elasticsearch-ha.adoc
@@ -21,22 +21,18 @@ Use this mode when you are more concerned with performance than safety, or have
 implemented your own disk/PVC backup/restore strategy.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit clusterlogging instance
+----
++
 [source,yaml]
 ----
 apiVersion: "logging.openshift.io/v1"
diff --git a/modules/efk-logging-elasticsearch-limits.adoc b/modules/efk-logging-elasticsearch-limits.adoc
index e0d297e399..f15c9b8b41 100644
--- a/modules/efk-logging-elasticsearch-limits.adoc
+++ b/modules/efk-logging-elasticsearch-limits.adoc
@@ -9,26 +9,20 @@ Each component specification allows for adjustments to both the CPU and memory l
 You should not have to manually adjust these values as the Elasticsearch Operator sets values sufficient for your environment.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
-[source,yaml]
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
 ----
 $ oc edit ClusterLogging instance
-
+----
++
+[source,yaml]
+----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -46,6 +40,5 @@ spec:
       cpu: "100m"
       memory: "1Gi"
 ----
-
 <1> Specify the CPU and memory limits as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments.
diff --git a/modules/efk-logging-elasticsearch-persistent-storage-empty.adoc b/modules/efk-logging-elasticsearch-persistent-storage-empty.adoc
index 7acd27b392..f68d405f5b 100644
--- a/modules/efk-logging-elasticsearch-persistent-storage-empty.adoc
+++ b/modules/efk-logging-elasticsearch-persistent-storage-empty.adoc
@@ -13,18 +13,10 @@ deployment in which all of a pod's data is lost upon restart.
 When using emptyDir, you will lose data if Elasticsearch is restarted or redeployed.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 . Edit the Cluster Logging CR to specify emptyDir:
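
Because emptyDir storage is ephemeral, it leaves no Persistent Volume Claims behind, so an empty PVC list in the project is a quick way to confirm that Elasticsearch is running without persistent storage. A minimal sketch, assuming the `openshift-logging` project used throughout these modules:

----
# No PVCs are expected while the log store uses emptyDir
$ oc -n openshift-logging get pvc
----
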
diff --git a/modules/efk-logging-elasticsearch-storage.adoc b/modules/efk-logging-elasticsearch-storage.adoc
index 3129d69521..317c817383 100644
--- a/modules/efk-logging-elasticsearch-storage.adoc
+++ b/modules/efk-logging-elasticsearch-storage.adoc
@@ -11,14 +11,6 @@ Elasticsearch requires persistent storage. The faster the storage, the faster t
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 . Edit the Cluster Logging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. This example requests 200G of General Purpose SSD (gp2) storage.
diff --git a/modules/efk-logging-fluentd-collector.adoc b/modules/efk-logging-fluentd-collector.adoc
index c49f99c72e..a69b6c50db 100644
--- a/modules/efk-logging-fluentd-collector.adoc
+++ b/modules/efk-logging-fluentd-collector.adoc
@@ -25,26 +25,20 @@ https://access.redhat.com/support/offerings/techpreview/.
 endif::[]
 ====
 
-.Prerequisite
+.Prerequisites
 
 * Set cluster logging to the unmanaged state. In managed state, the Cluster Logging Operator reverts changes made to the `logging-curator` configuration map.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
-[source,yaml]
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
 ----
 $ oc edit ClusterLogging instance
-
+----
++
+[source,yaml]
+----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -59,6 +53,5 @@ nodeSpec:
   logs:
     type: "fluentd" <1>
 ----
-
 <1> Set the log collector to `fluentd`, the default, or `rsyslog`.
diff --git a/modules/efk-logging-fluentd-limits.adoc b/modules/efk-logging-fluentd-limits.adoc
index 631aa8eb2a..1d32acaefb 100644
--- a/modules/efk-logging-fluentd-limits.adoc
+++ b/modules/efk-logging-fluentd-limits.adoc
@@ -7,20 +7,14 @@
 Each component specification allows for adjustments to both the CPU and memory limits.
 
-.Prerequisite
-
-If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -44,5 +38,4 @@ spec:
       cpu: 250m
       memory: 1Gi
 ----
-
 <1> Specify the CPU and memory limits as needed. The values shown are the default values.
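
Before adjusting the Fluentd limits above, it can help to compare them against observed usage. A minimal sketch, assuming cluster metrics are available and that the collector pods carry a `component=fluentd` label (both are assumptions, not taken from this patch):

----
# Show current CPU and memory consumption of the collector pods
$ oc -n openshift-logging adm top pods -l component=fluentd
----
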
diff --git a/modules/efk-logging-kibana-limits.adoc b/modules/efk-logging-kibana-limits.adoc
index 4fcebbae05..0d234b7ed8 100644
--- a/modules/efk-logging-kibana-limits.adoc
+++ b/modules/efk-logging-kibana-limits.adoc
@@ -7,24 +7,16 @@
 Each component specification allows for adjustments to both the CPU and memory limits.
 
-.Prerequisite
-
-* If needed, get the name of the Cluster Logging Custom Resource from the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
-[source,yaml]
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
 ----
 $ oc edit ClusterLogging instance
-
+----
++
+[source,yaml]
+----
 apiVersion: "logging.openshift.io/v1"
 kind: "ClusterLogging"
 metadata:
@@ -52,6 +44,5 @@ spec:
         memory: 100Mi
 ----
-
 <1> Specify the CPU and memory limits to allocate for each node.
 <2> Specify the CPU and memory limits to allocate to the Kibana proxy.
diff --git a/modules/efk-logging-kibana-scaling.adoc b/modules/efk-logging-kibana-scaling.adoc
index fb39b69b7e..c8a5017d4f 100644
--- a/modules/efk-logging-kibana-scaling.adoc
+++ b/modules/efk-logging-kibana-scaling.adoc
@@ -7,20 +7,14 @@
 You can scale the Kibana deployment for redundancy.
 
-.Prerequisite
-
-If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
+.Procedure
 
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
 ----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
+$ oc edit ClusterLogging instance
 ----
-
-.Procedure
-
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
++
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -38,6 +32,5 @@ spec:
   kibana:
     replicas: 1 <1>
 ----
-
 <1> Specify the number of Kibana nodes.
diff --git a/modules/efk-logging-management-state-changing.adoc b/modules/efk-logging-management-state-changing.adoc
index bf58eb5b75..c87eb5d70b 100644
--- a/modules/efk-logging-management-state-changing.adoc
+++ b/modules/efk-logging-management-state-changing.adoc
@@ -21,23 +21,18 @@ If you make changes to these components in managed state, the Cluster Logging Op
 An unmanaged cluster logging environment does not receive updates until you return the Cluster Logging Operator to Managed state.
 ====
 
-.Prerequisite
+.Prerequisites
 
 * The Cluster Logging Operator must be installed.
 
-* Have the name of the custom logging CR, in the `openshift-logging` project:
-+
-----
-$ oc get -n openshift-logging ClusterLogging
-
-NAME         AGE
-instance     48m
-----
-
 .Procedure
 
-Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
-
+. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
++
 [source,yaml]
 ----
 $ oc edit ClusterLogging instance
@@ -52,5 +47,4 @@ metadata:
 spec:
   managementState: "Managed" <1>
 ----
-
 <1> Specify the management state as `Managed` or `Unmanaged`.
diff --git a/modules/efk-logging-uninstall-efk.adoc b/modules/efk-logging-uninstall-efk.adoc
index 8319ab7227..652d8dd92a 100644
--- a/modules/efk-logging-uninstall-efk.adoc
+++ b/modules/efk-logging-uninstall-efk.adoc
@@ -7,18 +7,10 @@
 You can remove cluster logging from your cluster.
 
-.Prerequisite
+.Prerequisites
 
 * Cluster logging and Elasticsearch must be installed.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the `openshift-logging` project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
 To remove cluster logging:
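
A common first step when removing cluster logging is deleting the ClusterLogging instance that the other modules edit. A minimal sketch, assuming the default `instance` name shown throughout; the full removal procedure is not shown in this hunk:

----
# Delete the ClusterLogging Custom Resource
$ oc -n openshift-logging delete clusterlogging instance
----
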
diff --git a/modules/infrastructure-moving-logging.adoc b/modules/infrastructure-moving-logging.adoc
index 589f5a8b08..63e8603755 100644
--- a/modules/infrastructure-moving-logging.adoc
+++ b/modules/infrastructure-moving-logging.adoc
@@ -21,17 +21,13 @@ You should set your MachineSet to use at least 6 replicas.
 * Cluster logging and Elasticsearch must be installed. These features are not installed by default.
 
-* If needed, get the name of the Cluster Logging Custom Resource in the openshift-logging project:
-+
-----
-$ oc get ClusterLogging
-NAME       AGE
-instance   112m
-----
-
 .Procedure
 
-. Edit the Cluster Logging Custom Resource:
+. Edit the Cluster Logging Custom Resource in the `openshift-logging` project:
++
+----
+$ oc edit ClusterLogging instance
+----
 +
 ----
 apiVersion: logging.openshift.io/v1
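
The module above relocates the logging stack onto infrastructure nodes, which are typically identified by the `node-role.kubernetes.io/infra` label. A quick way to list candidate nodes, sketched here and not drawn from the patch itself:

----
# List nodes that carry the infra role label
$ oc get nodes -l node-role.kubernetes.io/infra
----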