Adding changes to 3.11 docs to 4.0
@@ -141,8 +141,6 @@ Topics:
  File: efk-logging-deploy
- Name: Uninstalling the EFK stack
  File: efk-logging-uninstall
-- Name: Troubleshooting Kibana
-  File: efk-logging-troubleshooting
- Name: Working with Elasticsearch
  File: efk-logging-elasticsearch
- Name: Working with Fluentd
@@ -159,5 +157,7 @@ Topics:
  File: efk-logging-manual-rollout
- Name: Configuring systemd-journald and rsyslog
  File: efk-logging-systemd
+- Name: Troubleshooting Kibana
+  File: efk-logging-troubleshooting
- Name: Exported fields
  File: efk-logging-exported-fields

@@ -14,7 +14,11 @@ toc::[]

include::modules/efk-logging-elasticsearch-ha.adoc[leveloffset=+1]

-include::modules/efk-logging-elasticsearch-persistent-storage.adoc[leveloffset=+1]
+include::modules/efk-logging-elasticsearch-persistent-storage-about.adoc[leveloffset=+1]
+
+include::modules/efk-logging-elasticsearch-persistent-storage-persistent.adoc[leveloffset=+2]
+
+include::modules/efk-logging-elasticsearch-persistent-storage-local.adoc[leveloffset=+2]

include::modules/efk-logging-elasticsearch-scaling.adoc[leveloffset=+1]

@@ -13,6 +13,14 @@ toc::[]
// assemblies.

+include::modules/efk-logging-fluentd-pod-location.adoc[leveloffset=+1]
+
+include::modules/efk-logging-fluentd-log-viewing.adoc[leveloffset=+1]
+
+include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]
+
+include::modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]
+
include::modules/efk-logging-external-fluentd.adoc[leveloffset=+1]

include::modules/efk-logging-fluentd-connections.adoc[leveloffset=+1]

@@ -7,7 +7,7 @@

{product-title} uses Fluentd to collect data about your cluster.

-Fluentd is deployed as a DaemonSet in {product-title} that deploys replicas according to a node
+Fluentd is deployed as a DaemonSet in {product-title} that deploys nodes according to a node
label selector, which you can specify with the inventory parameter
`openshift_logging_fluentd_nodeselector`; the default is `logging-infra-fluentd`.
As part of the OpenShift cluster installation, it is recommended that you add the

@@ -22,7 +22,7 @@ various areas of the EFK stack.
+
.. Ensure that you have deployed a router for the cluster.
+
-** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch replica
+** Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
requires its own storage volume.

. Specify a node selector
@@ -34,22 +34,3 @@ node selector should be used.
$ oc adm new-project logging --node-selector=""
----

-* Choose a project.
-+
-Once deployed, the EFK stack collects logs for every
-project within your {product-title} cluster. But the stack requires a dedicated project, by default *openshift-logging*.
-The Ansible playbook creates the project for you. You only need to create a project if you want
-to specify a node-selector on it.
-+
-----
-$ oc adm new-project logging --node-selector=""
-$ oc project logging
-----
-+
-[NOTE]
-====
-Specifying an empty node selector on the project is recommended, as Fluentd should be deployed
-throughout the cluster and any selector would restrict where it is
-deployed. To control component placement, specify node selectors per component to
-be applied to their deployment configurations.
-====

@@ -344,7 +344,7 @@ server cert. The default is the internal CA.
|The location of the client key Fluentd uses for `openshift_logging_es_host`.

|`openshift_logging_es_cluster_size`
-|Elasticsearch replicas to deploy. Redundancy requires at least three or more.
+|Elasticsearch nodes to deploy. Redundancy requires at least three or more.

|`openshift_logging_es_cpu_limit`
|The amount of CPU limit for the ES cluster.
@@ -377,7 +377,10 @@ openshift_logging_es_pvc_dynamic value.

|`openshift_logging_es_pvc_size`
|Size of the persistent volume claim to
create per Elasticsearch instance. For example, 100G. If omitted, no PVCs are
-created and ephemeral volumes are used instead. If this parameter is set, `openshift_logging_elasticsearch_storage_type` is set to `pvc`.
+created and ephemeral volumes are used instead. If you set this parameter, the logging installer sets `openshift_logging_elasticsearch_storage_type` to `pvc`.
+
+|`openshift_logging_elasticsearch_storage_type`
+|Sets the Elasticsearch storage type. If you are using Persistent Elasticsearch Storage, the logging installer sets this to `pvc`.

|`openshift_logging_elasticsearch_storage_type`
|Sets the Elasticsearch storage type. If you are using Persistent Elasticsearch Storage, set to `pvc`.

@@ -0,0 +1,67 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-about_{context}']
= Configuring persistent storage for Elasticsearch

By default, the `openshift_logging` Ansible role creates an ephemeral
deployment in which all of a pod's data is lost upon restart.

For production environments, each Elasticsearch deployment configuration requires a persistent storage volume. You can specify an existing persistent
volume claim or allow {product-title} to create one.

* *Use existing PVCs.* If you create your own PVCs for the deployment, {product-title} uses those PVCs.
+
Name the PVCs to match the `openshift_logging_es_pvc_prefix` setting, which defaults to
`logging-es`. Assign each PVC a name with a sequence number added to it: `logging-es-0`,
`logging-es-1`, `logging-es-2`, and so on.
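+
For example, a minimal sketch of pre-creating the first claim with `oc create`; the project name, storage size, and access mode here are illustrative assumptions:
+
----
$ oc create -n logging -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # name = openshift_logging_es_pvc_prefix default ("logging-es") + sequence number
  name: logging-es-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
EOF
----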

* *Allow {product-title} to create a PVC.* If a PVC for Elasticsearch does not exist, {product-title} creates the PVC based on parameters
in the Ansible inventory file, by default *_/etc/ansible/hosts_*.
+
[cols="3,7",options="header"]
|===
|Parameter
|Description

|`openshift_logging_es_pvc_size`
|Specify the size of the PVC request.

|`openshift_logging_elasticsearch_storage_type`
a|Specify the storage type as `pvc`.
[NOTE]
====
This is an optional parameter. Setting the `openshift_logging_es_pvc_size` parameter to a value greater than 0 automatically sets this parameter to `pvc`.
====

|`openshift_logging_es_pvc_prefix`
|Optionally, specify a custom prefix for the PVC.
|===
+
For example:
+
[source,bash]
----
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=104802308Ki
openshift_logging_es_pvc_prefix=es-logging
----

If you use dynamically provisioned PVs, the {product-title} logging installer creates PVCs
that use the default storage class or the storage class specified with the `openshift_logging_elasticsearch_pvc_storage_class_name` parameter.
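
For example, to pin the generated PVCs to a specific storage class rather than the default (the class name `gp2` is a placeholder):

[source,bash]
----
openshift_logging_elasticsearch_pvc_storage_class_name=gp2
----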

If you use NFS storage, the {product-title} installer creates the persistent volumes, based on the `openshift_logging_storage_*` parameters,
and the {product-title} logging installer creates PVCs, using the `openshift_logging_es_pvc_*` parameters.
Make sure you specify the correct parameters to use persistent volumes with EFK.
Also set the `openshift_enable_unsupported_configurations=true` parameter in the Ansible inventory file,
as the logging installer blocks the installation of NFS with core infrastructure by default.

[WARNING]
====
Using NFS storage as a volume or a persistent volume (or via NAS such as
Gluster) is not supported for Elasticsearch storage, as Lucene relies on file
system behavior that NFS does not supply. Data corruption and other problems can
occur.
====

@@ -0,0 +1,91 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-local_{context}']
= Configuring NFS as local storage for Elasticsearch

You can allocate a large file on an NFS server and mount the file to the nodes. You can then use the file as a host path device.

.Prerequisites

Allocate a large file on an NFS server and mount the file to the nodes:

----
$ mount -t nfs nfserver:/nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----
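
The commands above assume the backing file *_elasticsearch-1_* already exists on the NFS server. A sketch of allocating it, adapted from the earlier version of this procedure (the size and paths are illustrative):

----
$ truncate -s 1T /nfs/storage/elasticsearch-1
$ mkfs.xfs /nfs/storage/elasticsearch-1
----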

Then, use *_/usr/local/es-storage_* as a host-mount as described below.
Use a different backing file as storage for each Elasticsearch replica.

This loopback must be maintained manually outside of {product-title}, on the
node. You must not maintain it from inside a container.

.Procedure

To use a local disk volume (if available) on each
node host as storage for an Elasticsearch replica:

. The relevant service account must be given the privilege to mount and edit a
local volume:
+
----
$ oc adm policy add-scc-to-user privileged \
  system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the project you created earlier, for example, *logging*, when running the
logging playbook.

. Each Elasticsearch node definition must be patched to claim that privilege,
for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
       -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

. The Elasticsearch replicas must be located on the correct nodes to use the local
storage, and should not move around even if those nodes are taken down for a
period of time. This requires giving each Elasticsearch node a node selector
that is unique to a node where an administrator has allocated storage for it. To
configure a node selector, edit each Elasticsearch deployment configuration and
add or edit the *nodeSelector* section to specify a unique label that you have
applied for each desired node:
+
----
apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    spec:
      nodeSelector:
        logging-es-node: "1" <1>
----
<1> This label should uniquely identify a replica with a single node that bears that
label, in this case `logging-es-node=1`. Use the `oc label` command to apply
labels to nodes as needed (see the example after the `oc patch` alternative below).
+
To automate applying the node selector you can instead use the `oc patch` command:
+
----
$ oc patch dc/logging-es-<suffix> \
   -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
----
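+
For example, to apply the label from the `nodeSelector` above to a node (the node name is a placeholder):
+
----
$ oc label node node1.example.com logging-es-node=1
----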

. Apply a local host mount to each replica. The following example assumes storage is mounted at the same path on each node:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
          --add --overwrite --name=elasticsearch-storage \
          --type=hostPath --path=/usr/local/es-storage
    oc rollout latest $dc
    oc scale $dc --replicas=1
  done
----
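+
To confirm that each replica landed on its labeled node, you can check pod placement with the selector used above (a quick verification sketch):
+
----
$ oc get pods --selector logging-infra=elasticsearch -o wide
----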

@@ -0,0 +1,78 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage-persistent_{context}']
= Using NFS as a persistent volume for Elasticsearch

You can deploy NFS as an automatically provisioned persistent volume or by using a predefined NFS volume.

For more information, see _Sharing an NFS mount across two persistent volume claims_ to leverage shared storage for use by two separate containers.

*Using automatically provisioned NFS*

You can use NFS as a persistent volume where NFS is automatically provisioned.

.Procedure

. Add the following lines to the Ansible inventory file to create an NFS auto-provisioned storage class and dynamically provision the backing storage:
+
----
openshift_logging_es_pvc_storage_class_name=$nfsclass
openshift_logging_es_pvc_dynamic=true
----

. Use the following command to deploy the NFS volume using the logging playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
----

. Use the following steps to create a PVC:

.. Edit the Ansible inventory file to set the PVC size:
+
----
openshift_logging_es_pvc_size=50Gi
----
+
[NOTE]
====
The logging playbook selects a volume based on size and might use an unexpected volume if any other persistent volume has the same size.
====

.. Use the following command to rerun the Ansible *_deploy_cluster.yml_* playbook:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
+
The installer playbook creates the NFS volume based on the `openshift_logging_storage_*` variables.

*Using a predefined NFS volume*

You can deploy logging alongside the {product-title} cluster using an existing NFS volume.

.Procedure

. Edit the Ansible inventory file to configure the NFS volume and set the PVC size:
+
----
openshift_logging_storage_kind=nfs
openshift_enable_unsupported_configurations=true
openshift_logging_storage_access_modes=["ReadWriteOnce"]
openshift_logging_storage_nfs_directory=/srv/nfs
openshift_logging_storage_nfs_options=*(rw,root_squash)
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=100Gi
openshift_logging_storage_labels={:storage=>"logging"}
openshift_logging_install_logging=true
----

. Use the following command to redeploy the EFK stack:
+
----
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
----
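+
After the playbook completes, you can verify that the Elasticsearch claims were created and bound; a sketch, assuming the default *openshift-logging* project:
+
----
$ oc get pvc -n openshift-logging
----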

@@ -1,139 +0,0 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-elasticsearch.adoc

[id='efk-logging-elasticsearch-persistent-storage_{context}']
= Configuring persistent storage for Elasticsearch

By default, the `openshift_logging` Ansible role creates an ephemeral
deployment in which all of a pod's data is lost upon restart.

For production environments, each Elasticsearch deployment configuration requires a persistent storage volume. You can specify an existing persistent
volume claim or allow {product-title} to create one.

* *Use existing PVCs.* If you create your own PVCs for the deployment, {product-title} uses those PVCs.
+
Name the PVCs to match the `openshift_logging_es_pvc_prefix` setting, which defaults to
`logging-es`. Assign each PVC a name with a sequence number added to it: `logging-es-0`,
`logging-es-1`, `logging-es-2`, and so on.

* *Allow {product-title} to create a PVC.* If a PVC for Elasticsearch does not exist, {product-title} creates the PVC based on parameters
in the Ansible inventory file, by default *_/etc/ansible/hosts_*.
+
[cols="3,7",options="header"]
|===
|Parameter
|Description

|`openshift_logging_es_pvc_size`
|Specify the size of the PVC request.

|`openshift_logging_elasticsearch_storage_type`
a|Specify the storage type as `pvc`.
[NOTE]
====
This is an optional parameter. Setting the `openshift_logging_es_pvc_size` parameter to a value greater than 0 automatically sets this parameter to `pvc`.
====

|`openshift_logging_es_pvc_prefix`
|Optionally, specify a custom prefix for the PVC.
|===
+
For example:
+
[source,bash]
----
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_es_pvc_size=104802308Ki
openshift_logging_es_pvc_prefix=es-logging
----

[WARNING]
====
Using NFS storage as a volume or a persistent volume (or via NAS such as
Gluster) is not supported for Elasticsearch storage, as Lucene relies on file
system behavior that NFS does not supply. Data corruption and other problems can
occur. If NFS storage is required, you can allocate a large file on a
volume to serve as a storage device and mount it locally on one host.
For example, if your NFS storage volume is mounted at *_/nfs/storage_*:

----
$ truncate -s 1T /nfs/storage/elasticsearch-1
$ mkfs.xfs /nfs/storage/elasticsearch-1
$ mount -o loop /nfs/storage/elasticsearch-1 /usr/local/es-storage
$ chown 1000:1000 /usr/local/es-storage
----

Then, use *_/usr/local/es-storage_* as a host-mount as described below.
Use a different backing file as storage for each Elasticsearch replica.

This loopback must be maintained manually outside of {product-title}, on the
node. You must not maintain it from inside a container.
====

It is possible to use a local disk volume (if available) on each
node host as storage for an Elasticsearch replica. Doing so requires
some preparation as follows.

. The relevant service account must be given the privilege to mount and edit a
local volume:
+
----
$ oc adm policy add-scc-to-user privileged \
  system:serviceaccount:logging:aggregated-logging-elasticsearch <1>
----
<1> Use the project you created earlier (for example, *logging*) when running the
logging playbook.

. Each Elasticsearch replica definition must be patched to claim that privilege,
for example:
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc scale $dc --replicas=0
    oc patch $dc \
       -p '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","securityContext":{"privileged": true}}]}}}}'
  done
----

. The Elasticsearch replicas must be located on the correct nodes to use the local
storage, and should not move around even if those nodes are taken down for a
period of time. This requires giving each Elasticsearch replica a node selector
that is unique to a node where an administrator has allocated storage for it. To
configure a node selector, edit each Elasticsearch deployment configuration and
add or edit the *nodeSelector* section to specify a unique label that you have
applied for each desired node:
+
----
apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    spec:
      nodeSelector:
        logging-es-node: "1" <1>
----
<1> This label should uniquely identify a replica with a single node that bears that
label, in this case `logging-es-node=1`. Use the `oc label` command to apply
labels to nodes as needed.
+
To automate applying the node selector you can instead use the `oc patch` command:
+
----
$ oc patch dc/logging-es-<suffix> \
   -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-es-node":"1"}}}}}'
----

. Once these steps are taken, a local host mount can be applied to each replica
as in this example (where we assume storage is mounted at the same path on each node):
+
----
$ for dc in $(oc get deploymentconfig --selector logging-infra=elasticsearch -o name); do
    oc set volume $dc \
          --add --overwrite --name=elasticsearch-storage \
          --type=hostPath --path=/usr/local/es-storage
    oc rollout latest $dc
    oc scale $dc --replicas=1
  done
----

modules/efk-logging-fluentd-log-location.adoc Normal file

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-fluentd.adoc

[id='efk-logging-fluentd-log-location_{context}']
= Configuring Fluentd log location

Fluentd writes logs to a specified file or to the default location, `/var/log/fluentd/fluentd.log`, based on the `LOGGING_FILE_PATH` environment variable.

.Procedure

To set the output location for the Fluentd logs:

. Edit the `LOGGING_FILE_PATH` parameter
in the default inventory file. You can specify a particular file or `STDOUT`:
+
----
LOGGING_FILE_PATH=console <1>
LOGGING_FILE_PATH=<path-to-log/fluentd.log> <2>
----
<1> Sends the log output to STDOUT (the console). Retrieve the logs with the `oc logs -f <pod_name>` command.
<2> Sends the log output to the specified file. Retrieve the logs with the `oc exec <pod_name> -- logs` command.

. Re-run the logging installer playbook:
+
----
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i </path/to/inventory>] \
    playbooks/openshift-logging/config.yml
----
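+
To check which value is currently set on the daemonset, you can list its environment; a sketch, using the `logging-fluentd` daemonset name from the examples elsewhere in this guide:
+
----
$ oc set env ds/logging-fluentd --list | grep LOGGING_FILE_PATH
----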

modules/efk-logging-fluentd-log-rotation.adoc Normal file

@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-fluentd.adoc

[id='efk-logging-fluentd-log-rotation_{context}']
= Configuring Fluentd log rotation

When the current Fluentd log file reaches a specified size, {product-title} automatically renames the *fluentd.log* log file so that new logging data can be collected.
Log rotation is enabled by default.

The following example shows logs in a cluster where the maximum log size is 1 MB and four logs are retained. When *fluentd.log* reaches 1 MB, {product-title}
deletes the current *fluentd.log.4*, renames each of the Fluentd logs in turn, and creates a new *fluentd.log*.

----
fluentd.log     0b
fluentd.log.1  1Mb
fluentd.log.2  1Mb
fluentd.log.3  1Mb
fluentd.log.4  1Mb
----

.Procedure

You can control the size of the Fluentd log files and how many of the renamed files {product-title} retains by using
environment variables.

.Parameters for configuring Fluentd log rotation
[cols="3,7",options="header"]
|===
|Parameter
|Description

| `LOGGING_FILE_SIZE` | The maximum size of a single Fluentd log file in bytes. If the size of the *fluentd.log* file exceeds this value, {product-title} renames the *fluentd.log.** files and creates a new *fluentd.log*. The default is 1024000 (1 MB).
| `LOGGING_FILE_AGE` | The number of logs that Fluentd retains before deleting. The default value is `10`.
|===

For example:

----
$ oc set env ds/logging-fluentd LOGGING_FILE_AGE=30 LOGGING_FILE_SIZE=1024000
----
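
You can inspect the rotated files inside a collector pod to confirm the rotation behavior; a sketch (the pod name is a placeholder):

----
$ oc exec <fluentd_pod> -- ls -l /var/log/fluentd/
----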

Turn off log rotation by setting `LOGGING_FILE_PATH=console`.
This causes Fluentd to write logs to STDOUT, where you can retrieve them using the `oc logs -f <pod_name>` command.

----
oc set env ds/logging-fluentd LOGGING_FILE_PATH=console
----

modules/efk-logging-fluentd-log-viewing.adoc Normal file

@@ -0,0 +1,32 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-fluentd.adoc

[id='efk-logging-fluentd-viewing_{context}']
= Viewing Fluentd logs

How you view logs depends upon the `LOGGING_FILE_PATH` setting.

* If `LOGGING_FILE_PATH` points to a file, use the *logs* utility to print out the contents of Fluentd log files:
+
----
oc exec <pod> -- logs <1>
----
<1> Specify the name of the Fluentd pod. Note the space before `logs`.
+
For example:
+
----
oc exec logging-fluentd-lmvms -- logs
----
+
The contents of log files are printed out, starting with the oldest log. Use the `-f` option to follow what is being written into the logs.

* If you are using `LOGGING_FILE_PATH=console`, Fluentd writes logs to STDOUT. You can retrieve the logs with the `oc logs -f <pod_name>` command.
+
For example:
+
----
oc logs -f logging-fluentd-lmvms
----

modules/efk-logging-fluentd-pod-location.adoc Normal file

@@ -0,0 +1,21 @@
// Module included in the following assemblies:
//
// * logging/efk-logging-fluentd.adoc

[id='efk-logging-fluentd-pod-location_{context}']
= Viewing Fluentd pods

You can use the `oc get pods -o wide` command to see the nodes where the Fluentd pods are deployed.

.Procedure

Run the following command:

----
$ oc get pods -o wide

NAME                                      READY     STATUS    RESTARTS   AGE       IP             NODE                         NOMINATED NODE
logging-es-data-master-5av030lk-1-2x494   2/2       Running   0          38m       154.128.0.80   ip-153-12-8-6.wef.internal   <none>
logging-fluentd-lqdxg                     1/1       Running   0          2m        154.128.0.85   ip-153-12-8-6.wef.internal   <none>
logging-kibana-1-gj5kc                    2/2       Running   0          39m       154.128.0.77   ip-153-12-8-6.wef.internal   <none>
----
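
To narrow the output to only the Fluentd pods, you can filter by label; a sketch, where the `logging-infra=fluentd` label is an assumption modeled on the `logging-infra=elasticsearch` selector used elsewhere in these docs:

----
$ oc get pods --selector logging-infra=fluentd -o wide
----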

@@ -100,4 +100,3 @@ Use the `oc patch` command to modify the daemonset nodeSelector:
----
oc patch ds logging-fluentd -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-fluentd":"true"}}}}}'
----

@@ -52,7 +52,7 @@ $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> --
-d '{ "transient": { "cluster.routing.allocation.enable" : "none" } }'
----

-. Once complete, for each `dc` you have for an ES cluster, scale down all replicas:
+. Once complete, for each `dc` you have for an ES cluster, scale down all nodes:
+
----
$ oc scale dc <dc_name> --replicas=0
@@ -69,7 +69,7 @@ You will see a new pod deployed. Once the pod has two ready containers, you can
move on to the next `dc`.

. Once deployment is complete, for each `dc` you have for an ES cluster, scale up
-replicas:
+nodes:
+
----
$ oc scale dc <dc_name> --replicas=1
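
When all deployment configurations are scaled back up, shard allocation is presumably re-enabled by repeating the earlier `oc exec` command with the transient setting changed from `none` to `all`; only the changed payload line is sketched here:

----
-d '{ "transient": { "cluster.routing.allocation.enable" : "all" } }'
----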