// Module included in the following assemblies:
//
// * etcd/etcd-performance.adoc

:_mod-docs-content-type: PROCEDURE
[id="etcd-database-size_{context}"]
= Determining the size of the etcd database and understanding its effects

The size of the etcd database has a direct impact on the time to complete the etcd defragmentation process. {product-title} automatically runs etcd defragmentation on one etcd member at a time when it detects at least 45% fragmentation. During the defragmentation process, the etcd member cannot process any requests. On small etcd databases, the defragmentation process completes in less than a second. With larger etcd databases, the disk latency directly impacts the defragmentation time, causing additional latency, because operations are blocked while defragmentation happens.

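The 45% threshold can be understood as the share of the on-disk database file that is no longer in use. As a rough sketch with hypothetical sample numbers (the `dbSize` and `dbSizeInUse` byte counts are reported by `etcdctl endpoint status -w json`):

[source,terminal]
----
# Hypothetical sample values in bytes: 1.1 GB on disk, 550 MB in use
DB_SIZE=1100000000
DB_SIZE_IN_USE=550000000
# Fragmentation percentage = (dbSize - dbSizeInUse) / dbSize * 100
echo $(( (DB_SIZE - DB_SIZE_IN_USE) * 100 / DB_SIZE ))
----

In this hypothetical case the result is 50, which is above the 45% threshold, so defragmentation would be triggered.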
The size of the etcd database is a factor to consider when network partitions isolate a control plane node for a period of time and the control plane needs to resync after communication is re-established.

Minimal options exist for controlling the size of the etcd database, because it depends on the operators and applications in the system. When you consider the latency range under which the system will operate, account for the effects of synchronization or defragmentation per size of the etcd database.

The magnitude of the effects is specific to the deployment. The time to complete a defragmentation causes degradation in the transaction rate, because the etcd member cannot accept updates during the defragmentation process. Similarly, for large databases with a high change rate, the time required for etcd re-synchronization affects the transaction rate and transaction latency on the system.

Consider the following two examples for the type of impacts to plan for.

Example of the effect of etcd defragmentation based on database size:: Writing an etcd database of 1 GB to a slow 7200 RPM disk at 80 Mbit/s takes about 1 minute and 40 seconds. In such a scenario, the defragmentation process takes at least this long to complete.
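The timing in this example is simple arithmetic, which you can verify in a shell (1 GB taken as 8,000 megabits):

[source,terminal]
----
# 1 GB = 8000 Mbit; writing at 80 Mbit/s:
echo "$(( 8000 / 80 )) seconds"
----

This prints `100 seconds`, the 1 minute and 40 seconds quoted above.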

Example of the effect of database size on etcd synchronization:: If 10% of a 1 GB etcd database changes during the disconnection of one of the control plane nodes, the resync needs to transfer at least 100 MB. Transferring 100 MB over a 1 Gbps link takes 800 ms. On clusters with regular Kubernetes API transactions, the larger the etcd database size, the more likely network instability is to cause control plane instability.

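Again, the transfer-time estimate follows directly from the numbers (100 MB taken as 800 megabits):

[source,terminal]
----
# 100 MB = 800 Mbit; transferring over a 1 Gbps (1000 Mbit/s) link, in milliseconds:
echo "$(( 800 * 1000 / 1000 )) ms"
----
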
You can determine the size of an etcd database by using the {product-title} console or by running commands with the `etcdctl` tool.

.Procedure

* To find the database size in the {product-title} console, go to the *etcd* dashboard to view a plot that reports the size of the etcd database.

* To find the database size by using the `etcdctl` tool, enter the following two commands:

.. Enter the following command to list the pods:
+
[source,terminal]
----
# oc get pods -n openshift-etcd -l app=etcd
----
+
.Example output
[source,terminal]
----
NAME      READY   STATUS    RESTARTS   AGE
etcd-m0   4/4     Running   4          22h
etcd-m1   4/4     Running   4          22h
etcd-m2   4/4     Running   4          22h
----

.. Enter the following command and view the database size in the output:
+
[source,terminal]
----
# oc exec -t etcd-m0 -- etcdctl endpoint status -w simple | cut -d, -f 1,3,4
----
+
.Example output
[source,terminal]
----
https://198.18.111.12:2379, 3.5.6, 1.1 GB
https://198.18.111.13:2379, 3.5.6, 1.1 GB
https://198.18.111.14:2379, 3.5.6, 1.1 GB
----
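
If you prefer a machine-readable size in bytes, `etcdctl endpoint status` also accepts `-w json`. The following is a sketch of extracting the `dbSize` field from that JSON with `sed`, run here against a hypothetical, abbreviated sample of the output (on a live cluster, piping the real command output through `jq` would be more robust):

[source,terminal]
----
# Hypothetical, abbreviated sample of `etcdctl endpoint status -w json` output
STATUS='[{"Endpoint":"https://198.18.111.12:2379","Status":{"version":"3.5.6","dbSize":1100000000}}]'
# Print the on-disk database size in bytes
echo "$STATUS" | sed -n 's/.*"dbSize":\([0-9]*\).*/\1/p'
----

In this sample, the command prints `1100000000` bytes, matching the 1.1 GB reported by the `-w simple` output.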