glusterfs/tests/basic/global-threading.t
Xavi Hernandez 31cba94074 core: implement a global thread pool
This patch implements a thread pool that is wait-free for adding jobs to
the queue and uses a very small locked region to get jobs. This makes it
possible to decrease contention drastically. It's based on wfcqueue
structure provided by urcu library.

It automatically starts more threads when load demands it, and stops them
when they are no longer needed. There's a maximum number of threads that
can be used, and this value can be configured.

Depending on the workload, the maximum number of threads plays an
important role, so it needs to be configured for optimal performance.
Currently the thread pool doesn't self-adjust the maximum to the
workload, so this setting needs to be changed manually.

For this reason, the global thread pool has been made optional, so that
volumes can still use the thread pool provided by io-threads.

To enable it for bricks, the following option needs to be set:

   config.global-threading = on

This option has no effect if bricks are already running. A restart is
required to activate it. It's recommended to also enable the following
option when running bricks with the global thread pool:

   performance.iot-pass-through = on
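
For example, assuming a hypothetical volume named 'myvol' (a sketch based
on the test script below, not an exact transcript), both options could be
set and the bricks restarted like this:

   gluster volume set myvol config.global-threading on
   gluster volume set myvol performance.iot-pass-through on
   gluster volume stop myvol
   gluster volume start myvol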

To enable it for a FUSE mount point, the option '--global-threading' must
be added to the mount command. To change it, the volume needs to be
unmounted and remounted. It's recommended to disable the following option
when using global threading on a mount point:

   performance.client-io-threads = off
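
As an illustration (hostname 'server1', volume 'myvol' and the mount
point are placeholders), a client can be mounted with global threading by
invoking the glusterfs binary directly, as the test script below does:

   glusterfs --volfile-server=server1 --volfile-id=/myvol \
       --global-threading /mnt/myvol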

To enable it for services managed by glusterd, glusterd needs to be
started with option '--global-threading'. In this case all daemons, like
self-heal, will be using the global thread pool.
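
For example (the pid lookup is illustrative; 'glfs_tpw' is the thread
name used by the global thread pool workers, as checked by the test
script below):

   glusterd --global-threading
   # After volumes start, daemons such as self-heal should show global
   # thread pool worker threads:
   ps hH -o comm <shd-pid> | grep glfs_tpw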

Currently it can only be enabled for bricks, FUSE mounts and glusterd
services.

The maximum number of threads for clients and bricks can be configured
using the following options:

   config.client-threads
   config.brick-threads

These options can be applied online and their effect is immediate most of
the time. If one of them is set to 0, the maximum number of threads will
be calculated as #cores * 2.
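
For instance, to limit the bricks of a hypothetical volume 'myvol' to 16
threads while letting clients use the automatic default (0 means
#cores * 2):

   gluster volume set myvol config.brick-threads 16
   gluster volume set myvol config.client-threads 0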

Some distributions ship a very old userspace-rcu library (version 0.7).
For this reason, some header files from version 0.10 have been copied
into contrib/userspace-rcu and are used if the detected version is 0.7
or older.

An additional change has been made to io-threads to prevent threads from
being started when iot-pass-through is set.

Change-Id: I09d19e246b9e6d53c6247b29dfca6af6ee00a24b
updates: #532
Signed-off-by: Xavi Hernandez <xhernandez@redhat.com>
2019-01-24 18:44:06 +01:00

#!/bin/bash

. $(dirname $0)/../include.rc
. $(dirname $0)/../volume.rc

# Test if the given process has a number of threads of a given type
# between min and max.
function check_threads() {
    local pid="${1}"
    local pattern="${2}"
    local min="${3}"
    local max="${4-}"
    local count

    count="$(ps hH -o comm ${pid} | grep "${pattern}" | wc -l)"
    if [[ ${min} -gt ${count} ]]; then
        return 1
    fi
    if [[ ! -z "${max}" && ${max} -lt ${count} ]]; then
        return 1
    fi

    return 0
}
cleanup

TEST glusterd

# Glusterd shouldn't use any thread
TEST check_threads $(get_glusterd_pid) glfs_tpw 0 0
TEST check_threads $(get_glusterd_pid) glfs_iotwr 0 0

TEST pkill -9 glusterd
TEST glusterd --global-threading

# Glusterd shouldn't use global threads, even if enabled
TEST check_threads $(get_glusterd_pid) glfs_tpw 0 0
TEST check_threads $(get_glusterd_pid) glfs_iotwr 0 0

TEST $CLI volume create $V0 replica 2 $H0:$B0/b{0,1}

# Normal configuration using io-threads on bricks
TEST $CLI volume set $V0 config.global-threading off
TEST $CLI volume set $V0 performance.iot-pass-through off
TEST $CLI volume set $V0 performance.client-io-threads off
TEST $CLI volume start $V0

# There shouldn't be global threads
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b0) glfs_tpw 0 0
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b1) glfs_tpw 0 0

# There should be at least 1 io-thread
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b0) glfs_iotwr 1
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b1) glfs_iotwr 1

# Self-heal should be using global threads
TEST check_threads $(get_shd_process_pid) glfs_tpw 1
TEST check_threads $(get_shd_process_pid) glfs_iotwr 0 0

TEST $CLI volume stop $V0

# Configuration with global threads on bricks
TEST $CLI volume set $V0 config.global-threading on
TEST $CLI volume set $V0 performance.iot-pass-through on
TEST $CLI volume start $V0

# There should be at least 1 global thread
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b0) glfs_tpw 1
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b1) glfs_tpw 1

# There shouldn't be any io-thread worker threads
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b0) glfs_iotwr 0 0
TEST check_threads $(get_brick_pid $V0 $H0 $B0/b1) glfs_iotwr 0 0

# Normal configuration using io-threads on clients
TEST $CLI volume set $V0 performance.iot-pass-through off
TEST $CLI volume set $V0 performance.client-io-threads on
TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 $M0

# There shouldn't be global threads
TEST check_threads $(get_mount_process_pid $V0 $M0) glfs_tpw 0 0

# There should be at least 1 io-thread
TEST check_threads $(get_mount_process_pid $V0 $M0) glfs_iotwr 1

EXPECT_WITHIN $UMOUNT_TIMEOUT "Y" force_umount $M0

# Configuration with global threads on clients
TEST $CLI volume set $V0 performance.client-io-threads off
TEST $GFS --volfile-id=/$V0 --volfile-server=$H0 --global-threading $M0

# There should be at least 1 global thread
TEST check_threads $(get_mount_process_pid $V0 $M0) glfs_tpw 1

# There shouldn't be io-threads
TEST check_threads $(get_mount_process_pid $V0 $M0) glfs_iotwr 0 0

# Some basic volume access checks with global-threading enabled everywhere
TEST mkdir ${M0}/dir
TEST dd if=/dev/zero of=${M0}/dir/file bs=128k count=8

cleanup