From a79c0061087eae7864cc2d583172681ff75814bb Mon Sep 17 00:00:00 2001 From: black-dragon74 Date: Wed, 25 May 2022 13:30:58 +0530 Subject: [PATCH] [admin-guide] Fix docs and cleanup syntax (2/2) Signed-off-by: black-dragon74 --- .../GlusterFS-Coreutils.md | 32 +- docs/Administrator-Guide/GlusterFS-Filter.md | 15 +- .../GlusterFS-Introduction.md | 26 +- .../GlusterFS-Keystone-Quickstart.md | 198 ++-- docs/Administrator-Guide/GlusterFS-iSCSI.md | 53 +- .../Handling-of-users-with-many-groups.md | 5 - docs/Administrator-Guide/Hook-scripts.md | 67 +- .../Linux-Kernel-Tuning.md | 119 ++- docs/Administrator-Guide/Logging.md | 25 +- .../Administrator-Guide/Managing-Snapshots.md | 73 +- docs/Administrator-Guide/Managing-Volumes.md | 511 +++++----- docs/Administrator-Guide/Mandatory-Locks.md | 27 +- .../Monitoring-Workload.md | 910 +++++++++--------- .../NFS-Ganesha-GlusterFS-Integration.md | 200 ++-- .../Network-Configurations-Techniques.md | 47 +- docs/Administrator-Guide/Object-Storage.md | 14 +- .../Performance-Testing.md | 160 ++- .../Administrator-Guide/Performance-Tuning.md | 75 +- docs/Administrator-Guide/RDMA-Transport.md | 63 +- docs/Administrator-Guide/SSL.md | 159 ++- .../Administrator-Guide/Setting-Up-Clients.md | 308 +++--- .../Split-brain-and-ways-to-deal-with-it.md | 49 +- docs/Administrator-Guide/Start-Stop-Daemon.md | 38 +- docs/Administrator-Guide/Storage-Pools.md | 26 +- .../Thin-Arbiter-Volumes.md | 32 +- docs/Administrator-Guide/Trash.md | 79 +- .../Tuning-Volume-Options.md | 251 +++-- docs/Administrator-Guide/io_uring.md | 18 +- docs/Administrator-Guide/overview.md | 1 - .../Administrator-Guide/setting-up-storage.md | 7 +- 30 files changed, 1871 insertions(+), 1717 deletions(-) diff --git a/docs/Administrator-Guide/GlusterFS-Coreutils.md b/docs/Administrator-Guide/GlusterFS-Coreutils.md index 65290e6..02d1250 100644 --- a/docs/Administrator-Guide/GlusterFS-Coreutils.md +++ b/docs/Administrator-Guide/GlusterFS-Coreutils.md @@ -1,43 +1,52 @@ -Coreutils for GlusterFS volumes -=============================== +# Coreutils for GlusterFS volumes + The GlusterFS Coreutils is a suite of utilities that aims to mimic the standard Linux coreutils, with the exception that it utilizes the gluster C API in order to do work. It offers an interface similar to that of the ftp program. Operations include things like getting files from the server to the local machine, putting files from the local machine to the server, retrieving directory information from the server and so on. ## Installation + #### Install GlusterFS + For information on prerequisites, instructions and configuration of GlusterFS, see Installation Guides from . #### Install glusterfs-coreutils + For now glusterfs-coreutils will be packaged only as rpm. Other package formats will be supported very soon. ##### For fedora + Use dnf/yum to install glusterfs-coreutils: ```console -# dnf install glusterfs-coreutils +dnf install glusterfs-coreutils ``` OR ```console -# yum install glusterfs-coreutils +yum install glusterfs-coreutils ``` ## Usage + glusterfs-coreutils provides a set of basic utilities such as cat, cp, flock, ls, mkdir, rm, stat and tail that are implemented specifically using the GlusterFS API commonly known as libgfapi. These utilities can be used either inside a gluster remote shell or as standalone commands with 'gf' prepended to their respective base names. 
For example, glusterfs cat utility is named as gfcat and so on with an exception to flock core utility for which a standalone gfflock command is not provided as such(see the notes section on why flock is designed in that way). #### Using coreutils within a remote gluster-shell + ##### Invoke a new shell -In order to enter into a gluster client-shell, type *gfcli* and press enter. You will now be presented with a similar prompt as shown below: + +In order to enter into a gluster client-shell, type _gfcli_ and press enter. You will now be presented with a similar prompt as shown below: ```console # gfcli gfcli> ``` -See the man page for *gfcli* for more options. +See the man page for _gfcli_ for more options. + ##### Connect to a gluster volume + Now we need to connect as a client to some glusterfs volume which has already started. Use connect command to do so as follows: ```console @@ -57,7 +66,8 @@ gfcli () ``` ##### Try out your favorite utilities -Please go through the man pages for different utilities and available options for each command. For example, *man gfcp* will display details on the usage of cp command outside or within a gluster-shell. Run different commands as follows: + +Please go through the man pages for different utilities and available options for each command. For example, _man gfcp_ will display details on the usage of cp command outside or within a gluster-shell. Run different commands as follows: ```console gfcli (localhost/vol) ls . @@ -65,6 +75,7 @@ gfcli (localhost/vol) stat .trashcan ``` ##### Terminate the client connection from the volume + Use disconnect command to close the connection: ```console @@ -73,6 +84,7 @@ gfcli> ``` ##### Exit from shell + Run quit from shell: ```console @@ -80,6 +92,7 @@ gfcli> quit ``` #### Using standalone glusterfs coreutil commands + As mentioned above glusterfs coreutils also provides standalone commands to perform the basic GNU coreutil functionalities. All those commands are prepended by 'gf'. Instead of invoking a gluster client-shell you can directly make use of these to establish and perform the operation in one shot. For example see the following sample usage of gfstat command: ```console @@ -91,5 +104,6 @@ There is an exemption regarding flock coreutility which is not available as a st For more information on each command and corresponding options see associated man pages. ## Notes -* Within a particular session of gluster client-shell, history of commands are preserved i.e, you can use up/down arrow keys to search through previously executed commands or the reverse history search technique using Ctrl+R. -* flock is not available as standalone 'gfflock'. Because locks are always associated with file descriptors. Unlike all other commands flock cannot straight away clean up the file descriptor after acquiring the lock. For flock we need to maintain an active connection as a glusterfs client. + +- Within a particular session of gluster client-shell, history of commands are preserved i.e, you can use up/down arrow keys to search through previously executed commands or the reverse history search technique using Ctrl+R. +- flock is not available as standalone 'gfflock'. Because locks are always associated with file descriptors. Unlike all other commands flock cannot straight away clean up the file descriptor after acquiring the lock. For flock we need to maintain an active connection as a glusterfs client. 
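For a quick end-to-end illustration, the session below simply strings together the commands described above (a started volume named `vol` served from `localhost` is assumed; substitute your own host and volume name):

```console
# gfcli
gfcli> connect glfs://localhost/vol
gfcli (localhost/vol) ls .
gfcli (localhost/vol) stat .trashcan
gfcli (localhost/vol) disconnect
gfcli> quit
```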
diff --git a/docs/Administrator-Guide/GlusterFS-Filter.md b/docs/Administrator-Guide/GlusterFS-Filter.md index a14a42c..07e36be 100644 --- a/docs/Administrator-Guide/GlusterFS-Filter.md +++ b/docs/Administrator-Guide/GlusterFS-Filter.md @@ -1,5 +1,4 @@ -Modifying .vol files with a filter -================================== +# Modifying .vol files with a filter If you need to make manual changes to a .vol file it is recommended to make these through the client interface ('gluster foo'). Making changes @@ -7,22 +6,24 @@ directly to .vol files is discouraged, because it cannot be predicted when a .vol file will be reset on disk, for example with a 'gluster set foo' command. The command line interface was never designed to read the .vol files, but rather to keep state and rebuild them (from -'/var/lib/glusterd/vols/\$vol/info'). There is, however, another way to +`/var/lib/glusterd/vols/$vol/info`). There is, however, another way to do this. You can create a shell script in the directory -'/usr/lib\*/glusterfs/\$VERSION/filter'. All scripts located there will +`/usr/lib*/glusterfs/$VERSION/filter`. All scripts located there will be executed every time the .vol files are written back to disk. The first and only argument passed to all script located there is the name of the .vol file. So you could create a script there that looks like this: - #!/bin/sh`\ - sed -i 'some-sed-magic' "$1" +```console +#!/bin/sh +sed -i 'some-sed-magic' "$1" +``` Which will run the script, which in turn will run the sed command on the .vol file (passed as \$1). Importantly, the script needs to be set as executable (eg via chmod), -else it won't be run. \ No newline at end of file +else it won't be run. diff --git a/docs/Administrator-Guide/GlusterFS-Introduction.md b/docs/Administrator-Guide/GlusterFS-Introduction.md index ddf35c7..4068a26 100644 --- a/docs/Administrator-Guide/GlusterFS-Introduction.md +++ b/docs/Administrator-Guide/GlusterFS-Introduction.md @@ -1,30 +1,24 @@ -What is Gluster ? -================= +# What is Gluster ? Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. ### Advantages - * Scales to several petabytes - * Handles thousands of clients - * POSIX compatible - * Uses commodity hardware - * Can use any ondisk filesystem that supports extended attributes - * Accessible using industry standard protocols like NFS and SMB - * Provides replication, quotas, geo-replication, snapshots and bitrot detection - * Allows optimization for different workloads - * Open Source - +- Scales to several petabytes +- Handles thousands of clients +- POSIX compatible +- Uses commodity hardware +- Can use any ondisk filesystem that supports extended attributes +- Accessible using industry standard protocols like NFS and SMB +- Provides replication, quotas, geo-replication, snapshots and bitrot detection +- Allows optimization for different workloads +- Open Source ![640px-glusterfs_architecture](../images/640px-GlusterFS-Architecture.png) - - Enterprises can scale capacity, performance, and availability on demand, with no vendor lock-in, across on-premise, public cloud, and hybrid environments. Gluster is used in production at thousands of organisations spanning media, healthcare, government, education, web 2.0, and financial services. - - ### Commercial offerings and support Several companies offer support or [consulting](https://www.gluster.org/support/). 
diff --git a/docs/Administrator-Guide/GlusterFS-Keystone-Quickstart.md b/docs/Administrator-Guide/GlusterFS-Keystone-Quickstart.md
index 83c8254..8c49917 100644
--- a/docs/Administrator-Guide/GlusterFS-Keystone-Quickstart.md
+++ b/docs/Administrator-Guide/GlusterFS-Keystone-Quickstart.md
@@ -12,131 +12,175 @@ These docs are largely derived from:

[`http://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17#Initial_Keystone_setup`](http://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17#Initial_Keystone_setup)

- Add the RDO Openstack Grizzly and Epel repos:

- $ sudo yum install -y `[`http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm`](http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm)
- $ sudo yum install -y `[`http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-1.noarch.rpm`](http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-1.noarch.rpm)
+```console
+sudo yum install -y "http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm"
+
+sudo yum install -y "http://rdo.fedorapeople.org/openstack/openstack-grizzly/rdo-release-grizzly-1.noarch.rpm"
+```

Install Openstack-Keystone

- $ sudo yum install openstack-keystone openstack-utils python-keystoneclient
+```console
+sudo yum install openstack-keystone openstack-utils python-keystoneclient
+```

Configure keystone

- $ cat > keystonerc << _EOF
- export ADMIN_TOKEN=$(openssl rand -hex 10)
- export OS_USERNAME=admin
- export OS_PASSWORD=$(openssl rand -hex 10)
- export OS_TENANT_NAME=admin
- export OS_AUTH_URL=`[`https://127.0.0.1:5000/v2.0/`](https://127.0.0.1:5000/v2.0/)
- export SERVICE_ENDPOINT=`[`https://127.0.0.1:35357/v2.0/`](https://127.0.0.1:35357/v2.0/)
- export SERVICE_TOKEN=\$ADMIN_TOKEN
- _EOF
- $ . ./keystonerc
- $ sudo openstack-db --service keystone --init
+```console
+$ cat > keystonerc << _EOF
+export ADMIN_TOKEN=$(openssl rand -hex 10)
+export OS_USERNAME=admin
+export OS_PASSWORD=$(openssl rand -hex 10)
+export OS_TENANT_NAME=admin
+export OS_AUTH_URL=https://127.0.0.1:5000/v2.0/
+export SERVICE_ENDPOINT=https://127.0.0.1:35357/v2.0/
+export SERVICE_TOKEN=\$ADMIN_TOKEN
+_EOF
+
+$ . ./keystonerc
+$ sudo openstack-db --service keystone --init
+```

Append the keystone configs to /etc/swift/proxy-server.conf

- $ sudo -i`
- # cat >> /etc/swift/proxy-server.conf << _EOM`
- [filter:keystone]`
- use = egg:swift#keystoneauth`
- operator_roles = admin, swiftoperator`
-
- [filter:authtoken]
- paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
- auth_port = 35357
- auth_host = 127.0.0.1
- auth_protocol = https
- _EOM
- exit
+```console
+$ sudo -i
+
+# cat >> /etc/swift/proxy-server.conf << _EOM
+[filter:keystone]
+use = egg:swift#keystoneauth
+operator_roles = admin, swiftoperator
+
+[filter:authtoken]
+paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
+auth_port = 35357
+auth_host = 127.0.0.1
+auth_protocol = https
+_EOM
+
+# exit
+```

Finish configuring both swift and keystone using the command-line tool:

- $ sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
- $ sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN
- $ sudo openstack-config --set /etc/swift/proxy-server.conf DEFAULT log_name proxy_server
- $ sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken signing_dir /etc/swift
- $ sudo openstack-config --set /etc/swift/proxy-server.conf pipeline:main pipeline "healthcheck cache authtoken keystone proxy-server"
+```console
+sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_token $ADMIN_TOKEN
+sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_token $ADMIN_TOKEN
+sudo openstack-config --set /etc/swift/proxy-server.conf DEFAULT log_name proxy_server
+sudo openstack-config --set /etc/swift/proxy-server.conf filter:authtoken signing_dir /etc/swift
+sudo openstack-config --set /etc/swift/proxy-server.conf pipeline:main pipeline "healthcheck cache authtoken keystone proxy-server"

- $ sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
- $ sudo openstack-config --set /etc/keystone/keystone.conf ssl enable True
- $ sudo openstack-config --set /etc/keystone/keystone.conf ssl keyfile /etc/swift/cert.key
- $ sudo openstack-config --set /etc/keystone/keystone.conf ssl certfile /etc/swift/cert.crt
- $ sudo openstack-config --set /etc/keystone/keystone.conf signing token_format UUID
- $ sudo openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:keystone@127.0.0.1/keystone
+sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
+sudo openstack-config --set /etc/keystone/keystone.conf ssl enable True
+sudo openstack-config --set /etc/keystone/keystone.conf ssl keyfile /etc/swift/cert.key
+sudo openstack-config --set /etc/keystone/keystone.conf ssl certfile /etc/swift/cert.crt
+sudo openstack-config --set /etc/keystone/keystone.conf signing token_format UUID
+sudo openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:keystone@127.0.0.1/keystone
+```

Configure keystone to start at boot and start it up.

- $ sudo chkconfig openstack-keystone on
- $ sudo service openstack-keystone start # If you script this, you'll want to wait a few seconds to start using it
+```console
+sudo chkconfig openstack-keystone on
+sudo service openstack-keystone start # If you script this, you'll want to wait a few seconds to start using it
+```

We are using untrusted certs, so tell keystone not to complain. If you replace with trusted certs, or are not using SSL, set this to "".

- $ INSECURE="--insecure"
+```console
+INSECURE="--insecure"
+```

Create the keystone and swift services in keystone:

- $ KS_SERVICEID=$(keystone $INSECURE service-create --name=keystone --type=identity --description="Keystone Identity Service" | grep " id " | cut -d "|" -f 3)
- $ SW_SERVICEID=$(keystone $INSECURE service-create --name=swift --type=object-store --description="Swift Service" | grep " id " | cut -d "|" -f 3)
- $ endpoint="`[`https://127.0.0.1:443`](https://127.0.0.1:443)`"
- $ keystone $INSECURE endpoint-create --service_id $KS_SERVICEID \
-   --publicurl $endpoint'/v2.0' --adminurl `[`https://127.0.0.1:35357/v2.0`](https://127.0.0.1:35357/v2.0)` \
-   --internalurl `[`https://127.0.0.1:5000/v2.0`](https://127.0.0.1:5000/v2.0)
- $ keystone $INSECURE endpoint-create --service_id $SW_SERVICEID \
-   --publicurl $endpoint'/v1/AUTH_$(tenant_id)s' \
-   --adminurl $endpoint'/v1/AUTH_$(tenant_id)s' \
-   --internalurl $endpoint'/v1/AUTH_$(tenant_id)s'
+```console
+KS_SERVICEID=$(keystone $INSECURE service-create --name=keystone --type=identity --description="Keystone Identity Service" | grep " id " | cut -d "|" -f 3)
+
+SW_SERVICEID=$(keystone $INSECURE service-create --name=swift --type=object-store --description="Swift Service" | grep " id " | cut -d "|" -f 3)
+
+endpoint="https://127.0.0.1:443"
+
+keystone $INSECURE endpoint-create --service_id $KS_SERVICEID \
+  --publicurl $endpoint'/v2.0' --adminurl https://127.0.0.1:35357/v2.0 \
+  --internalurl https://127.0.0.1:5000/v2.0
+
+keystone $INSECURE endpoint-create --service_id $SW_SERVICEID \
+  --publicurl $endpoint'/v1/AUTH_$(tenant_id)s' \
+  --adminurl $endpoint'/v1/AUTH_$(tenant_id)s' \
+  --internalurl $endpoint'/v1/AUTH_$(tenant_id)s'
+```

Create the admin tenant:

- $ admin_id=$(keystone $INSECURE tenant-create --name admin --description "Internal Admin Tenant" | grep id | awk '{print $4}')
+```console
+admin_id=$(keystone $INSECURE tenant-create --name admin --description "Internal Admin Tenant" | grep id | awk '{print $4}')
+```

Create the admin roles:

- $ admin_role=$(keystone $INSECURE role-create --name admin | grep id | awk '{print $4}')
- $ ksadmin_role=$(keystone $INSECURE role-create --name KeystoneServiceAdmin | grep id | awk '{print $4}')
- $ kadmin_role=$(keystone $INSECURE role-create --name KeystoneAdmin | grep id | awk '{print $4}')
- $ member_role=$(keystone $INSECURE role-create --name member | grep id | awk '{print $4}')
+```console
+admin_role=$(keystone $INSECURE role-create --name admin | grep id | awk '{print $4}')
+ksadmin_role=$(keystone $INSECURE role-create --name KeystoneServiceAdmin | grep id | awk '{print $4}')
+kadmin_role=$(keystone $INSECURE role-create --name KeystoneAdmin | grep id | awk '{print $4}')
+member_role=$(keystone $INSECURE role-create --name member | grep id | awk '{print $4}')
+```

Create the admin user:

- $ user_id=$(keystone $INSECURE user-create --name admin --tenant-id $admin_id --pass $OS_PASSWORD | grep id | awk '{print $4}')
- $ keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \
-   --role-id $admin_role
- $ keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \
-   --role-id $kadmin_role
- $ keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \
-   --role-id $ksadmin_role
+```console
+user_id=$(keystone $INSECURE user-create --name
admin --tenant-id $admin_id --pass $OS_PASSWORD | grep id | awk '{print $4}') + +keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \ +  --role-id $admin_role + +keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \ +  --role-id $kadmin_role + +keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \ +  --role-id $ksadmin_role +``` If you do not have multi-volume support (broken in 3.3.1-11), then the volume names will not correlate to the tenants, and all tenants will map to the same volume, so just use a normal name. (This will be fixed in 3.4, and should be fixed in 3.4 Beta. The bug report for this is here: ) - $ volname="admin" - #  or if you have the multi-volume patch - $ volname=$admin_id +```console +volname="admin" + +# or if you have the multi-volume patch +volname=$admin_id +``` Create and start the admin volume: - $ sudo gluster volume create $volname $myhostname:$pathtobrick - $ sudo gluster volume start $volname - $ sudo service openstack-keystone start +```console +sudo gluster volume create $volname $myhostname:$pathtobrick +sudo gluster volume start $volname +sudo service openstack-keystone start +``` Create the ring for the admin tenant. If you have working multi-volume support, then you can specify multiple volume names in the call: - $ cd /etc/swift - $ sudo /usr/bin/gluster-swift-gen-builders $volname - $ sudo swift-init main restart +```console +cd /etc/swift +sudo /usr/bin/gluster-swift-gen-builders $volname +sudo swift-init main restart +``` Create a testadmin user associated with the admin tenant with password testadmin and admin role: - $ user_id=$(keystone $INSECURE user-create --name testadmin --tenant-id $admin_id --pass testadmin | grep id | awk '{print $4}') - $ keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \ -   --role-id $admin_role +```console +user_id=$(keystone $INSECURE user-create --name testadmin --tenant-id $admin_id --pass testadmin | grep id | awk '{print $4}') + +keystone $INSECURE user-role-add --user-id $user_id --tenant-id $admin_id \ +  --role-id $admin_role +``` Test the user: - $ curl $INSECURE -d '{"auth":{"tenantName": "admin", "passwordCredentials":{"username": "testadmin", "password": "testadmin"}}}' -H "Content-type: application/json" `[`https://127.0.0.1:5000/v2.0/tokens`](https://127.0.0.1:5000/v2.0/tokens) +```console +curl $INSECURE -d '{"auth":{"tenantName": "admin", "passwordCredentials":{"username": "testadmin", "password": "testadmin"}}}' -H "Content-type: application/json" "https://127.0.0.1:5000/v2.0/tokens" +``` See here for more examples: diff --git a/docs/Administrator-Guide/GlusterFS-iSCSI.md b/docs/Administrator-Guide/GlusterFS-iSCSI.md index 6e7f8ed..1fa4b3a 100644 --- a/docs/Administrator-Guide/GlusterFS-iSCSI.md +++ b/docs/Administrator-Guide/GlusterFS-iSCSI.md @@ -1,11 +1,10 @@ # GlusterFS iSCSI - ## Introduction iSCSI on Gluster can be set up using the Linux Target driver. This is a user space daemon that accepts iSCSI (as well as iSER and FCoE.) It interprets iSCSI CDBs and converts them into some other I/O operation, according to user configuration. In our case, we can convert the CDBs into file operations that run against a gluster file. The file represents the LUN and the offset in the file the LBA. -A plug-in for the Linux target driver has been written to use the libgfapi. It is part of the Linux target driver (bs\_glfs.c). Using it, the datapath skips FUSE. This document will be updated to describe how to use it. 
You can see README.glfs in the Linux target driver's documentation subdirectory. +A plug-in for the Linux target driver has been written to use the libgfapi. It is part of the Linux target driver (bs_glfs.c). Using it, the datapath skips FUSE. This document will be updated to describe how to use it. You can see README.glfs in the Linux target driver's documentation subdirectory. LIO is a replacement for the Linux Target Driver that is included in RHEL7. A user-space plug-in mechanism for it is under development. Once that piece of code exists a similar mechanism can be built for gluster as was done for the Linux target driver. @@ -17,18 +16,24 @@ For more information on iSCSI and the Linux target driver, see [1] and [2]. Mount gluster locally on your gluster server. Note you can also run it on the gluster client. There are pros and cons to these configurations, described [below](#Running_the_target_on_the_gluster_client "wikilink"). - # mount -t glusterfs 127.0.0.1:gserver /mnt +```console +mount -t glusterfs 127.0.0.1:gserver /mnt +``` -Create a large file representing your block device within the gluster fs. In this case, the lun is 2G. (You could also create a gluster "block device" for this purpose, which would skip the file system). +Create a large file representing your block device within the gluster fs. In this case, the lun is 2G. (_You could also create a gluster "block device" for this purpose, which would skip the file system_). - # dd if=/dev/zero of=disk3 bs=2G count=25 +```console +dd if=/dev/zero of=disk3 bs=2G count=25 +``` Create a target using the file as the backend storage. If necessary, download the Linux SCSI target. Then start the service. - # yum install scsi-target-utils - # service tgtd start +```console +yum install scsi-target-utils +service tgtd start +``` You must give an iSCSI Qualified name (IQN), in the format : iqn.yyyy-mm.reversed.domain.name:OptionalIdentifierText @@ -36,41 +41,57 @@ where: yyyy-mm represents the 4-digit year and 2-digit month the device was started (for example: 2011-07) - # tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.20013-10.com.redhat +```console +tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.20013-10.com.redhat +``` You can look at the target: - # tgtadm --lld iscsi --op show --mode conn --tid 1 +```console +# tgtadm --lld iscsi --op show --mode conn --tid 1 - Session: 11  Connection: 0     Initiator iqn.1994-05.com.redhat:cf75c8d4274d +Session: 11  Connection: 0     Initiator iqn.1994-05.com.redhat:cf75c8d4274d +``` Next, add a logical unit to the target - # tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /mnt/disk3 +```console +tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /mnt/disk3 +``` Allow any initiator to access the target. - # tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL +```console +tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL +``` Now it’s time to set up your client. Discover your targets. Note in this example's case, the target IP address is 192.168.1.2 - # iscsiadm --mode discovery --type sendtargets --portal 192.168.1.2 +```console +iscsiadm --mode discovery --type sendtargets --portal 192.168.1.2 +``` Login to your target session. - # iscsiadm --mode node --targetname iqn.2001-04.com.example:storage.disk1.amiens.sys1.xyz --portal 192.168.1.2:3260 --login +```console +iscsiadm --mode node --targetname iqn.2001-04.com.example:storage.disk1.amiens.sys1.xyz --portal 192.168.1.2:3260 --login +``` You should have a new SCSI disk. 
You will see it created in /var/log/messages. You will see it in lsblk. You can send I/O to it: - # dd if=/dev/zero of=/dev/sda bs=4K count=100 +```console +dd if=/dev/zero of=/dev/sda bs=4K count=100 +``` To tear down your iSCSI connection: - # iscsiadm  -m node -T iqn.2001-04.com.redhat  -p 172.17.40.21 -u +```console +iscsiadm  -m node -T iqn.2001-04.com.redhat  -p 172.17.40.21 -u +``` ## Running the iSCSI target on the gluster client diff --git a/docs/Administrator-Guide/Handling-of-users-with-many-groups.md b/docs/Administrator-Guide/Handling-of-users-with-many-groups.md index f84f462..09f79bf 100644 --- a/docs/Administrator-Guide/Handling-of-users-with-many-groups.md +++ b/docs/Administrator-Guide/Handling-of-users-with-many-groups.md @@ -10,7 +10,6 @@ different restrictions on different levels in the stack. The explanations in this document should clarify which restrictions exist, and how these can be handled. - ## tl;dr - if users belong to more than 90 groups, the brick processes need to resolve @@ -25,7 +24,6 @@ For all of the above options counts that the system doing the group resolving must be configured (`nsswitch`, `sssd`, ..) to be able to get all groups when only a UID is known. - ## Limit in the GlusterFS protocol When a Gluster client does some action on a Gluster volume, the operation is @@ -52,7 +50,6 @@ use the POSIX `getgrouplist()` function to fetch them. Because this is a protocol limitation, all clients, including FUSE mounts, Gluster/NFS server and libgfapi applications are affected by this. - ## Group limit with FUSE The FUSE client gets the groups of the process that does the I/O by reading the @@ -64,7 +61,6 @@ For that reason a mount option has been added. With the `resolve-gids` mount option, the FUSE client calls the POSIX `getgrouplist()` function instead of reading `/proc/$pid/status`. - ## Group limit for NFS The NFS protocol (actually the AUTH_SYS/AUTH_UNIX RPC header) allows up to 16 @@ -78,7 +74,6 @@ Other NFS-servers offer options like this too. The Linux kernel nfsd server uses `rpc.mountd --manage-gids`. NFS-Ganesha has the configuration option `Manage_Gids`. - ## Implications of these solutions All of the mentioned options are disabled by default. one of the reasons is diff --git a/docs/Administrator-Guide/Hook-scripts.md b/docs/Administrator-Guide/Hook-scripts.md index 4845967..f79de1a 100644 --- a/docs/Administrator-Guide/Hook-scripts.md +++ b/docs/Administrator-Guide/Hook-scripts.md @@ -1,63 +1,70 @@ # Managing GlusterFS Volume Life-Cycle Extensions with Hook Scripts -Glusterfs allows automation of operations by user-written scripts. For every operation, you can execute a *pre* and a *post* script. +Glusterfs allows automation of operations by user-written scripts. For every operation, you can execute a _pre_ and a _post_ script. ### Pre Scripts + These scripts are run before the occurrence of the event. You can write a script to automate activities like managing system-wide services. For example, you can write a script to stop exporting the SMB share corresponding to the volume before you stop the volume. ### Post Scripts + These scripts are run after execution of the event. For example, you can write a script to export the SMB share corresponding to the volume after you start the volume. 
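As an illustration, a minimal post-start hook might look like the sketch below. The action at the end is only a placeholder (the prepackaged S30samba-start.sh script described later on this page is the real implementation for SMB shares); the script only relies on the `--volname=VOLNAME` argument that Gluster passes to hook scripts.

```console
#!/bin/bash
# Minimal sketch of a post-start hook script: parse the volume name that
# glusterd passes as --volname=VOLNAME, then run a placeholder action.
VOLNAME=""
for arg in "$@"; do
    case "$arg" in
        --volname=*) VOLNAME="${arg#--volname=}" ;;
    esac
done
[ -n "$VOLNAME" ] || exit 0

# Replace this with the real post-start work, e.g. exporting an SMB share.
logger "hook: volume $VOLNAME started, running post-start actions"
```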
You can run scripts for the following events: -+ Creating a volume -+ Starting a volume -+ Adding a brick -+ Removing a brick -+ Tuning volume options -+ Stopping a volume -+ Deleting a volume +- Creating a volume +- Starting a volume +- Adding a brick +- Removing a brick +- Tuning volume options +- Stopping a volume +- Deleting a volume ### Naming Convention + While creating the file names of your scripts, you must follow the naming convention followed in your underlying file system like XFS. > Note: To enable the script, the name of the script must start with an S . Scripts run in lexicographic order of their names. ### Location of Scripts + This section provides information on the folders where the scripts must be placed. When you create a trusted storage pool, the following directories are created: -+ `/var/lib/glusterd/hooks/1/create/` -+ `/var/lib/glusterd/hooks/1/delete/` -+ `/var/lib/glusterd/hooks/1/start/` -+ `/var/lib/glusterd/hooks/1/stop/` -+ `/var/lib/glusterd/hooks/1/set/` -+ `/var/lib/glusterd/hooks/1/add-brick/` -+ `/var/lib/glusterd/hooks/1/remove-brick/` +- `/var/lib/glusterd/hooks/1/create/` +- `/var/lib/glusterd/hooks/1/delete/` +- `/var/lib/glusterd/hooks/1/start/` +- `/var/lib/glusterd/hooks/1/stop/` +- `/var/lib/glusterd/hooks/1/set/` +- `/var/lib/glusterd/hooks/1/add-brick/` +- `/var/lib/glusterd/hooks/1/remove-brick/` After creating a script, you must ensure to save the script in its respective folder on all the nodes of the trusted storage pool. The location of the script dictates whether the script must be executed before or after an event. Scripts are provided with the command line argument `--volname=VOLNAME` to specify the volume. Command-specific additional arguments are provided for the following volume operations: - Start volume - --first=yes, if the volume is the first to be started - --first=no, for otherwise - Stop volume - --last=yes, if the volume is to be stopped last. - --last=no, for otherwise - Set volume - -o key=value - For every key, value is specified in volume set command. +```{ .text .no-copy } +Start volume + --first=yes, if the volume is the first to be started + --first=no, for otherwise +Stop volume + --last=yes, if the volume is to be stopped last. + --last=no, for otherwise +Set volume + -o key=value + For every key, value is specified in volume set command. +``` ### Prepackaged Scripts + Gluster provides scripts to export Samba (SMB) share when you start a volume and to remove the share when you stop the volume. These scripts are available at: `/var/lib/glusterd/hooks/1/start/post` and `/var/lib/glusterd/hooks/1/stop/pre`. By default, the scripts are enabled. When you start a volume using `gluster volume start VOLNAME`, the S30samba-start.sh script performs the following: -+ Adds Samba share configuration details of the volume to the smb.conf file -+ Mounts the volume through FUSE and adds an entry in /etc/fstab for the same. -+ Restarts Samba to run with updated configuration +- Adds Samba share configuration details of the volume to the smb.conf file +- Mounts the volume through FUSE and adds an entry in /etc/fstab for the same. 
+- Restarts Samba to run with updated configuration When you stop the volume using `gluster volume stop VOLNAME`, the S30samba-stop.sh script performs the following: -+ Removes the Samba share details of the volume from the smb.conf file -+ Unmounts the FUSE mount point and removes the corresponding entry in +- Removes the Samba share details of the volume from the smb.conf file +- Unmounts the FUSE mount point and removes the corresponding entry in /etc/fstab -+ Restarts Samba to run with updated configuration +- Restarts Samba to run with updated configuration diff --git a/docs/Administrator-Guide/Linux-Kernel-Tuning.md b/docs/Administrator-Guide/Linux-Kernel-Tuning.md index 9070382..74c2314 100644 --- a/docs/Administrator-Guide/Linux-Kernel-Tuning.md +++ b/docs/Administrator-Guide/Linux-Kernel-Tuning.md @@ -1,5 +1,4 @@ -Linux kernel tuning for GlusterFS ---------------------------------- +## Linux kernel tuning for GlusterFS Every now and then, questions come up here internally and with many enthusiasts on what Gluster has to say about kernel tuning, if anything. @@ -52,18 +51,18 @@ from the user for their own applications. Heavily loaded, streaming apps should set this value to '0'. By changing this value to '0', the system's responsiveness improves. -### vm.vfs\_cache\_pressure +### vm.vfs_cache_pressure This option controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects. -At the default value of vfs\_cache\_pressure=100 the kernel will attempt +At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to -pagecache and swapcache reclaim. Decreasing vfs\_cache\_pressure causes +pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When -vfs\_cache\_pressure=0, the kernel will never reclaim dentries and +vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory -conditions. Increasing vfs\_cache\_pressure beyond 100 causes the kernel +conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes. With GlusterFS, many users with a lot of storage and many small files @@ -73,18 +72,18 @@ keeps crawling through data-structures on a 40GB RAM system. Changing this value higher than 100 has helped many users to achieve fair caching and more responsiveness from the kernel. -### vm.dirty\_background\_ratio +### vm.dirty_background_ratio -### vm.dirty\_ratio +### vm.dirty_ratio -The first of the two (vm.dirty\_background\_ratio) defines the +The first of the two (vm.dirty_background_ratio) defines the percentage of memory that can become dirty before a background flushing of the pages to disk starts. Until this percentage is reached no pages are flushed to disk. However when the flushing starts, then it's done in the background without disrupting any of the running processes in the foreground. -Now the second of the two parameters (vm.dirty\_ratio) defines the +Now the second of the two parameters (vm.dirty_ratio) defines the percentage of memory which can be occupied by dirty pages before a forced flush starts. If the percentage of dirty pages reaches this threshold, then all processes become synchronous, and they are not @@ -124,14 +123,14 @@ performance. You can read more about them in the Linux kernel source documentation: linux/Documentation/block/\*iosched.txt . 
I have also seen 'read' throughput increase during mixed-operations (many writes). -### "256" \> /sys/block/sdc/queue/nr\_requests +### "256" \> /sys/block/sdc/queue/nr_requests This is the size of I/O requests which are buffered before they are communicated to the disk by the Scheduler. The internal queue size of -some controllers (queue\_depth) is larger than the I/O scheduler's -nr\_requests so that the I/O scheduler doesn't get much of a chance to +some controllers (queue_depth) is larger than the I/O scheduler's +nr_requests so that the I/O scheduler doesn't get much of a chance to properly order and merge the requests. Deadline or CFQ scheduler likes -to have nr\_requests to be set 2 times the value of queue\_depth, which +to have nr_requests to be set 2 times the value of queue_depth, which is the default for a given controller. Merging the order and requests helps the scheduler to be more responsive during huge load. @@ -144,7 +143,7 @@ after you have used swappiness=0, but if you defined swappiness=10 or 20, then using this value helps when your have a RAID stripe size of 64k. -### blockdev --setra 4096 /dev/ (eg:- sdb, hdc or dev\_mapper) +### blockdev --setra 4096 /dev/ (eg:- sdb, hdc or dev_mapper) Default block device settings often result in terrible performance for many RAID controllers. Adding the above option, which sets read-ahead to @@ -183,94 +182,94 @@ issues. More informative and interesting articles/emails/blogs to read -- -- -- -- +- +- +- +- -`   Last updated by: `[`User:y4m4`](User:y4m4 "wikilink") +`Last updated by:`[`User:y4m4`](User:y4m4 "wikilink") ### comment:jdarcy Some additional tuning ideas: -`   * The choice of scheduler is *really* hardware- and workload-dependent, and some schedulers have unique features other than performance.  For example, last time I looked cgroups support was limited to the cfq scheduler.  Different tests regularly do best on any of cfq, deadline, or noop.  The best advice here is not to use a particular scheduler but to try them all for a specific need.` +` * The choice of scheduler is *really* hardware- and workload-dependent, and some schedulers have unique features other than performance. For example, last time I looked cgroups support was limited to the cfq scheduler. Different tests regularly do best on any of cfq, deadline, or noop. The best advice here is not to use a particular scheduler but to try them all for a specific need.` -`   * It's worth checking to make sure that /sys/.../max_sectors_kb matches max_hw_sectors_kb.  I haven't seen this problem for a while, but back when I used to work on Lustre I often saw that these didn't match and performance suffered.` +` * It's worth checking to make sure that /sys/.../max_sectors_kb matches max_hw_sectors_kb. 
I haven't seen this problem for a while, but back when I used to work on Lustre I often saw that these didn't match and performance suffered.` -`   * For read-heavy workloads, experimenting with /sys/.../readahead_kb is definitely worthwhile.` +` * For read-heavy workloads, experimenting with /sys/.../readahead_kb is definitely worthwhile.` -`   * Filesystems should be built with -I 512 or similar so that more xattrs can be stored in the inode instead of requiring an extra seek.` +` * Filesystems should be built with -I 512 or similar so that more xattrs can be stored in the inode instead of requiring an extra seek.` -`   * Mounting with noatime or relatime is usually good for performance.` +` * Mounting with noatime or relatime is usually good for performance.` #### reply:y4m4 -`   Agreed i was about write those parameters you mentioned. I should write another elaborate article on FS changes. ` +`Agreed i was about write those parameters you mentioned. I should write another elaborate article on FS changes.` y4m4 ### comment:eco -`       1 year ago`\ -`   This article is the model on which all articles should be written.  Detailed information, solid examples and a great selection of references to let readers go more in depth on topics they choose.  Great benchmark for others to strive to attain.`\ -`       Eco`\ +` 1 year ago`\ +` This article is the model on which all articles should be written. Detailed information, solid examples and a great selection of references to let readers go more in depth on topics they choose. Great benchmark for others to strive to attain.`\ +` Eco`\ ### comment:y4m4 -`   sysctl -w net.core.{r,w}mem_max = 4096000 - this helped us to Reach 800MB/sec with replicated GlusterFS on 10gige  - Thanks to Ben England for these test results. `\ -`       y4m4` +`sysctl -w net.core.{r,w}mem_max = 4096000 - this helped us to Reach 800MB/sec with replicated GlusterFS on 10gige - Thanks to Ben England for these test results.`\ +` y4m4` ### comment:bengland -`   After testing Gluster 3.2.4 performance with RHEL6.1, I'd suggest some changes to this article's recommendations:` +` After testing Gluster 3.2.4 performance with RHEL6.1, I'd suggest some changes to this article's recommendations:` -`   vm.swappiness=10 not 0 -- I think 0 is a bit extreme and might lead to out-of-memory conditions, but 10 will avoid just about all paging/swapping.  If you still see swapping, you need to probably focus on restricting dirty pages with vm.dirty_ratio.` +` vm.swappiness=10 not 0 -- I think 0 is a bit extreme and might lead to out-of-memory conditions, but 10 will avoid just about all paging/swapping. If you still see swapping, you need to probably focus on restricting dirty pages with vm.dirty_ratio.` -`   vfs_cache_pressure > 100 -- why?   I thought this was a percentage.` +` vfs_cache_pressure > 100 -- why? I thought this was a percentage.` -`   vm.pagecache=1 -- some distros (e.g. RHEL6) don't have vm.pagecache parameter. ` +`vm.pagecache=1 -- some distros (e.g. RHEL6) don't have vm.pagecache parameter.` -`   vm.dirty_background_ratio=1 not 10 (kernel default?) -- the kernel default is a bit dependent on choice of Linux distro, but for most workloads it's better to set this parameter very low to cause Linux to push dirty pages out to storage sooner.    It means that if dirty pages exceed 1% of RAM then it will start to asynchronously write dirty pages to storage. 
The only workload where this is really bad: apps that write temp files and then quickly delete them (compiles) -- and you should probably be using local storage for such files anyway. ` +`vm.dirty_background_ratio=1 not 10 (kernel default?) -- the kernel default is a bit dependent on choice of Linux distro, but for most workloads it's better to set this parameter very low to cause Linux to push dirty pages out to storage sooner. It means that if dirty pages exceed 1% of RAM then it will start to asynchronously write dirty pages to storage. The only workload where this is really bad: apps that write temp files and then quickly delete them (compiles) -- and you should probably be using local storage for such files anyway.` -`   Choice of vm.dirty_ratio is more dependent upon the workload, but in other contexts I have observed that response time fairness and stability is much better if you lower dirty ratio so that it doesn't take more than 2-5 seconds to flush all dirty pages to storage. ` +`Choice of vm.dirty_ratio is more dependent upon the workload, but in other contexts I have observed that response time fairness and stability is much better if you lower dirty ratio so that it doesn't take more than 2-5 seconds to flush all dirty pages to storage.` -`   block device parameters:` +` block device parameters:` -`   I'm not aware of any case where cfq scheduler actually helps Gluster server.   Unless server I/O threads correspond directly to end-users, I don't see how cfq can help you.  Deadline scheduler is a good choice.  I/O request queue has to be deep enough to allow scheduler to reorder requests to optimize away disk seeks.  The parameters max_sectors_kb and nr_requests are relevant for this.  For read-ahead, consider increasing it to the point where you prefetch for longer period of time than a disk seek (on order of 10 msec), so that you can avoid unnecessary disk seeks for multi-stream workloads.  This comes at the expense of I/O latency so don't overdo it.` +` I'm not aware of any case where cfq scheduler actually helps Gluster server. Unless server I/O threads correspond directly to end-users, I don't see how cfq can help you. Deadline scheduler is a good choice. I/O request queue has to be deep enough to allow scheduler to reorder requests to optimize away disk seeks. The parameters max_sectors_kb and nr_requests are relevant for this. For read-ahead, consider increasing it to the point where you prefetch for longer period of time than a disk seek (on order of 10 msec), so that you can avoid unnecessary disk seeks for multi-stream workloads. This comes at the expense of I/O latency so don't overdo it.` -`   network:` +` network:` -`   jumbo frames can increase throughput significantly for 10-GbE networks.` +` jumbo frames can increase throughput significantly for 10-GbE networks.` -`   Raise net.core.{r,w}mem_max to 540000 from default of 131071  (not 4 MB above, my previous recommendation).  Gluster 3.2 does setsockopt() call to use 1/2 MB mem for TCP socket buffer space.`\ -`       bengland`\ +` Raise net.core.{r,w}mem_max to 540000 from default of 131071 (not 4 MB above, my previous recommendation). Gluster 3.2 does setsockopt() call to use 1/2 MB mem for TCP socket buffer space.`\ +` bengland`\ ### comment:hjmangalam -`   Thanks very much for noting this info - the descriptions are VERY good.. 
I'm in the midst of debugging a misbehaving gluster that can't seem to handle small writes over IPoIB and this contains some useful pointers.` +` Thanks very much for noting this info - the descriptions are VERY good.. I'm in the midst of debugging a misbehaving gluster that can't seem to handle small writes over IPoIB and this contains some useful pointers.` -`   Some suggestions that might make this more immediately useful:` +` Some suggestions that might make this more immediately useful:` -`   - I'm assuming that this discussion refers to the gluster server nodes, not to the gluster native client nodes, yes?  If that's the case, are there are also kernel parameters or recommended settings for the client nodes?`\ -`   -  While there are some cases where you mention that a value should be changed to a particular # or %, in a number of cases you advise just increasing/decreasing the values, which for something like  a kernel parameter is probably not a useful suggestion.  Do I raise it by 10?  10%  2x? 10x?  ` +` - I'm assuming that this discussion refers to the gluster server nodes, not to the gluster native client nodes, yes? If that's the case, are there are also kernel parameters or recommended settings for the client nodes?`\ +`- While there are some cases where you mention that a value should be changed to a particular # or %, in a number of cases you advise just increasing/decreasing the values, which for something like a kernel parameter is probably not a useful suggestion. Do I raise it by 10? 10% 2x? 10x?` -`   I also ran across a complimentary page, which might be of  interest - it explains more of the vm variables, especially as it relates to writing.`\ -`   "Theory of Operation and Tuning for Write-Heavy Loads" `\ -`      ``   and refs therein.` -`       hjmangalam` +` I also ran across a complimentary page, which might be of interest - it explains more of the vm variables, especially as it relates to writing.`\ +`"Theory of Operation and Tuning for Write-Heavy Loads"`\ +` `` and refs therein.` +` hjmangalam` ### comment:bengland -`   Here are some additional suggestions based on recent testing:`\ -`   - scaling out number of clients -- you need to increase the size of the ARP tables on Gluster server if you want to support more than 1K clients mounting a gluster volume.  The defaults for RHEL6.3 were too low to support this, we used this:` +` Here are some additional suggestions based on recent testing:`\ +` - scaling out number of clients -- you need to increase the size of the ARP tables on Gluster server if you want to support more than 1K clients mounting a gluster volume. 
The defaults for RHEL6.3 were too low to support this, we used this:` -`   net.ipv4.neigh.default.gc_thresh2 = 2048`\ -`   net.ipv4.neigh.default.gc_thresh3 = 4096` +` net.ipv4.neigh.default.gc_thresh2 = 2048`\ +` net.ipv4.neigh.default.gc_thresh3 = 4096` -`   In addition, tunings common to webservers become relevant at this number of clients as well, such as netdev_max_backlog, tcp_fin_timeout, and somaxconn.` +` In addition, tunings common to webservers become relevant at this number of clients as well, such as netdev_max_backlog, tcp_fin_timeout, and somaxconn.` -`   Bonding mode 6 has been observed to increase replication write performance, I have no experience with bonding mode 4 but it should work if switch is properly configured, other bonding modes are a waste of time.` +` Bonding mode 6 has been observed to increase replication write performance, I have no experience with bonding mode 4 but it should work if switch is properly configured, other bonding modes are a waste of time.` -`       bengland`\ -`       3 months ago` +` bengland`\ +` 3 months ago` diff --git a/docs/Administrator-Guide/Logging.md b/docs/Administrator-Guide/Logging.md index acc5525..9ef19cd 100644 --- a/docs/Administrator-Guide/Logging.md +++ b/docs/Administrator-Guide/Logging.md @@ -8,37 +8,41 @@ Below lists the component, services, and functionality based logs in the Gluster glusterd logs are located at `/var/log/glusterfs/glusterd.log`. One glusterd log file per server. This log file also contains the snapshot and user logs. ## Gluster cli command: -gluster cli logs are located at `/var/log/glusterfs/cli.log`. Gluster commands executed on a node in a GlusterFS Trusted Storage Pool is logged in `/var/log/glusterfs/cmd_history.log`. + +gluster cli logs are located at `/var/log/glusterfs/cli.log`. Gluster commands executed on a node in a GlusterFS Trusted Storage Pool is logged in `/var/log/glusterfs/cmd_history.log`. ## Bricks: -Bricks logs are located at `/var/log/glusterfs/bricks/.log` . One log file per brick on the server + +Bricks logs are located at `/var/log/glusterfs/bricks/.log`. One log file per brick on the server ## Rebalance: -rebalance logs are located at `/var/log/glusterfs/VOLNAME-rebalance.log` . One log file per volume on the server. + +rebalance logs are located at `/var/log/glusterfs/VOLNAME-rebalance.log` . One log file per volume on the server. ## Self heal deamon: -self heal deamon are logged at `/var/log/glusterfs/glustershd.log`. One log file per server + +self heal deamon are logged at `/var/log/glusterfs/glustershd.log`. One log file per server ## Quota: `/var/log/glusterfs/quotad.log` are log of the quota daemons running on each node. `/var/log/glusterfs/quota-crawl.log` Whenever quota is enabled, a file system crawl is performed and the corresponding log is stored in this file. -`/var/log/glusterfs/quota-mount- VOLNAME.log` An auxiliary FUSE client is mounted in /VOLNAME of the glusterFS and the corresponding client logs found in this file. - - One log file per server (and per volume from quota-mount. +`/var/log/glusterfs/quota-mount- VOLNAME.log` An auxiliary FUSE client is mounted in /VOLNAME of the glusterFS and the corresponding client logs found in this file. One log file per server and per volume from quota-mount. ## Gluster NFS: -`/var/log/glusterfs/nfs.log ` One log file per server +`/var/log/glusterfs/nfs.log ` One log file per server ## SAMBA Gluster: -`/var/log/samba/glusterfs-VOLNAME-.log` . 
If the client mounts this on a glusterFS server node, the actual log file or the mount point may not be found. In such a case, the mount outputs of all the glusterFS type mount operations need to be considered. +`/var/log/samba/glusterfs-VOLNAME-.log` . If the client mounts this on a glusterFS server node, the actual log file or the mount point may not be found. In such a case, the mount outputs of all the glusterFS type mount operations need to be considered. ## Ganesha NFS : + `/var/log/nfs-ganesha.log` ## FUSE Mount: + `/var/log/glusterfs/.log ` ## Geo-replication: @@ -47,10 +51,13 @@ self heal deamon are logged at `/var/log/glusterfs/glustershd.log`. One log f `/var/log/glusterfs/geo-replication-secondary ` ## Gluster volume heal VOLNAME info command: + `/var/log/glusterfs/glfsheal-VOLNAME.log` . One log file per server on which the command is executed. ## Gluster-swift: + `/var/log/messages` ## SwiftKrbAuth: + `/var/log/httpd/error_log ` diff --git a/docs/Administrator-Guide/Managing-Snapshots.md b/docs/Administrator-Guide/Managing-Snapshots.md index b609f3b..3512c4c 100644 --- a/docs/Administrator-Guide/Managing-Snapshots.md +++ b/docs/Administrator-Guide/Managing-Snapshots.md @@ -9,15 +9,14 @@ GlusterFS volume snapshot feature is based on thinly provisioned LVM snapshot. To make use of snapshot feature GlusterFS volume should fulfill following pre-requisites: -* Each brick should be on an independent thinly provisioned LVM. -* Brick LVM should not contain any other data other than brick. -* None of the brick should be on a thick LVM. -* gluster version should be 3.6 and above. +- Each brick should be on an independent thinly provisioned LVM. +- Brick LVM should not contain any other data other than brick. +- None of the brick should be on a thick LVM. +- gluster version should be 3.6 and above. Details of how to create thin volume can be found at the following link. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/LV.html#thinly_provisioned_volume_creation - ## Few features of snapshot are: **Crash Consistency** @@ -26,13 +25,11 @@ when a snapshot is taken at a particular point-in-time, it is made sure that the taken snapshot is crash consistent. when the taken snapshot is restored, then the data is identical as it was at the time of taking a snapshot. - **Online Snapshot** When the snapshot is being taken the file system and its associated data continue to be available for the clients. - **Barrier** During snapshot creation some of the fops are blocked to guarantee crash @@ -95,7 +92,7 @@ gluster snapshot delete (all | | volume ) If snapname is specified then mentioned snapshot is deleted. If volname is specified then all snapshots belonging to that particular -volume is deleted. If keyword *all* is used then all snapshots belonging +volume is deleted. If keyword _all_ is used then all snapshots belonging to the system is deleted. ### Listing of available snaps @@ -104,7 +101,7 @@ to the system is deleted. gluster snapshot list [volname] ``` -Lists all snapshots taken. +Lists all snapshots taken. If volname is provided, then only the snapshots belonging to that particular volume is listed. @@ -125,14 +122,14 @@ for that particular volume, and the state of the snapshot. gluster snapshot status [(snapname | volume )] ``` -This command gives status of the snapshot. +This command gives status of the snapshot. 
The details included are snapshot brick path, volume group(LVM details), -status of the snapshot bricks, PID of the bricks, data percentage filled for +status of the snapshot bricks, PID of the bricks, data percentage filled for that particular volume group to which the snapshots belong to, and total size of the logical volume. If snapname is specified then status of the mentioned snapshot is displayed. -If volname is specified then status of all snapshots belonging to that volume +If volname is specified then status of all snapshots belonging to that volume is displayed. If both snapname and volname is not specified then status of all the snapshots present in the system are displayed. @@ -146,15 +143,15 @@ snapshot config [volname] ([snap-max-hard-limit ] [snap-max-soft-limit