diff --git a/docs/Administrator Guide/Storage Pools.md b/docs/Administrator Guide/Storage Pools.md deleted file mode 100644 index d448b64..0000000 --- a/docs/Administrator Guide/Storage Pools.md +++ /dev/null @@ -1,133 +0,0 @@ -# Managing Trusted Storage Pools - - -### Overview - -A trusted storage pool (TSP) is a trusted network of storage servers. Before you can configure a -GlusterFS volume, you must create a trusted storage pool of the storage servers -that will provide bricks to the volume by peer probing the servers. -The servers in a TSP are peers of each other. - -After installing Gluster on your servers and before creating a trusted storage pool, -each server belongs to a storage pool consisting of only that server. - -- [Adding Servers](#adding-servers) -- [Listing Servers](#listing-servers) -- [Viewing Peer Status](#peer-status) -- [Removing Servers](#removing-servers) - - -**Before you start**: - -- The servers used to create the storage pool must be resolvable by hostname. - -- The glusterd daemon must be running on all storage servers that you -want to add to the storage pool. See [Managing the glusterd Service](./Start Stop Daemon.md) for details. - -- The firewall on the servers must be configured to allow access to port 24007. - -The following commands were run on a TSP consisting of 3 servers - server1, server2, -and server3. - - -### Adding Servers - -To add a server to a TSP, peer probe it from a server already in the pool. 
- -```console -# gluster peer probe -``` - -For example, to add a new server4 to the cluster described above, probe it from one of the other servers: - -```console -server1# gluster peer probe server4 -Probe successful -``` - -Verify the peer status from the first server (server1): - -```console -server1# gluster peer status -Number of Peers: 3 - -Hostname: server2 -Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 -State: Peer in Cluster (Connected) - -Hostname: server3 -Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 -State: Peer in Cluster (Connected) - -Hostname: server4 -Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 -State: Peer in Cluster (Connected) -``` - - -### Listing Servers - -To list all nodes in the TSP: - -```console -server1# gluster pool list -UUID Hostname State -d18d36c5-533a-4541-ac92-c471241d5418 localhost Connected -5e987bda-16dd-43c2-835b-08b7d55e94e5 server2 Connected -1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 server3 Connected -3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 server4 Connected -``` - - -### Viewing Peer Status - -To view the status of the peers in the TSP: - -```console -server1# gluster peer status -Number of Peers: 3 - -Hostname: server2 -Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 -State: Peer in Cluster (Connected) - -Hostname: server3 -Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 -State: Peer in Cluster (Connected) - -Hostname: server4 -Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 -State: Peer in Cluster (Connected) -``` - - -### Removing Servers - -To remove a server from the TSP, run the following command from another server in the pool: - -```console -# gluster peer detach -``` - -For example, to remove server4 from the trusted storage pool: - -```console -server1# gluster peer detach server4 -Detach successful -``` - -Verify the peer status: - -```console -server1# gluster peer status -Number of Peers: 2 - -Hostname: server2 -Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 -State: Peer in Cluster (Connected) - -Hostname: server3 -Uuid: 
1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 -State: Peer in Cluster (Connected) -``` - diff --git a/docs/Administrator Guide/index.md b/docs/Administrator Guide/index.md deleted file mode 100644 index 1b8b2a5..0000000 --- a/docs/Administrator Guide/index.md +++ /dev/null @@ -1,74 +0,0 @@ -# Administration Guide - -1. Managing a Cluster - - * [Managing the Gluster Service](./Start Stop Daemon.md) - * [Managing Trusted Storage Pools](./Storage Pools.md) - -2. Setting Up Storage - - * [Brick Naming Conventions](./Brick Naming Conventions.md) - * [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md) - * [POSIX Access Control Lists](./Access Control Lists.md) - -3. [Setting Up Clients](./Setting Up Clients.md) - * [Handling of users that belong to many groups](./Handling-of-users-with-many-groups.md) - -4. Volumes - * [Setting Up Volumes](./Setting Up Volumes.md) - * [Managing Volumes](./Managing Volumes.md) - * [Modifying .vol files with a filter](./GlusterFS Filter.md) - -5. [Configuring NFS-Ganesha](./NFS-Ganesha GlusterFS Integration.md) - -6. Features - - * [Geo Replication](./Geo Replication.md) - * [Quotas](./Directory Quota.md) - * [Snapshots](./Managing Snapshots.md) - * [Trash](./Trash.md) - * [Hook Scripts](./Hook-scripts.md) - -7. Data Access With Other Interfaces - - * [Managing Object Store](./Object Storage.md) - * [Accessing GlusterFS using Cinder Hosts](./GlusterFS Cinder.md) - * [GlusterFS with Keystone](./GlusterFS Keystone Quickstart.md) - * [Install Gluster on Top of ZFS](./Gluster On ZFS.md) - * [Configuring Bareos to store backups on Gluster](./Bareos.md) - -8. [GlusterFS Service Logs and Locations](./Logging.md) - -9. [Monitoring Workload](./Monitoring Workload.md) - -10. [Securing GlusterFS Communication using SSL](./SSL.md) - -11. [Puppet Gluster](./Puppet.md) - -12. [RDMA Transport](./RDMA Transport.md) - -13. [GlusterFS iSCSI](./GlusterFS iSCSI.md) - -14. [Linux Kernel Tuning](./Linux Kernel Tuning.md) - -15. 
[Export and Netgroup Authentication](./Export And Netgroup Authentication.md) - -16. [Thin Arbiter volumes](./Thin-Arbiter-Volumes.md) - -17. [Trash for GlusterFS](./Trash.md) - -18. [Split brain and ways to deal with it](./Split brain and ways to deal with it.md) - -19. [Arbiter volumes and quorum options](./arbiter-volumes-and-quorum.md) - -20. [Mandatory Locks](./Mandatory Locks.md) - -21. [GlusterFS coreutilities](./GlusterFS Coreutils.md) - -22. [Events APIs](./Events APIs.md) - -23. [Building QEMU With gfapi For Debian Based Systems](./Building QEMU With gfapi For Debian Based Systems.md) - -24. Appendices - * [Network Configuration Techniques](./Network Configurations Techniques.md) - * [Performance Testing](./Performance Testing.md) diff --git a/docs/Administrator Guide/Access Control Lists.md b/docs/Administrator-Guide/Access-Control-Lists.md similarity index 100% rename from docs/Administrator Guide/Access Control Lists.md rename to docs/Administrator-Guide/Access-Control-Lists.md diff --git a/docs/Administrator Guide/Accessing Gluster from Windows.md b/docs/Administrator-Guide/Accessing-Gluster-from-Windows.md similarity index 100% rename from docs/Administrator Guide/Accessing Gluster from Windows.md rename to docs/Administrator-Guide/Accessing-Gluster-from-Windows.md diff --git a/docs/Administrator Guide/Bareos.md b/docs/Administrator-Guide/Bareos.md similarity index 100% rename from docs/Administrator Guide/Bareos.md rename to docs/Administrator-Guide/Bareos.md diff --git a/docs/Administrator Guide/Brick Naming Conventions.md b/docs/Administrator-Guide/Brick-Naming-Conventions.md similarity index 100% rename from docs/Administrator Guide/Brick Naming Conventions.md rename to docs/Administrator-Guide/Brick-Naming-Conventions.md diff --git a/docs/Administrator Guide/Building QEMU With gfapi For Debian Based Systems.md b/docs/Administrator-Guide/Building-QEMU-With-gfapi-For-Debian-Based-Systems.md similarity index 100% rename from docs/Administrator 
Guide/Building QEMU With gfapi For Debian Based Systems.md rename to docs/Administrator-Guide/Building-QEMU-With-gfapi-For-Debian-Based-Systems.md diff --git a/docs/Administrator Guide/Consul.md b/docs/Administrator-Guide/Consul.md similarity index 100% rename from docs/Administrator Guide/Consul.md rename to docs/Administrator-Guide/Consul.md diff --git a/docs/Administrator Guide/Directory Quota.md b/docs/Administrator-Guide/Directory-Quota.md similarity index 100% rename from docs/Administrator Guide/Directory Quota.md rename to docs/Administrator-Guide/Directory-Quota.md diff --git a/docs/Administrator Guide/Events APIs.md b/docs/Administrator-Guide/Events-APIs.md similarity index 100% rename from docs/Administrator Guide/Events APIs.md rename to docs/Administrator-Guide/Events-APIs.md diff --git a/docs/Administrator Guide/Export And Netgroup Authentication.md b/docs/Administrator-Guide/Export-And-Netgroup-Authentication.md similarity index 100% rename from docs/Administrator Guide/Export And Netgroup Authentication.md rename to docs/Administrator-Guide/Export-And-Netgroup-Authentication.md diff --git a/docs/Administrator Guide/Geo Replication.md b/docs/Administrator-Guide/Geo-Replication.md similarity index 100% rename from docs/Administrator Guide/Geo Replication.md rename to docs/Administrator-Guide/Geo-Replication.md diff --git a/docs/Administrator Guide/Gluster On ZFS.md b/docs/Administrator-Guide/Gluster-On-ZFS.md similarity index 100% rename from docs/Administrator Guide/Gluster On ZFS.md rename to docs/Administrator-Guide/Gluster-On-ZFS.md diff --git a/docs/Administrator Guide/GlusterFS Cinder.md b/docs/Administrator-Guide/GlusterFS-Cinder.md similarity index 100% rename from docs/Administrator Guide/GlusterFS Cinder.md rename to docs/Administrator-Guide/GlusterFS-Cinder.md diff --git a/docs/Administrator Guide/GlusterFS Coreutils.md b/docs/Administrator-Guide/GlusterFS-Coreutils.md similarity index 100% rename from docs/Administrator Guide/GlusterFS 
Coreutils.md rename to docs/Administrator-Guide/GlusterFS-Coreutils.md diff --git a/docs/Administrator Guide/GlusterFS Filter.md b/docs/Administrator-Guide/GlusterFS-Filter.md similarity index 100% rename from docs/Administrator Guide/GlusterFS Filter.md rename to docs/Administrator-Guide/GlusterFS-Filter.md diff --git a/docs/Administrator Guide/GlusterFS Introduction.md b/docs/Administrator-Guide/GlusterFS-Introduction.md similarity index 94% rename from docs/Administrator Guide/GlusterFS Introduction.md rename to docs/Administrator-Guide/GlusterFS-Introduction.md index ab6b067..ddf35c7 100644 --- a/docs/Administrator Guide/GlusterFS Introduction.md +++ b/docs/Administrator-Guide/GlusterFS-Introduction.md @@ -16,7 +16,7 @@ Gluster is a scalable, distributed file system that aggregates disk storage reso * Open Source -![640px-glusterfs_architecture](../images/640px-GlusterFS_Architecture.png) +![640px-glusterfs_architecture](../images/640px-GlusterFS-Architecture.png) diff --git a/docs/Administrator Guide/GlusterFS Keystone Quickstart.md b/docs/Administrator-Guide/GlusterFS-Keystone-Quickstart.md similarity index 100% rename from docs/Administrator Guide/GlusterFS Keystone Quickstart.md rename to docs/Administrator-Guide/GlusterFS-Keystone-Quickstart.md diff --git a/docs/Administrator Guide/GlusterFS iSCSI.md b/docs/Administrator-Guide/GlusterFS-iSCSI.md similarity index 100% rename from docs/Administrator Guide/GlusterFS iSCSI.md rename to docs/Administrator-Guide/GlusterFS-iSCSI.md diff --git a/docs/Administrator Guide/Handling-of-users-with-many-groups.md b/docs/Administrator-Guide/Handling-of-users-with-many-groups.md similarity index 100% rename from docs/Administrator Guide/Handling-of-users-with-many-groups.md rename to docs/Administrator-Guide/Handling-of-users-with-many-groups.md diff --git a/docs/Administrator Guide/Hook-scripts.md b/docs/Administrator-Guide/Hook-scripts.md similarity index 71% rename from docs/Administrator Guide/Hook-scripts.md rename 
to docs/Administrator-Guide/Hook-scripts.md index eea05ab..4845967 100644 --- a/docs/Administrator Guide/Hook-scripts.md +++ b/docs/Administrator-Guide/Hook-scripts.md @@ -1,23 +1,22 @@ # Managing GlusterFS Volume Life-Cycle Extensions with Hook Scripts - Glusterfs allows automation of operations by user-written scripts. For every operation, you can execute a *pre* and a *post* script. ### Pre Scripts These scripts are run before the occurrence of the event. You can write a script to automate activities like managing system-wide services. For example, you can write a script to stop exporting the SMB share corresponding to the volume before you stop the volume. -### Post Scripts +### Post Scripts These scripts are run after execution of the event. For example, you can write a script to export the SMB share corresponding to the volume after you start the volume. You can run scripts for the following events: - Creating a volume - Starting a volume - Adding a brick - Removing a brick - Tuning volume options - Stopping a volume - Deleting a volume ++ Creating a volume ++ Starting a volume ++ Adding a brick ++ Removing a brick ++ Tuning volume options ++ Stopping a volume ++ Deleting a volume ### Naming Convention While creating the file names of your scripts, you must follow the naming convention followed in your underlying file system like XFS. @@ -27,37 +26,38 @@ While creating the file names of your scripts, you must follow the naming conven ### Location of Scripts This section provides information on the folders where the scripts must be placed. 
When you create a trusted storage pool, the following directories are created: - /var/lib/glusterd/hooks/1/create/ - /var/lib/glusterd/hooks/1/delete/ - /var/lib/glusterd/hooks/1/start/ - /var/lib/glusterd/hooks/1/stop/ - /var/lib/glusterd/hooks/1/set/ - /var/lib/glusterd/hooks/1/add-brick/ - /var/lib/glusterd/hooks/1/remove-brick/ ++ `/var/lib/glusterd/hooks/1/create/` ++ `/var/lib/glusterd/hooks/1/delete/` ++ `/var/lib/glusterd/hooks/1/start/` ++ `/var/lib/glusterd/hooks/1/stop/` ++ `/var/lib/glusterd/hooks/1/set/` ++ `/var/lib/glusterd/hooks/1/add-brick/` ++ `/var/lib/glusterd/hooks/1/remove-brick/` After creating a script, you must ensure to save the script in its respective folder on all the nodes of the trusted storage pool. The location of the script dictates whether the script must be executed before or after an event. Scripts are provided with the command line argument `--volname=VOLNAME` to specify the volume. Command-specific additional arguments are provided for the following volume operations: Start volume --first=yes, if the volume is the first to be started - --first=no, for otherwise + --first=no, for otherwise Stop volume --last=yes, if the volume is to be stopped last. - --last=no, for otherwise + --last=no, for otherwise Set volume -o key=value - For every key, value is specified in volume set command. + For every key, value is specified in volume set command. ### Prepackaged Scripts Gluster provides scripts to export Samba (SMB) share when you start a volume and to remove the share when you stop the volume. These scripts are available at: `/var/lib/glusterd/hooks/1/start/post` and `/var/lib/glusterd/hooks/1/stop/pre`. By default, the scripts are enabled. When you start a volume using `gluster volume start VOLNAME`, the S30samba-start.sh script performs the following: - Adds Samba share configuration details of the volume to the smb.conf file - Mounts the volume through FUSE and adds an entry in /etc/fstab for the same. 
- Restarts Samba to run with updated configuration ++ Adds Samba share configuration details of the volume to the smb.conf file ++ Mounts the volume through FUSE and adds an entry in /etc/fstab for the same. ++ Restarts Samba to run with updated configuration When you stop the volume using `gluster volume stop VOLNAME`, the S30samba-stop.sh script performs the following: - Removes the Samba share details of the volume from the smb.conf file - Unmounts the FUSE mount point and removes the corresponding entry in /etc/fstab - Restarts Samba to run with updated configuration ++ Removes the Samba share details of the volume from the smb.conf file ++ Unmounts the FUSE mount point and removes the corresponding entry in + /etc/fstab ++ Restarts Samba to run with updated configuration diff --git a/docs/Administrator Guide/Linux Kernel Tuning.md b/docs/Administrator-Guide/Linux-Kernel-Tuning.md similarity index 100% rename from docs/Administrator Guide/Linux Kernel Tuning.md rename to docs/Administrator-Guide/Linux-Kernel-Tuning.md diff --git a/docs/Administrator Guide/Logging.md b/docs/Administrator-Guide/Logging.md similarity index 100% rename from docs/Administrator Guide/Logging.md rename to docs/Administrator-Guide/Logging.md diff --git a/docs/Administrator Guide/Managing Snapshots.md b/docs/Administrator-Guide/Managing-Snapshots.md similarity index 100% rename from docs/Administrator Guide/Managing Snapshots.md rename to docs/Administrator-Guide/Managing-Snapshots.md diff --git a/docs/Administrator Guide/Managing Volumes.md b/docs/Administrator-Guide/Managing-Volumes.md similarity index 100% rename from docs/Administrator Guide/Managing Volumes.md rename to docs/Administrator-Guide/Managing-Volumes.md diff --git a/docs/Administrator Guide/Mandatory Locks.md b/docs/Administrator-Guide/Mandatory-Locks.md similarity index 100% rename from docs/Administrator Guide/Mandatory Locks.md rename to docs/Administrator-Guide/Mandatory-Locks.md diff --git a/docs/Administrator 
Guide/Monitoring Workload.md b/docs/Administrator-Guide/Monitoring-Workload.md similarity index 100% rename from docs/Administrator Guide/Monitoring Workload.md rename to docs/Administrator-Guide/Monitoring-Workload.md diff --git a/docs/Administrator Guide/NFS-Ganesha GlusterFS Integration.md b/docs/Administrator-Guide/NFS-Ganesha-GlusterFS-Integration.md similarity index 100% rename from docs/Administrator Guide/NFS-Ganesha GlusterFS Integration.md rename to docs/Administrator-Guide/NFS-Ganesha-GlusterFS-Integration.md diff --git a/docs/Administrator Guide/Network Configurations Techniques.md b/docs/Administrator-Guide/Network-Configurations-Techniques.md similarity index 100% rename from docs/Administrator Guide/Network Configurations Techniques.md rename to docs/Administrator-Guide/Network-Configurations-Techniques.md diff --git a/docs/Administrator Guide/Object Storage.md b/docs/Administrator-Guide/Object-Storage.md similarity index 100% rename from docs/Administrator Guide/Object Storage.md rename to docs/Administrator-Guide/Object-Storage.md diff --git a/docs/Administrator Guide/Performance Testing.md b/docs/Administrator-Guide/Performance-Testing.md similarity index 99% rename from docs/Administrator Guide/Performance Testing.md rename to docs/Administrator-Guide/Performance-Testing.md index 0e62cbd..8a8d258 100644 --- a/docs/Administrator Guide/Performance Testing.md +++ b/docs/Administrator-Guide/Performance-Testing.md @@ -64,14 +64,14 @@ capabilities of a distributed filesystem. - [iozone](http://www.iozone.org) - for pure-workload large-file tests - [parallel-libgfapi](https://github.com/bengland2/parallel-libgfapi) - for pure-workload libgfapi tests -The "netmist" mixed-workload generator of SPECsfs2014 may be suitable in some cases, but is not technically an open-source tool. This tool was written by Don Capps, who was an author of iozone. 
+The "netmist" mixed-workload generator of SPECsfs2014 may be suitable in some cases, but is not technically an open-source tool. This tool was written by Don Capps, who was an author of iozone. ### fio -fio is extremely powerful and is easily installed from traditional distros, unlike iozone, and has increasingly powerful distributed test capabilities described in its --client parameter upstream as of May 2015. To use this mode, start by launching an fio "server" instance on each workload generator host using: +fio is extremely powerful and is easily installed from traditional distros, unlike iozone, and has increasingly powerful distributed test capabilities described in its --client parameter upstream as of May 2015. To use this mode, start by launching an fio "server" instance on each workload generator host using: fio --server --daemonize=/var/run/fio-svr.pid - + And make sure your firewall allows port 8765 through for it. You can now run tests on sets of hosts using syntax like: fio --client=workload-generator.list --output-format=json my-workload.fiojob @@ -83,14 +83,14 @@ fio also has different I/O engines, in particular Huamin Chen authored the ***li Limitations of fio in distributed mode: - stonewalling - fio calculates throughput based on when the last thread finishes a test run. In contrast, iozone calculates throughput by default based on when the FIRST thread finishes the workload. This can lead to (deceptively?) higher throughput results for iozone, since there are inevitably some "straggler" threads limping to the finish line later than others. It is possible in some cases to overcome this limitation by specifying a time limit for the test. This works well for random I/O tests, where typically you do not want to read/write the entire file/device anyway. 
-- inaccuracy when response times > 1 sec - at least in some cases fio has reported excessively high IOPS when fio threads encounter response times much greater than 1 second, this can happen for distributed storage when there is unfairness in the implementation. +- inaccuracy when response times > 1 sec - at least in some cases fio has reported excessively high IOPS when fio threads encounter response times much greater than 1 second, this can happen for distributed storage when there is unfairness in the implementation. - io engines are not integrated. ### smallfile Distributed I/O Benchmark [Smallfile](https://github.com/distributed-system-analysis/smallfile) is a python-based small-file distributed POSIX workload generator which can be used to quickly measure performance for a variety of metadata-intensive workloads across an entire cluster. It has no dependencies on any specific filesystem or implementation AFAIK. It runs on Linux, Windows and should work on most Unixes too. It is intended to complement use of iozone benchmark for measuring performance of large-file workloads, and borrows certain concepts from iozone and Ric Wheeler's fs_mark. It was developed by Ben England starting in March 2009, and is now open-source (Apache License v2). -Here is a typical simple sequence of tests where files laid down in an initial create test are then used in subsequent tests. There are many more smallfile operation types than these 5 (see doc), but these are the most commonly used ones. +Here is a typical simple sequence of tests where files laid down in an initial create test are then used in subsequent tests. There are many more smallfile operation types than these 5 (see doc), but these are the most commonly used ones. 
SMF="./smallfile_cli.py --top /mnt/glusterfs/smf --host-set h1,h2,h3,h4 --threads 8 --file-size 4 --files 10000 --response-times Y " $SMF --operation create @@ -162,7 +162,7 @@ within that host, and iozone-pathname is the full pathname of the iozone executable to use on that host. Be sure that every target host can resolve the hostname of host where the iozone command was run. All target hosts must permit password-less ssh access from the host running -the command. +the command. For example: (Here, my-ip-address refers to the machine from where the iozone is being run) @@ -309,7 +309,7 @@ running the "gluster volume profile" and "gluster volume top" commands. These extremely useful tools will help you understand both the workload and the bottlenecks which are limiting performance of that workload. -TBS: links to documentation for these tools and scripts that reduce the data to usable form. +TBS: links to documentation for these tools and scripts that reduce the data to usable form. Configuration ------------- @@ -331,7 +331,7 @@ in order of importance: Network configuration has a huge impact on performance of distributed storage, but is often not given the attention it deserves during the planning and installation phases of the cluster lifecycle. Fortunately, -[network configuration](./Network Configurations Techniques.md) +[network configuration](./Network-Configurations-Techniques.md) can be enhanced significantly, often without additional hardware. 
To measure network performance, consider use of a diff --git a/docs/Administrator Guide/Performance Tuning.md b/docs/Administrator-Guide/Performance-Tuning.md similarity index 100% rename from docs/Administrator Guide/Performance Tuning.md rename to docs/Administrator-Guide/Performance-Tuning.md diff --git a/docs/Administrator Guide/Puppet.md b/docs/Administrator-Guide/Puppet.md similarity index 100% rename from docs/Administrator Guide/Puppet.md rename to docs/Administrator-Guide/Puppet.md diff --git a/docs/Administrator Guide/RDMA Transport.md b/docs/Administrator-Guide/RDMA-Transport.md similarity index 100% rename from docs/Administrator Guide/RDMA Transport.md rename to docs/Administrator-Guide/RDMA-Transport.md diff --git a/docs/Administrator Guide/SSL.md b/docs/Administrator-Guide/SSL.md similarity index 100% rename from docs/Administrator Guide/SSL.md rename to docs/Administrator-Guide/SSL.md diff --git a/docs/Administrator Guide/Setting Up Clients.md b/docs/Administrator-Guide/Setting-Up-Clients.md similarity index 100% rename from docs/Administrator Guide/Setting Up Clients.md rename to docs/Administrator-Guide/Setting-Up-Clients.md diff --git a/docs/Administrator Guide/Setting Up Volumes.md b/docs/Administrator-Guide/Setting-Up-Volumes.md similarity index 100% rename from docs/Administrator Guide/Setting Up Volumes.md rename to docs/Administrator-Guide/Setting-Up-Volumes.md diff --git a/docs/Administrator Guide/Split brain and ways to deal with it.md b/docs/Administrator-Guide/Split-brain-and-ways-to-deal-with-it.md similarity index 100% rename from docs/Administrator Guide/Split brain and ways to deal with it.md rename to docs/Administrator-Guide/Split-brain-and-ways-to-deal-with-it.md diff --git a/docs/Administrator Guide/Start Stop Daemon.md b/docs/Administrator-Guide/Start-Stop-Daemon.md similarity index 100% rename from docs/Administrator Guide/Start Stop Daemon.md rename to docs/Administrator-Guide/Start-Stop-Daemon.md diff --git 
a/docs/Administrator-Guide/Storage-Pools.md b/docs/Administrator-Guide/Storage-Pools.md new file mode 100644 index 0000000..d953384 --- /dev/null +++ b/docs/Administrator-Guide/Storage-Pools.md @@ -0,0 +1,124 @@ +# Managing Trusted Storage Pools + + +### Overview + +A trusted storage pool (TSP) is a trusted network of storage servers. Before you can configure a +GlusterFS volume, you must create a trusted storage pool of the storage servers +that will provide bricks to the volume by peer probing the servers. +The servers in a TSP are peers of each other. + +After installing Gluster on your servers and before creating a trusted storage pool, +each server belongs to a storage pool consisting of only that server. + +- [Adding Servers](#adding-servers) +- [Listing Servers](#listing-servers) +- [Viewing Peer Status](#viewing-peer-status) +- [Removing Servers](#removing-servers) + + + +**Before you start**: + +- The servers used to create the storage pool must be resolvable by hostname. + +- The glusterd daemon must be running on all storage servers that you +want to add to the storage pool. See [Managing the glusterd Service](./Start-Stop-Daemon.md) for details. + +- The firewall on the servers must be configured to allow access to port 24007. + +The following commands were run on a TSP consisting of three servers: server1, server2, +and server3. + + +### Adding Servers + +To add a server to a TSP, peer probe it from a server already in the pool.
+ + # gluster peer probe <server> + +For example, to add a new server4 to the cluster described above, probe it from one of the other servers: + + server1# gluster peer probe server4 + Probe successful + +Verify the peer status from the first server (server1): + + server1# gluster peer status + Number of Peers: 3 + + Hostname: server2 + Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 + State: Peer in Cluster (Connected) + + Hostname: server3 + Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 + State: Peer in Cluster (Connected) + + Hostname: server4 + Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 + State: Peer in Cluster (Connected) + + + + +### Listing Servers + +To list all nodes in the TSP: + + server1# gluster pool list + UUID Hostname State + d18d36c5-533a-4541-ac92-c471241d5418 localhost Connected + 5e987bda-16dd-43c2-835b-08b7d55e94e5 server2 Connected + 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 server3 Connected + 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 server4 Connected + + + + +### Viewing Peer Status + +To view the status of the peers in the TSP: + + server1# gluster peer status + Number of Peers: 3 + + Hostname: server2 + Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 + State: Peer in Cluster (Connected) + + Hostname: server3 + Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 + State: Peer in Cluster (Connected) + + Hostname: server4 + Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 + State: Peer in Cluster (Connected) + + + + +### Removing Servers + +To remove a server from the TSP, run the following command from another server in the pool: + + # gluster peer detach <server> + +For example, to remove server4 from the trusted storage pool: + + server1# gluster peer detach server4 + Detach successful + + +Verify the peer status: + + server1# gluster peer status + Number of Peers: 2 + + Hostname: server2 + Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 + State: Peer in Cluster (Connected) + + Hostname: server3 + Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 + State: Peer in Cluster (Connected) diff --git 
a/docs/Administrator Guide/Thin-Arbiter-Volumes.md b/docs/Administrator-Guide/Thin-Arbiter-Volumes.md similarity index 100% rename from docs/Administrator Guide/Thin-Arbiter-Volumes.md rename to docs/Administrator-Guide/Thin-Arbiter-Volumes.md diff --git a/docs/Administrator Guide/Trash.md b/docs/Administrator-Guide/Trash.md similarity index 100% rename from docs/Administrator Guide/Trash.md rename to docs/Administrator-Guide/Trash.md diff --git a/docs/Administrator Guide/arbiter-volumes-and-quorum.md b/docs/Administrator-Guide/arbiter-volumes-and-quorum.md similarity index 100% rename from docs/Administrator Guide/arbiter-volumes-and-quorum.md rename to docs/Administrator-Guide/arbiter-volumes-and-quorum.md diff --git a/docs/Administrator Guide/formatting-and-mounting-bricks.md b/docs/Administrator-Guide/formatting-and-mounting-bricks.md similarity index 100% rename from docs/Administrator Guide/formatting-and-mounting-bricks.md rename to docs/Administrator-Guide/formatting-and-mounting-bricks.md diff --git a/docs/Administrator-Guide/index.md b/docs/Administrator-Guide/index.md new file mode 100644 index 0000000..d9d4df0 --- /dev/null +++ b/docs/Administrator-Guide/index.md @@ -0,0 +1,75 @@ +# Administration Guide + +1. Managing a Cluster + + * [Managing the Gluster Service](./Start-Stop-Daemon.md) + * [Managing Trusted Storage Pools](./Storage-Pools.md) + +2. Setting Up Storage + + * [Brick Naming Conventions](./Brick-Naming-Conventions.md) + * [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md) + * [POSIX Access Control Lists](./Access-Control-Lists.md) + +3. [Setting Up Clients](./Setting-Up-Clients.md) + * [Handling of users that belong to many groups](./Handling-of-users-with-many-groups.md) + +4. Volumes + + * [Setting Up Volumes](./Setting-Up-Volumes.md) + * [Managing Volumes](./Managing-Volumes.md) + * [Modifying .vol files with a filter](./GlusterFS-Filter.md) + +5. [Configuring NFS-Ganesha](./NFS-Ganesha-GlusterFS-Integration.md) + +6. 
Features + + * [Geo Replication](./Geo-Replication.md) + * [Quotas](./Directory-Quota.md) + * [Snapshots](./Managing-Snapshots.md) + * [Trash](./Trash.md) + * [Hook Scripts](./Hook-scripts.md) + +7. Data Access With Other Interfaces + + * [Managing Object Store](./Object-Storage.md) + * [Accessing GlusterFS using Cinder Hosts](./GlusterFS-Cinder.md) + * [GlusterFS with Keystone](./GlusterFS-Keystone-Quickstart.md) + * [Install Gluster on Top of ZFS](./Gluster-On-ZFS.md) + * [Configuring Bareos to store backups on Gluster](./Bareos.md) + +8. [GlusterFS Service Logs and Locations](./Logging.md) + +9. [Monitoring Workload](./Monitoring-Workload.md) + +10. [Securing GlusterFS Communication using SSL](./SSL.md) + +11. [Puppet Gluster](./Puppet.md) + +12. [RDMA Transport](./RDMA-Transport.md) + +13. [GlusterFS iSCSI](./GlusterFS-iSCSI.md) + +14. [Linux Kernel Tuning](./Linux-Kernel-Tuning.md) + +15. [Export and Netgroup Authentication](./Export-And-Netgroup-Authentication.md) + +16. [Thin Arbiter volumes](./Thin-Arbiter-Volumes.md) + +17. [Trash for GlusterFS](./Trash.md) + +18. [Split brain and ways to deal with it](./Split-brain-and-ways-to-deal-with-it.md) + +19. [Arbiter volumes and quorum options](./arbiter-volumes-and-quorum.md) + +20. [Mandatory Locks](./Mandatory-Locks.md) + +21. [GlusterFS coreutilities](./GlusterFS-Coreutils.md) + +22. [Events APIs](./Events-APIs.md) + +23. [Building QEMU With gfapi For Debian Based Systems](./Building-QEMU-With-gfapi-For-Debian-Based-Systems.md) + +24. 
Appendices + * [Network Configuration Techniques](./Network-Configurations-Techniques.md) + * [Performance Testing](./Performance-Testing.md) diff --git a/docs/Administrator Guide/overview.md b/docs/Administrator-Guide/overview.md similarity index 100% rename from docs/Administrator Guide/overview.md rename to docs/Administrator-Guide/overview.md diff --git a/docs/Administrator Guide/setting-up-storage.md b/docs/Administrator-Guide/setting-up-storage.md similarity index 74% rename from docs/Administrator Guide/setting-up-storage.md rename to docs/Administrator-Guide/setting-up-storage.md index 8f92cb2..1b7affc 100644 --- a/docs/Administrator Guide/setting-up-storage.md +++ b/docs/Administrator-Guide/setting-up-storage.md @@ -4,6 +4,6 @@ A volume is a logical collection of bricks where each brick is an export directo Before creating a volume, you need to set up the bricks that will form the volume. - - [Brick Naming Conventions](./Brick Naming Conventions.md) + - [Brick Naming Conventions](./Brick-Naming-Conventions.md) - [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md) - - [Posix ACLS](./Access Control Lists.md) + - [Posix ACLS](./Access-Control-Lists.md) diff --git a/docs/GlusterFS Tools/README.md b/docs/GlusterFS-Tools/README.md similarity index 57% rename from docs/GlusterFS Tools/README.md rename to docs/GlusterFS-Tools/README.md index c46447b..bafd575 100644 --- a/docs/GlusterFS Tools/README.md +++ b/docs/GlusterFS-Tools/README.md @@ -2,4 +2,4 @@ GlusterFS Tools --------------- - [glusterfind](./glusterfind.md) -- [gfind missing files](./gfind_missing_files.md) +- [gfind missing files](./gfind-missing-files.md) diff --git a/docs/GlusterFS Tools/gfind_missing_files.md b/docs/GlusterFS-Tools/gfind-missing-files.md similarity index 100% rename from docs/GlusterFS Tools/gfind_missing_files.md rename to docs/GlusterFS-Tools/gfind-missing-files.md diff --git a/docs/GlusterFS Tools/glusterfind.md b/docs/GlusterFS-Tools/glusterfind.md similarity 
index 100% rename from docs/GlusterFS Tools/glusterfind.md rename to docs/GlusterFS-Tools/glusterfind.md diff --git a/docs/Install-Guide/Common_criteria.md b/docs/Install-Guide/Common-criteria.md similarity index 98% rename from docs/Install-Guide/Common_criteria.md rename to docs/Install-Guide/Common-criteria.md index 8aff487..5c6aa4f 100644 --- a/docs/Install-Guide/Common_criteria.md +++ b/docs/Install-Guide/Common-criteria.md @@ -70,7 +70,7 @@ Other notes: being able to operate in a trusted environment without firewalls can mean huge gains in performance, and is recommended. In case you absolutely need to set up a firewall, have a look at - [Setting up clients](../Administrator Guide/Setting Up Clients.md) for + [Setting up clients](../Administrator-Guide/Setting-Up-Clients.md) for information on the ports used. Click here to [get started](../Quick-Start-Guide/Quickstart.md) diff --git a/docs/Install-Guide/Community_Packages.md b/docs/Install-Guide/Community-Packages.md similarity index 100% rename from docs/Install-Guide/Community_Packages.md rename to docs/Install-Guide/Community-Packages.md diff --git a/docs/Install-Guide/Install.md b/docs/Install-Guide/Install.md index 41aba5d..7825738 100644 --- a/docs/Install-Guide/Install.md +++ b/docs/Install-Guide/Install.md @@ -7,7 +7,7 @@ such as compat-readline5 ###### Community Packages -Packages are provided according to this [table](./Community_Packages.md). +Packages are provided according to this [table](./Community-Packages.md). ###### For Debian diff --git a/docs/Install-Guide/Overview.md b/docs/Install-Guide/Overview.md index 34498f5..a0460af 100644 --- a/docs/Install-Guide/Overview.md +++ b/docs/Install-Guide/Overview.md @@ -1,13 +1,19 @@ -# Overview +# Overview ### Purpose -The Install Guide (IG) is aimed at providing the sequence of steps needed for setting up Gluster. 
It contains a reasonable degree of detail which helps an administrator to understand the terminology, the choices and how to configure the deployment to the storage needs of their application workload. The [Quick Start Guide](../Quick-Start-Guide/Quickstart.md) (QSG) is designed to get a deployment with default choices and is aimed at those who want to spend less time to get to a deployment. +The Install Guide (IG) is aimed at providing the sequence of steps needed for +setting up Gluster. It contains a reasonable degree of detail which helps an +administrator to understand the terminology, the choices and how to configure +the deployment to the storage needs of their application workload. The [Quick +Start Guide](../Quick-Start-Guide/Quickstart.md) (QSG) is designed to get a +deployment with default choices and is aimed at those who want to spend less +time to get to a deployment. -After you deploy Gluster by following these steps, we recommend that -you read the [Gluster Admin Guide](../Administrator Guide/index.md) (AG) to learn how to administer Gluster and -how to select a volume type that fits your needs. Also, be sure to -enlist the help of the Gluster community via the IRC or, Slack channels (see https://www.gluster.org/community/) or Q&A -section. +After you deploy Gluster by following these steps, we recommend that you read +the [Gluster Admin Guide](../Administrator-Guide/index.md) to learn how to +administer Gluster and how to select a volume type that fits your needs. Also, +be sure to enlist the help of the Gluster community via the IRC or, Slack +channels (see https://www.gluster.org/community/) or Q&A section. ### Overview @@ -109,4 +115,4 @@ In a perfect world, sure. Having the hardware be the same means less troubleshooting when the fires start popping up. But plenty of people deploy Gluster on mix and match hardware, and successfully. 
-Get started by checking some [Common Criteria](./Common_criteria.md)
+Get started by checking some [Common Criteria](./Common-criteria.md)
diff --git a/docs/Install-Guide/Setup_Bare_metal.md b/docs/Install-Guide/Setup-Bare-metal.md
similarity index 100%
rename from docs/Install-Guide/Setup_Bare_metal.md
rename to docs/Install-Guide/Setup-Bare-metal.md
diff --git a/docs/Install-Guide/Setup_aws.md b/docs/Install-Guide/Setup-aws.md
similarity index 100%
rename from docs/Install-Guide/Setup_aws.md
rename to docs/Install-Guide/Setup-aws.md
diff --git a/docs/Install-Guide/Setup_virt.md b/docs/Install-Guide/Setup-virt.md
similarity index 100%
rename from docs/Install-Guide/Setup_virt.md
rename to docs/Install-Guide/Setup-virt.md
diff --git a/docs/Upgrade-Guide/README.md b/docs/Upgrade-Guide/README.md
index f3de25f..daf09c1 100644
--- a/docs/Upgrade-Guide/README.md
+++ b/docs/Upgrade-Guide/README.md
@@ -1,26 +1,25 @@
 Upgrading GlusterFS
 -------------------
 
-- [About op-version](./op_version.md)
+- [About op-version](./op-version.md)
 
 If you are using GlusterFS version 5.x or above, you can upgrade it to the following:
 
-- [Upgrading to 8](./upgrade_to_8.md)
-- [Upgrading to 7](./upgrade_to_7.md)
-- [Upgrading to 6](./upgrade_to_6.md)
-
+- [Upgrading to 8](./upgrade-to-8.md)
+- [Upgrading to 7](./upgrade-to-7.md)
+- [Upgrading to 6](./upgrade-to-6.md)
 
 If you are using GlusterFS version 4.x or above, you can upgrade it to the following:
 
-- [Upgrading to 6](./upgrade_to_6.md)
-- [Upgrading to 5](./upgrade_to_5.md)
+- [Upgrading to 6](./upgrade-to-6.md)
+- [Upgrading to 5](./upgrade-to-5.md)
 
 If you are using GlusterFS version 3.4.x or above, you can upgrade it to following:
 
-- [Upgrading to 3.5](./upgrade_to_3.5.md)
-- [Upgrading to 3.6](./upgrade_to_3.6.md)
-- [Upgrading to 3.7](./upgrade_to_3.7.md)
-- [Upgrading to 3.9](./upgrade_to_3.9.md)
-- [Upgrading to 3.10](./upgrade_to_3.10.md)
-- [Upgrading to 3.11](./upgrade_to_3.11.md)
-- [Upgrading to 3.12](./upgrade_to_3.12.md)
-- [Upgrading to 3.13](./upgrade_to_3.13.md)
+- [Upgrading to 3.5](./upgrade-to-3.5.md)
+- [Upgrading to 3.6](./upgrade-to-3.6.md)
+- [Upgrading to 3.7](./upgrade-to-3.7.md)
+- [Upgrading to 3.9](./upgrade-to-3.9.md)
+- [Upgrading to 3.10](./upgrade-to-3.10.md)
+- [Upgrading to 3.11](./upgrade-to-3.11.md)
+- [Upgrading to 3.12](./upgrade-to-3.12.md)
+- [Upgrading to 3.13](./upgrade-to-3.13.md)
diff --git a/docs/Upgrade-Guide/Generic_Upgrade_procedure.md b/docs/Upgrade-Guide/generic-upgrade-procedure.md
similarity index 93%
rename from docs/Upgrade-Guide/Generic_Upgrade_procedure.md
rename to docs/Upgrade-Guide/generic-upgrade-procedure.md
index fbdec0b..2829069 100644
--- a/docs/Upgrade-Guide/Generic_Upgrade_procedure.md
+++ b/docs/Upgrade-Guide/generic-upgrade-procedure.md
@@ -1,6 +1,5 @@
 # Generic Upgrade procedure
 
-
 ### Pre-upgrade notes
 - Online upgrade is only possible with replicated and distributed replicate volumes
 - Online upgrade is not supported for dispersed or distributed dispersed volumes
@@ -21,27 +20,27 @@ This procedure involves upgrading **one server at a time**, while keeping the vo
        # systemctl stop glusterd
        # systemctl stop glustereventsd
        # killall glusterfs glusterfsd glusterd
-
+
 2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)
 
-3. Install Gluster new-version, below example shows how to create a repository on fedora and use it to upgrade :
+3. Install Gluster new-version, below example shows how to create a repository on fedora and use it to upgrade :
 
-    3.1 Create a private repository (assuming /new-gluster-rpms/ folder has the new rpms ):
+    3.1 Create a private repository (assuming /new-gluster-rpms/ folder has the new rpms ):
 
        # createrepo /new-gluster-rpms/
 
-    3.2 Create the .repo file in /etc/yum.d/ :
+    3.2 Create the .repo file in /etc/yum.d/ :
 
-       # cat /etc/yum.d/newglusterrepo.repo
+       # cat /etc/yum.d/newglusterrepo.repo
        [newglusterrepo]
        name=NewGlusterRepo
        baseurl="file:///new-gluster-rpms/"
        gpgcheck=0
        enabled=1
 
-    3.3 Upgrade glusterfs, for example to upgrade glusterfs-server to x.y version :
-
-       # yum update glusterfs-server-x.y.fc30.x86_64.rpm
+    3.3 Upgrade glusterfs, for example to upgrade glusterfs-server to x.y version :
+
+       # yum update glusterfs-server-x.y.fc30.x86_64.rpm
 
 4. Ensure that version reflects new-version in the output of,
@@ -78,7 +77,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 1. On every server in the trusted storage pool, stop all gluster services, either using the command below, or through other means,
 
        ```sh
-
+
        # systemctl stop glusterd
        # systemctl stop glustereventsd
        # killall glusterfs glusterfsd glusterd
@@ -111,7 +110,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to new-version version as well
 - Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.
@@ -119,7 +118,7 @@ Perform the following steps post upgrading the entire trusted storage pool,
 
 > **NOTE:** If you have ever enabled quota on your volumes then after the upgrade
 is done, you will have to restart all the nodes in the cluster one by one so as to
-fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols// directory.`
+fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols// directory.`
 The peers may go into `Peer rejected` state while doing so but once all the nodes
 are rebooted everything will be back to normal.
diff --git a/docs/Upgrade-Guide/op_version.md b/docs/Upgrade-Guide/op-version.md
similarity index 100%
rename from docs/Upgrade-Guide/op_version.md
rename to docs/Upgrade-Guide/op-version.md
diff --git a/docs/Upgrade-Guide/upgrade_to_3.10.md b/docs/Upgrade-Guide/upgrade-to-3.10.md
similarity index 97%
rename from docs/Upgrade-Guide/upgrade_to_3.10.md
rename to docs/Upgrade-Guide/upgrade-to-3.10.md
index 967e4ca..a8790a1 100644
--- a/docs/Upgrade-Guide/upgrade_to_3.10.md
+++ b/docs/Upgrade-Guide/upgrade-to-3.10.md
@@ -1,6 +1,6 @@
 ## Upgrade procedure to Gluster 3.10.0, from Gluster 3.9.x, 3.8.x and 3.7.x
 
-### Pre-upgrade notes
+### Pre-upgrade notes
 - Online upgrade is only possible with replicated and distributed replicate volumes
 - Online upgrade is not supported for dispersed or distributed dispersed volumes
 - Ensure no configuration changes are done during the upgrade
@@ -82,7 +82,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.10 version as well
 
 ### Upgrade procedure for clients
diff --git a/docs/Upgrade-Guide/upgrade_to_3.11.md b/docs/Upgrade-Guide/upgrade-to-3.11.md
similarity index 97%
rename from docs/Upgrade-Guide/upgrade_to_3.11.md
rename to docs/Upgrade-Guide/upgrade-to-3.11.md
index da4ab3d..e828f0b 100644
--- a/docs/Upgrade-Guide/upgrade_to_3.11.md
+++ b/docs/Upgrade-Guide/upgrade-to-3.11.md
@@ -2,7 +2,7 @@
 **NOTE:** Upgrade procedure remains the same as with the 3.10 release
 
-### Pre-upgrade notes
+### Pre-upgrade notes
 - Online upgrade is only possible with replicated and distributed replicate volumes
 - Online upgrade is not supported for dispersed or distributed dispersed volumes
 - Ensure no configuration changes are done during the upgrade
@@ -88,7 +88,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.11 version as well
 
 ### Upgrade procedure for clients
diff --git a/docs/Upgrade-Guide/upgrade_to_3.12.md b/docs/Upgrade-Guide/upgrade-to-3.12.md
similarity index 98%
rename from docs/Upgrade-Guide/upgrade_to_3.12.md
rename to docs/Upgrade-Guide/upgrade-to-3.12.md
index 46d6e4c..59ba3ee 100644
--- a/docs/Upgrade-Guide/upgrade_to_3.12.md
+++ b/docs/Upgrade-Guide/upgrade-to-3.12.md
@@ -91,7 +91,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.12 version as well
 
 ### Upgrade procedure for clients
diff --git a/docs/Upgrade-Guide/upgrade_to_3.13.md b/docs/Upgrade-Guide/upgrade-to-3.13.md
similarity index 98%
rename from docs/Upgrade-Guide/upgrade_to_3.13.md
rename to docs/Upgrade-Guide/upgrade-to-3.13.md
index 64aced6..e7f986f 100644
--- a/docs/Upgrade-Guide/upgrade_to_3.13.md
+++ b/docs/Upgrade-Guide/upgrade-to-3.13.md
@@ -81,7 +81,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.13 version as well
 
 ### Upgrade procedure for clients
diff --git a/docs/Upgrade-Guide/upgrade_to_3.5.md b/docs/Upgrade-Guide/upgrade-to-3.5.md
similarity index 100%
rename from docs/Upgrade-Guide/upgrade_to_3.5.md
rename to docs/Upgrade-Guide/upgrade-to-3.5.md
diff --git a/docs/Upgrade-Guide/upgrade_to_3.6.md b/docs/Upgrade-Guide/upgrade-to-3.6.md
similarity index 100%
rename from docs/Upgrade-Guide/upgrade_to_3.6.md
rename to docs/Upgrade-Guide/upgrade-to-3.6.md
diff --git a/docs/Upgrade-Guide/upgrade_to_3.7.md b/docs/Upgrade-Guide/upgrade-to-3.7.md
similarity index 100%
rename from docs/Upgrade-Guide/upgrade_to_3.7.md
rename to docs/Upgrade-Guide/upgrade-to-3.7.md
diff --git a/docs/Upgrade-Guide/upgrade_to_3.8.md b/docs/Upgrade-Guide/upgrade-to-3.8.md
similarity index 100%
rename from docs/Upgrade-Guide/upgrade_to_3.8.md
rename to docs/Upgrade-Guide/upgrade-to-3.8.md
diff --git a/docs/Upgrade-Guide/upgrade_to_3.9.md b/docs/Upgrade-Guide/upgrade-to-3.9.md
similarity index 91%
rename from docs/Upgrade-Guide/upgrade_to_3.9.md
rename to docs/Upgrade-Guide/upgrade-to-3.9.md
index 91cc7f6..fd3b815 100644
--- a/docs/Upgrade-Guide/upgrade_to_3.9.md
+++ b/docs/Upgrade-Guide/upgrade-to-3.9.md
@@ -2,7 +2,7 @@
 The steps to uprade to Gluster 3.9 are the same as for upgrading to
 Gluster 3.8. Please follow the detailed instructions from [the 3.8 upgrade
-guide](upgrade_to_3.8.md).
+guide](upgrade-to-3.8.md).
 Note that there is only a single difference, related to the `op-version`:
diff --git a/docs/Upgrade-Guide/upgrade_to_4.0.md b/docs/Upgrade-Guide/upgrade-to-4.0.md
similarity index 98%
rename from docs/Upgrade-Guide/upgrade_to_4.0.md
rename to docs/Upgrade-Guide/upgrade-to-4.0.md
index d4ba2b2..1f98cb9 100644
--- a/docs/Upgrade-Guide/upgrade_to_4.0.md
+++ b/docs/Upgrade-Guide/upgrade-to-4.0.md
@@ -81,7 +81,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 4.0 version as well
 - Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.
diff --git a/docs/Upgrade-Guide/upgrade_to_4.1.md b/docs/Upgrade-Guide/upgrade-to-4.1.md
similarity index 98%
rename from docs/Upgrade-Guide/upgrade_to_4.1.md
rename to docs/Upgrade-Guide/upgrade-to-4.1.md
index 188cf40..6166e42 100644
--- a/docs/Upgrade-Guide/upgrade_to_4.1.md
+++ b/docs/Upgrade-Guide/upgrade-to-4.1.md
@@ -95,7 +95,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
 ### Post upgrade steps
 Perform the following steps post upgrading the entire trusted storage pool,
 
-- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
+- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op-version.md) section for further details
 - Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 4.1 version as well
 - Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.
diff --git a/docs/Upgrade-Guide/upgrade_to_5.md b/docs/Upgrade-Guide/upgrade-to-5.md
similarity index 93%
rename from docs/Upgrade-Guide/upgrade_to_5.md
rename to docs/Upgrade-Guide/upgrade-to-5.md
index b20cb42..2006b16 100644
--- a/docs/Upgrade-Guide/upgrade_to_5.md
+++ b/docs/Upgrade-Guide/upgrade-to-5.md
@@ -2,7 +2,7 @@
 > **NOTE:** Upgrade procedure remains the same as with 4.1 release
 
-Refer, to the [Upgrading to 4.1](./upgrade_to_4.1.md) guide and follow
+Refer, to the [Upgrading to 4.1](./upgrade-to-4.1.md) guide and follow
 documented instructions, replacing 5 when you encounter 4.1 in the guide as
 the version reference.
diff --git a/docs/Upgrade-Guide/upgrade_to_6.md b/docs/Upgrade-Guide/upgrade-to-6.md
similarity index 97%
rename from docs/Upgrade-Guide/upgrade_to_6.md
rename to docs/Upgrade-Guide/upgrade-to-6.md
index 0290956..74f5785 100644
--- a/docs/Upgrade-Guide/upgrade_to_6.md
+++ b/docs/Upgrade-Guide/upgrade-to-6.md
@@ -5,7 +5,7 @@ aware of the features and fixes provided with the release.
 
 > **NOTE:** Upgrade procedure remains the same as with 4.1.x release
 
-Refer, to the [Upgrading to 4.1](./upgrade_to_4.1.md) guide and follow
+Refer, to the [Upgrading to 4.1](./upgrade-to-4.1.md) guide and follow
 documented instructions, replacing 6 when you encounter 4.1 in the guide as
 the version reference.
diff --git a/docs/Upgrade-Guide/upgrade_to_7.md b/docs/Upgrade-Guide/upgrade-to-7.md
similarity index 91%
rename from docs/Upgrade-Guide/upgrade_to_7.md
rename to docs/Upgrade-Guide/upgrade-to-7.md
index 33f2eab..5cf7323 100644
--- a/docs/Upgrade-Guide/upgrade_to_7.md
+++ b/docs/Upgrade-Guide/upgrade-to-7.md
@@ -5,13 +5,13 @@ aware of the features and fixes provided with the release.
 
 > **NOTE:** Upgrade procedure remains the same as with 4.1.x release
 
-Refer, to the [Upgrading to 4.1](./upgrade_to_4.1.md) guide and follow
+Refer, to the [Upgrading to 4.1](./upgrade-to-4.1.md) guide and follow
 documented instructions, replacing 7 when you encounter 4.1 in the guide as
 the version reference.
 
 > **NOTE:** If you have ever enabled quota on your volumes then after the upgrade
 is done, you will have to restart all the nodes in the cluster one by one so as to
-fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols// directory.`
+fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols// directory.`
 The peers may go into `Peer rejected` state while doing so but once all the nodes
 are rebooted everything will be back to normal.
@@ -43,6 +43,4 @@ upgrading the cluster.
 
 ### Deprecated translators and upgrade procedure for volumes using these features
 
-[If you are upgrading from a release prior to release-6 be aware of deprecated xlators and functionality](https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/#deprecated-translators-and-upgrade-procedure-for-volumes-using-these-features).
-
-
+[If you are upgrading from a release prior to release-6 be aware of deprecated xlators and functionality](https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/#deprecated-translators-and-upgrade-procedure-for-volumes-using-these-features).
diff --git a/docs/Upgrade-Guide/upgrade_to_8.md b/docs/Upgrade-Guide/upgrade-to-8.md
similarity index 89%
rename from docs/Upgrade-Guide/upgrade_to_8.md
rename to docs/Upgrade-Guide/upgrade-to-8.md
index 7ed4356..6e45e0a 100644
--- a/docs/Upgrade-Guide/upgrade_to_8.md
+++ b/docs/Upgrade-Guide/upgrade-to-8.md
@@ -3,9 +3,9 @@
 We recommend reading the [release notes for 8.0](../release-notes/8.0.md) to be
 aware of the features and fixes provided with the release.
 
-> **NOTE:** Before following the generic upgrade procedure checkout the "**Major Issues**" section given below.
+> **NOTE:** Before following the generic upgrade procedure checkout the "**Major Issues**" section given below.
 
-Refer, to the [generic upgrade procedure](./Generic_Upgrade_procedure.md) guide and follow documented instructions.
+Refer, to the [generic upgrade procedure](./generic-upgrade-procedure.md) guide and follow documented instructions.
 
 ## Major issues
 
@@ -30,7 +30,7 @@ If these are set, then unset them using the following commands,
 
     # gluster volume reset