
updating registries and files replacing space and underscore with dash (#613)

* updating registries and files replacing space and underscore with dash
* Change more paths and file names to use dash instead of underscore
* Add hook scripts, bug reporting and generic upgrade guides to TOC

Co-authored-by: adityaramteke <adityaramteke05icr@gmail.com>
Author: Rune Juhl Jacobsen
Date: 2020-11-24 04:49:28 +01:00
Committed by: GitHub
Parent: f9384f01b3
Commit: 520a7d8a7f
98 changed files with 373 additions and 379 deletions

View File

@@ -1,133 +0,0 @@
# Managing Trusted Storage Pools
### Overview
A trusted storage pool (TSP) is a trusted network of storage servers. Before you can configure a
GlusterFS volume, you must create a trusted storage pool of the storage servers
that will provide bricks to the volume by peer probing the servers.
The servers in a TSP are peers of each other.
After installing Gluster on your servers and before creating a trusted storage pool,
each server belongs to a storage pool consisting of only that server.
- [Adding Servers](#adding-servers)
- [Listing Servers](#listing-servers)
- [Viewing Peer Status](#peer-status)
- [Removing Servers](#removing-servers)
**Before you start**:
- The servers used to create the storage pool must be resolvable by hostname.
- The glusterd daemon must be running on all storage servers that you
want to add to the storage pool. See [Managing the glusterd Service](./Start Stop Daemon.md) for details.
- The firewall on the servers must be configured to allow access to port 24007.
The following commands were run on a TSP consisting of 3 servers - server1, server2,
and server3.
<a name="adding-servers"></a>
### Adding Servers
To add a server to a TSP, peer probe it from a server already in the pool.
```console
# gluster peer probe <server>
```
For example, to add a new server4 to the cluster described above, probe it from one of the other servers:
```console
server1# gluster peer probe server4
Probe successful
```
Verify the peer status from the first server (server1):
```console
server1# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
Hostname: server4
Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)
```
<a name="listing-servers"></a>
### Listing Servers
To list all nodes in the TSP:
```console
server1# gluster pool list
UUID Hostname State
d18d36c5-533a-4541-ac92-c471241d5418 localhost Connected
5e987bda-16dd-43c2-835b-08b7d55e94e5 server2 Connected
1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 server3 Connected
3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 server4 Connected
```
<a name="peer-status"></a>
### Viewing Peer Status
To view the status of the peers in the TSP:
```console
server1# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
Hostname: server4
Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)
```
<a name="removing-servers"></a>
### Removing Servers
To remove a server from the TSP, run the following command from another server in the pool:
```console
# gluster peer detach <server>
```
For example, to remove server4 from the trusted storage pool:
```console
server1# gluster peer detach server4
Detach successful
```
Verify the peer status:
```console
server1# gluster peer status
Number of Peers: 2
Hostname: server2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
```

View File

@@ -1,74 +0,0 @@
# Administration Guide
1. Managing a Cluster
* [Managing the Gluster Service](./Start Stop Daemon.md)
* [Managing Trusted Storage Pools](./Storage Pools.md)
2. Setting Up Storage
* [Brick Naming Conventions](./Brick Naming Conventions.md)
* [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md)
* [POSIX Access Control Lists](./Access Control Lists.md)
3. [Setting Up Clients](./Setting Up Clients.md)
* [Handling of users that belong to many groups](./Handling-of-users-with-many-groups.md)
4. Volumes
* [Setting Up Volumes](./Setting Up Volumes.md)
* [Managing Volumes](./Managing Volumes.md)
* [Modifying .vol files with a filter](./GlusterFS Filter.md)
5. [Configuring NFS-Ganesha](./NFS-Ganesha GlusterFS Integration.md)
6. Features
* [Geo Replication](./Geo Replication.md)
* [Quotas](./Directory Quota.md)
* [Snapshots](./Managing Snapshots.md)
* [Trash](./Trash.md)
* [Hook Scripts](./Hook-scripts.md)
7. Data Access With Other Interfaces
* [Managing Object Store](./Object Storage.md)
* [Accessing GlusterFS using Cinder Hosts](./GlusterFS Cinder.md)
* [GlusterFS with Keystone](./GlusterFS Keystone Quickstart.md)
* [Install Gluster on Top of ZFS](./Gluster On ZFS.md)
* [Configuring Bareos to store backups on Gluster](./Bareos.md)
8. [GlusterFS Service Logs and Locations](./Logging.md)
9. [Monitoring Workload](./Monitoring Workload.md)
10. [Securing GlusterFS Communication using SSL](./SSL.md)
11. [Puppet Gluster](./Puppet.md)
12. [RDMA Transport](./RDMA Transport.md)
13. [GlusterFS iSCSI](./GlusterFS iSCSI.md)
14. [Linux Kernel Tuning](./Linux Kernel Tuning.md)
15. [Export and Netgroup Authentication](./Export And Netgroup Authentication.md)
16. [Thin Arbiter volumes](./Thin-Arbiter-Volumes.md)
17. [Trash for GlusterFS](./Trash.md)
18. [Split brain and ways to deal with it](./Split brain and ways to deal with it.md)
19. [Arbiter volumes and quorum options](./arbiter-volumes-and-quorum.md)
20. [Mandatory Locks](./Mandatory Locks.md)
21. [GlusterFS coreutilities](./GlusterFS Coreutils.md)
22. [Events APIs](./Events APIs.md)
23. [Building QEMU With gfapi For Debian Based Systems](./Building QEMU With gfapi For Debian Based Systems.md)
24. Appendices
* [Network Configuration Techniques](./Network Configurations Techniques.md)
* [Performance Testing](./Performance Testing.md)

View File

@@ -16,7 +16,7 @@ Gluster is a scalable, distributed file system that aggregates disk storage reso
* Open Source
![640px-glusterfs_architecture](../images/640px-GlusterFS_Architecture.png)
![640px-glusterfs_architecture](../images/640px-GlusterFS-Architecture.png)

View File

@@ -1,23 +1,22 @@
# Managing GlusterFS Volume Life-Cycle Extensions with Hook Scripts
Glusterfs allows automation of operations by user-written scripts. For every operation, you can execute a *pre* and a *post* script.
### Pre Scripts
These scripts are run before the occurrence of the event. You can write a script to automate activities like managing system-wide services. For example, you can write a script to stop exporting the SMB share corresponding to the volume before you stop the volume.
### Post Scripts
These scripts are run after execution of the event. For example, you can write a script to export the SMB share corresponding to the volume after you start the volume.
You can run scripts for the following events:
Creating a volume
Starting a volume
Adding a brick
Removing a brick
Tuning volume options
Stopping a volume
Deleting a volume
+ Creating a volume
+ Starting a volume
+ Adding a brick
+ Removing a brick
+ Tuning volume options
+ Stopping a volume
+ Deleting a volume
### Naming Convention
When naming your scripts, you must follow the naming conventions of the underlying file system (for example, XFS).
@@ -27,37 +26,38 @@ While creating the file names of your scripts, you must follow the naming conven
### Location of Scripts
This section provides information on the folders where the scripts must be placed. When you create a trusted storage pool, the following directories are created:
/var/lib/glusterd/hooks/1/create/
/var/lib/glusterd/hooks/1/delete/
/var/lib/glusterd/hooks/1/start/
/var/lib/glusterd/hooks/1/stop/
/var/lib/glusterd/hooks/1/set/
/var/lib/glusterd/hooks/1/add-brick/
/var/lib/glusterd/hooks/1/remove-brick/
+ `/var/lib/glusterd/hooks/1/create/`
+ `/var/lib/glusterd/hooks/1/delete/`
+ `/var/lib/glusterd/hooks/1/start/`
+ `/var/lib/glusterd/hooks/1/stop/`
+ `/var/lib/glusterd/hooks/1/set/`
+ `/var/lib/glusterd/hooks/1/add-brick/`
+ `/var/lib/glusterd/hooks/1/remove-brick/`
After creating a script, save it in its respective folder on all the nodes of the trusted storage pool. The location of the script dictates whether it runs before or after an event. Scripts are provided with the command-line argument `--volname=VOLNAME` to specify the volume. Command-specific additional arguments are provided for the following volume operations:
Start volume
--first=yes, if the volume is the first to be started
--first=no, otherwise
Stop volume
--last=yes, if the volume is to be stopped last.
--last=no, otherwise
Set volume
-o key=value
For each key specified in the volume set command, its value is passed.
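For illustration, a minimal post-start hook could look like the sketch below; the file name `S99-log-start.sh` and the log path are hypothetical and not part of the stock scripts. It simply records the arguments glusterd passes in.

```console
# cat /var/lib/glusterd/hooks/1/start/post/S99-log-start.sh
#!/bin/bash
# Hypothetical example hook: log which volume was started and the extra
# arguments (--volname=VOLNAME and --first=yes|no are passed by glusterd).
echo "$(date): start hook invoked with: $*" >> /var/log/glusterfs/hook-start.log

# chmod +x /var/lib/glusterd/hooks/1/start/post/S99-log-start.sh
```

As noted above, the script has to be copied to the same directory on every node of the pool; it generally also needs to be executable (hence the `chmod`) for glusterd to run it.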
### Prepackaged Scripts
Gluster provides scripts to export a Samba (SMB) share when you start a volume and to remove the share when you stop the volume. These scripts are available at: `/var/lib/glusterd/hooks/1/start/post` and `/var/lib/glusterd/hooks/1/stop/pre`. By default, the scripts are enabled.
When you start a volume using `gluster volume start VOLNAME`, the S30samba-start.sh script performs the following:
Adds Samba share configuration details of the volume to the smb.conf file
Mounts the volume through FUSE and adds an entry in /etc/fstab for the same.
Restarts Samba to run with updated configuration
+ Adds Samba share configuration details of the volume to the smb.conf file
+ Mounts the volume through FUSE and adds an entry in /etc/fstab for the same.
+ Restarts Samba to run with updated configuration
When you stop the volume using `gluster volume stop VOLNAME`, the S30samba-stop.sh script performs the following:
Removes the Samba share details of the volume from the smb.conf file
Unmounts the FUSE mount point and removes the corresponding entry in /etc/fstab
Restarts Samba to run with updated configuration
+ Removes the Samba share details of the volume from the smb.conf file
+ Unmounts the FUSE mount point and removes the corresponding entry in
/etc/fstab
+ Restarts Samba to run with updated configuration
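To check which prepackaged hook scripts are present on a node (output varies by installation), listing the directories named above is usually enough:

```console
# ls -l /var/lib/glusterd/hooks/1/start/post/
# ls -l /var/lib/glusterd/hooks/1/stop/pre/
```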

View File

@@ -64,14 +64,14 @@ capabilities of a distributed filesystem.
- [iozone](http://www.iozone.org) - for pure-workload large-file tests
- [parallel-libgfapi](https://github.com/bengland2/parallel-libgfapi) - for pure-workload libgfapi tests
The "netmist" mixed-workload generator of SPECsfs2014 may be suitable in some cases, but is not technically an open-source tool. This tool was written by Don Capps, who was an author of iozone.
The "netmist" mixed-workload generator of SPECsfs2014 may be suitable in some cases, but is not technically an open-source tool. This tool was written by Don Capps, who was an author of iozone.
### fio
fio is extremely powerful and is easily installed from traditional distros, unlike iozone, and has increasingly powerful distributed test capabilities described in its --client parameter upstream as of May 2015. To use this mode, start by launching an fio "server" instance on each workload generator host using:
fio --server --daemonize=/var/run/fio-svr.pid
And make sure your firewall allows port 8765 through for it. You can now run tests on sets of hosts using syntax like:
fio --client=workload-generator.list --output-format=json my-workload.fiojob
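The contents of `my-workload.fiojob` are not shown here; the block below is only a hypothetical sketch of what such a job file might contain (all paths and values are illustrative). Using `time_based` with a `runtime` limit is one way to reduce the straggler-thread effect discussed below.

```console
# cat my-workload.fiojob
; hypothetical job file -- every value here is illustrative
[global]
directory=/mnt/glusterfs/fio
ioengine=libaio
direct=1
bs=64k
size=1g
; stop on a time limit rather than on file size
time_based=1
runtime=120

[randread]
rw=randread
numjobs=4
```

When launched with `--client` as above, each host listed in workload-generator.list runs this job against its own mount of the volume.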
@@ -83,14 +83,14 @@ fio also has different I/O engines, in particular Huamin Chen authored the ***li
Limitations of fio in distributed mode:
- stonewalling - fio calculates throughput based on when the last thread finishes a test run. In contrast, iozone calculates throughput by default based on when the FIRST thread finishes the workload. This can lead to (deceptively?) higher throughput results for iozone, since there are inevitably some "straggler" threads limping to the finish line later than others. It is possible in some cases to overcome this limitation by specifying a time limit for the test. This works well for random I/O tests, where typically you do not want to read/write the entire file/device anyway.
- inaccuracy when response times > 1 sec - at least in some cases fio has reported excessively high IOPS when fio threads encounter response times much greater than 1 second; this can happen for distributed storage when there is unfairness in the implementation.
- io engines are not integrated.
### smallfile Distributed I/O Benchmark
[Smallfile](https://github.com/distributed-system-analysis/smallfile) is a python-based small-file distributed POSIX workload generator which can be used to quickly measure performance for a variety of metadata-intensive workloads across an entire cluster. It has no dependencies on any specific filesystem or implementation AFAIK. It runs on Linux, Windows and should work on most Unixes too. It is intended to complement use of iozone benchmark for measuring performance of large-file workloads, and borrows certain concepts from iozone and Ric Wheeler's fs_mark. It was developed by Ben England starting in March 2009, and is now open-source (Apache License v2).
Here is a typical simple sequence of tests where files laid down in an initial create test are then used in subsequent tests. There are many more smallfile operation types than these 5 (see doc), but these are the most commonly used ones.
SMF="./smallfile_cli.py --top /mnt/glusterfs/smf --host-set h1,h2,h3,h4 --threads 8 --file-size 4 --files 10000 --response-times Y "
$SMF --operation create
@@ -162,7 +162,7 @@ within that host, and iozone-pathname is the full pathname of the iozone
executable to use on that host. Be sure that every target host can
resolve the hostname of host where the iozone command was run. All
target hosts must permit password-less ssh access from the host running
the command.
For example: (Here, my-ip-address refers to the machine from where the iozone is being run)
@@ -309,7 +309,7 @@ running the "gluster volume profile" and "gluster volume top" commands.
These extremely useful tools will help you understand both the workload
and the bottlenecks which are limiting performance of that workload.
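For example, a typical profiling sequence looks like the following (the volume name `myvol` is illustrative):

```console
# gluster volume profile myvol start
  ... run the workload under test ...
# gluster volume profile myvol info
# gluster volume profile myvol stop
# gluster volume top myvol read list-cnt 10
```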
TBS: links to documentation for these tools and scripts that reduce the data to usable form.
Configuration
-------------
@@ -331,7 +331,7 @@ in order of importance:
Network configuration has a huge impact on performance of distributed storage, but is often not given the
attention it deserves during the planning and installation phases of the
cluster lifecycle. Fortunately,
[network configuration](./Network Configurations Techniques.md)
[network configuration](./Network-Configurations-Techniques.md)
can be enhanced significantly, often without additional hardware.
To measure network performance, consider use of a

View File

@@ -0,0 +1,124 @@
# Managing Trusted Storage Pools
### Overview
A trusted storage pool (TSP) is a trusted network of storage servers. Before you can configure a
GlusterFS volume, you must create a trusted storage pool of the storage servers
that will provide bricks to the volume by peer probing the servers.
The servers in a TSP are peers of each other.
After installing Gluster on your servers and before creating a trusted storage pool,
each server belongs to a storage pool consisting of only that server.
- [Adding Servers](#adding-servers)
- [Listing Servers](#listing-servers)
- [Viewing Peer Status](#peer-status)
- [Removing Servers](#removing-servers)
**Before you start**:
- The servers used to create the storage pool must be resolvable by hostname.
- The glusterd daemon must be running on all storage servers that you
want to add to the storage pool. See [Managing the glusterd Service](./Start-Stop-Daemon.md) for details.
- The firewall on the servers must be configured to allow access to port 24007.
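As a quick pre-check on each server, you can verify that glusterd is running and open port 24007; the firewalld commands below are just one common way to do this (adjust to the firewall your distribution uses):

```console
# systemctl status glusterd
# firewall-cmd --permanent --add-port=24007/tcp
# firewall-cmd --reload
```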
The following commands were run on a TSP consisting of 3 servers - server1, server2,
and server3.
<a name="adding-servers"></a>
### Adding Servers
To add a server to a TSP, peer probe it from a server already in the pool.
# gluster peer probe <server>
For example, to add a new server4 to the cluster described above, probe it from one of the other servers:
server1# gluster peer probe server4
Probe successful
Verify the peer status from the first server (server1):
server1# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
Hostname: server4
Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)
<a name="listing-servers"></a>
### Listing Servers
To list all nodes in the TSP:
server1# gluster pool list
UUID Hostname State
d18d36c5-533a-4541-ac92-c471241d5418 localhost Connected
5e987bda-16dd-43c2-835b-08b7d55e94e5 server2 Connected
1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 server3 Connected
3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7 server4 Connected
<a name="peer-status"></a>
### Viewing Peer Status
To view the status of the peers in the TSP:
server1# gluster peer status
Number of Peers: 3
Hostname: server2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
Hostname: server4
Uuid: 3e0cabaa-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)
<a name="removing-servers"></a>
### Removing Servers
To remove a server from the TSP, run the following command from another server in the pool:
# gluster peer detach <server>
For example, to remove server4 from the trusted storage pool:
server1# gluster peer detach server4
Detach successful
Verify the peer status:
server1# gluster peer status
Number of Peers: 2
Hostname: server2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
Hostname: server3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)

View File

@@ -0,0 +1,75 @@
# Administration Guide
1. Managing a Cluster
* [Managing the Gluster Service](./Start-Stop-Daemon.md)
* [Managing Trusted Storage Pools](./Storage-Pools.md)
2. Setting Up Storage
* [Brick Naming Conventions](./Brick-Naming-Conventions.md)
* [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md)
* [POSIX Access Control Lists](./Access-Control-Lists.md)
3. [Setting Up Clients](./Setting-Up-Clients.md)
* [Handling of users that belong to many groups](./Handling-of-users-with-many-groups.md)
4. Volumes
* [Setting Up Volumes](./Setting-Up-Volumes.md)
* [Managing Volumes](./Managing-Volumes.md)
* [Modifying .vol files with a filter](./GlusterFS-Filter.md)
5. [Configuring NFS-Ganesha](./NFS-Ganesha-GlusterFS-Integration.md)
6. Features
* [Geo Replication](./Geo-Replication.md)
* [Quotas](./Directory-Quota.md)
* [Snapshots](./Managing-Snapshots.md)
* [Trash](./Trash.md)
7. Data Access With Other Interfaces
* [Managing Object Store](./Object-Storage.md)
* [Accessing GlusterFS using Cinder Hosts](./GlusterFS-Cinder.md)
* [GlusterFS with Keystone](./GlusterFS-Keystone-Quickstart.md)
* [Install Gluster on Top of ZFS](./Gluster-On-ZFS.md)
* [Configuring Bareos to store backups on Gluster](./Bareos.md)
8. [GlusterFS Service Logs and Locations](./Logging.md)
9. [Monitoring Workload](./Monitoring-Workload.md)
10. [Securing GlusterFS Communication using SSL](./SSL.md)
11. [Puppet Gluster](./Puppet.md)
12. [RDMA Transport](./RDMA-Transport.md)
13. [GlusterFS iSCSI](./GlusterFS-iSCSI.md)
14. [Linux Kernel Tuning](./Linux-Kernel-Tuning.md)
15. [Export and Netgroup Authentication](./Export-And-Netgroup-Authentication.md)
16. [Thin Arbiter volumes](./Thin-Arbiter-Volumes.md)
17. [Trash for GlusterFS](./Trash.md)
18. [Split brain and ways to deal with it](Split-brain-and-ways-to-deal-with-it.md)
19. [Arbiter volumes and quorum options](./arbiter-volumes-and-quorum.md)
20. [Mandatory Locks](./Mandatory-Locks.md)
21. [GlusterFS coreutilities](./GlusterFS-Coreutils.md)
22. [Events APIs](./Events-APIs.md)
23. [Building QEMU With gfapi For Debian Based Systems](./Building-QEMU-With-gfapi-For-Debian-Based-Systems.md)
24. Appendices
* [Network Configuration Techniques](./Network-Configurations-Techniques.md)
* [Performance Testing](./Performance-Testing.md)

View File

@@ -4,6 +4,6 @@ A volume is a logical collection of bricks where each brick is an export directo
Before creating a volume, you need to set up the bricks that will form the volume.
- [Brick Naming Conventions](./Brick Naming Conventions.md)
- [Brick Naming Conventions](./Brick-Naming-Conventions.md)
- [Formatting and Mounting Bricks](./formatting-and-mounting-bricks.md)
- [Posix ACLS](./Access Control Lists.md)
- [Posix ACLS](./Access-Control-Lists.md)

View File

@@ -2,4 +2,4 @@ GlusterFS Tools
---------------
- [glusterfind](./glusterfind.md)
- [gfind missing files](./gfind_missing_files.md)
- [gfind missing files](./gfind-missing-files.md)

View File

@@ -70,7 +70,7 @@ Other notes:
being able to operate in a trusted environment without firewalls can
mean huge gains in performance, and is recommended. In case you absolutely
need to set up a firewall, have a look at
[Setting up clients](../Administrator Guide/Setting Up Clients.md) for
[Setting up clients](../Administrator-Guide/Setting-Up-Clients.md) for
information on the ports used.
Click here to [get started](../Quick-Start-Guide/Quickstart.md)

View File

@@ -7,7 +7,7 @@ such as compat-readline5
###### Community Packages
Packages are provided according to this [table](./Community_Packages.md).
Packages are provided according to this [table](./Community-Packages.md).
###### For Debian

View File

@@ -1,13 +1,19 @@
# Overview
### Purpose
The Install Guide (IG) is aimed at providing the sequence of steps needed for
setting up Gluster. It contains a reasonable degree of detail which helps an
administrator to understand the terminology, the choices and how to configure
the deployment to the storage needs of their application workload. The [Quick
Start Guide](../Quick-Start-Guide/Quickstart.md) (QSG) is designed to get a
deployment with default choices and is aimed at those who want to spend less
time to get to a deployment.
After you deploy Gluster by following these steps, we recommend that
you read the [Gluster Admin Guide](../Administrator Guide/index.md) (AG) to learn how to administer Gluster and
how to select a volume type that fits your needs. Also, be sure to
enlist the help of the Gluster community via the IRC or, Slack channels (see https://www.gluster.org/community/) or Q&A
section.
After you deploy Gluster by following these steps, we recommend that you read
the [Gluster Admin Guide](../Administrator-Guide/index.md) to learn how to
administer Gluster and how to select a volume type that fits your needs. Also,
be sure to enlist the help of the Gluster community via the IRC or Slack
channels (see https://www.gluster.org/community/) or Q&A section.
### Overview
@@ -109,4 +115,4 @@ In a perfect world, sure. Having the hardware be the same means less
troubleshooting when the fires start popping up. But plenty of people
deploy Gluster on mix and match hardware, and successfully.
Get started by checking some [Common Criteria](./Common_criteria.md)
Get started by checking some [Common Criteria](./Common-criteria.md)

View File

@@ -1,26 +1,25 @@
Upgrading GlusterFS
-------------------
- [About op-version](./op_version.md)
- [About op-version](./op-version.md)
If you are using GlusterFS version 5.x or above, you can upgrade it to the following:
- [Upgrading to 8](./upgrade_to_8.md)
- [Upgrading to 7](./upgrade_to_7.md)
- [Upgrading to 6](./upgrade_to_6.md)
- [Upgrading to 8](./upgrade-to-8.md)
- [Upgrading to 7](./upgrade-to-7.md)
- [Upgrading to 6](./upgrade-to-6.md)
If you are using GlusterFS version 4.x or above, you can upgrade it to the following:
- [Upgrading to 6](./upgrade_to_6.md)
- [Upgrading to 5](./upgrade_to_5.md)
- [Upgrading to 6](./upgrade-to-6.md)
- [Upgrading to 5](./upgrade-to-5.md)
If you are using GlusterFS version 3.4.x or above, you can upgrade it to following:
- [Upgrading to 3.5](./upgrade_to_3.5.md)
- [Upgrading to 3.6](./upgrade_to_3.6.md)
- [Upgrading to 3.7](./upgrade_to_3.7.md)
- [Upgrading to 3.9](./upgrade_to_3.9.md)
- [Upgrading to 3.10](./upgrade_to_3.10.md)
- [Upgrading to 3.11](./upgrade_to_3.11.md)
- [Upgrading to 3.12](./upgrade_to_3.12.md)
- [Upgrading to 3.13](./upgrade_to_3.13.md)
- [Upgrading to 3.5](./upgrade-to-3.5.md)
- [Upgrading to 3.6](./upgrade-to-3.6.md)
- [Upgrading to 3.7](./upgrade-to-3.7.md)
- [Upgrading to 3.9](./upgrade-to-3.9.md)
- [Upgrading to 3.10](./upgrade-to-3.10.md)
- [Upgrading to 3.11](./upgrade-to-3.11.md)
- [Upgrading to 3.12](./upgrade-to-3.12.md)
- [Upgrading to 3.13](./upgrade-to-3.13.md)

View File

@@ -1,6 +1,5 @@
# Generic Upgrade procedure
### Pre-upgrade notes
- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
@@ -21,27 +20,27 @@ This procedure involves upgrading **one server at a time**, while keeping the vo
# systemctl stop glusterd
# systemctl stop glustereventsd
# killall glusterfs glusterfsd glusterd
2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)
3. Install the new Gluster version; the example below shows how to create a repository on Fedora and use it to upgrade:
3.1 Create a private repository (assuming the /new-gluster-rpms/ folder contains the new rpms):
# createrepo /new-gluster-rpms/
3.2 Create the .repo file in /etc/yum.d/:
# cat /etc/yum.d/newglusterrepo.repo
[newglusterrepo]
name=NewGlusterRepo
baseurl="file:///new-gluster-rpms/"
gpgcheck=0
enabled=1
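Optionally, confirm that yum can see the new repository before upgrading (an illustrative check, not part of the original steps):

```console
# yum repolist | grep -i newglusterrepo
```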
3.3 Upgrade glusterfs; for example, to upgrade glusterfs-server to version x.y:
# yum update glusterfs-server-x.y.fc30.x86_64.rpm
4. Ensure that version reflects new-version in the output of,
@@ -78,7 +77,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
1. On every server in the trusted storage pool, stop all gluster services, either using the command below, or through other means,
```sh
# systemctl stop glusterd
# systemctl stop glustereventsd
# killall glusterfs glusterfsd glusterd
@@ -111,7 +110,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to new-version version as well
- Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set <volname> fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.
@@ -119,7 +118,7 @@ Perform the following steps post upgrading the entire trusted storage pool,
> **NOTE:** If you have ever enabled quota on your volumes then after the upgrade
is done, you will have to restart all the nodes in the cluster one by one so as to
fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols/<volname>/` directory.
The peers may go into `Peer rejected` state while doing so but once all the nodes are rebooted
everything will be back to normal.

View File

@@ -1,6 +1,6 @@
## Upgrade procedure to Gluster 3.10.0, from Gluster 3.9.x, 3.8.x and 3.7.x
### Pre-upgrade notes
- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -82,7 +82,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.10 version as well
### Upgrade procedure for clients

View File

@@ -2,7 +2,7 @@
**NOTE:** Upgrade procedure remains the same as with the 3.10 release
### Pre-upgrade notes
- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -88,7 +88,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.11 version as well
### Upgrade procedure for clients

View File

@@ -91,7 +91,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.12 version as well
### Upgrade procedure for clients

View File

@@ -81,7 +81,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.13 version as well
### Upgrade procedure for clients

View File

@@ -2,7 +2,7 @@
The steps to upgrade to Gluster 3.9 are the same as for upgrading to Gluster
3.8. Please follow the detailed instructions from [the 3.8 upgrade
guide](upgrade_to_3.8.md).
guide](upgrade-to-3.8.md).
Note that there is only a single difference, related to the `op-version`:

View File

@@ -81,7 +81,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 4.0 version as well
- Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set <volname> fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.

View File

@@ -95,7 +95,7 @@ This procedure involves cluster downtime and during the upgrade window, clients
### Post upgrade steps
Perform the following steps post upgrading the entire trusted storage pool,
- It is recommended to update the op-version of the cluster. Refer, to the [op-version](./op_version.md) section for further details
- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 4.1 version as well
- Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set <volname> fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.

View File

@@ -2,7 +2,7 @@
> **NOTE:** Upgrade procedure remains the same as with 4.1 release
Refer, to the [Upgrading to 4.1](./upgrade_to_4.1.md) guide and follow
Refer to the [Upgrading to 4.1](./upgrade-to-4.1.md) guide and follow
documented instructions, replacing 5 when you encounter 4.1 in the guide as the
version reference.

View File

@@ -5,7 +5,7 @@ aware of the features and fixes provided with the release.
> **NOTE:** Upgrade procedure remains the same as with 4.1.x release
Refer, to the [Upgrading to 4.1](./upgrade_to_4.1.md) guide and follow
Refer to the [Upgrading to 4.1](./upgrade-to-4.1.md) guide and follow
documented instructions, replacing 6 when you encounter 4.1 in the guide as the
version reference.

View File

@@ -5,13 +5,13 @@ aware of the features and fixes provided with the release.
> **NOTE:** Upgrade procedure remains the same as with 4.1.x release
Refer, to the [Upgrading to 4.1](./upgrade_to_4.1.md) guide and follow
Refer to the [Upgrading to 4.1](./upgrade-to-4.1.md) guide and follow
documented instructions, replacing 7 when you encounter 4.1 in the guide as the
version reference.
> **NOTE:** If you have ever enabled quota on your volumes then after the upgrade
is done, you will have to restart all the nodes in the cluster one by one so as to
fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols/<volname>/` directory.
The peers may go into `Peer rejected` state while doing so but once all the nodes are rebooted
everything will be back to normal.
@@ -43,6 +43,4 @@ upgrading the cluster.
### Deprecated translators and upgrade procedure for volumes using these features
[If you are upgrading from a release prior to release-6 be aware of deprecated xlators and functionality](https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/#deprecated-translators-and-upgrade-procedure-for-volumes-using-these-features).

View File

@@ -3,9 +3,9 @@
We recommend reading the [release notes for 8.0](../release-notes/8.0.md) to be
aware of the features and fixes provided with the release.
> **NOTE:** Before following the generic upgrade procedure, check out the "**Major Issues**" section given below.
Refer, to the [generic upgrade procedure](./Generic_Upgrade_procedure.md) guide and follow documented instructions.
Refer to the [generic upgrade procedure](./generic-upgrade-procedure.md) guide and follow the documented instructions.
## Major issues
@@ -30,7 +30,7 @@ If these are set, then unset them using the following commands,
# gluster volume reset <volname> <option>
```
### Make sure you are not using any of the following deprecated features:
- Block device (bd) xlator
- Decompounder feature
@@ -40,13 +40,11 @@ If these are set, then unset them using the following commands,
- Tiering support (tier xlator and changetimerecorder)
- Glupy
**NOTE:** Failure to do the above may result in failure during online upgrades,
and the reset of these options to their defaults needs to be done **prior** to
upgrading the cluster.
### Deprecated translators and upgrade procedure for volumes using these features
[If you are upgrading from a release prior to release-6 be aware of deprecated xlators and functionality](https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/#deprecated-translators-and-upgrade-procedure-for-volumes-using-these-features).

View File

(19 image files: the Before/After previews show identical dimensions and sizes; the images themselves are unchanged.)

View File

@@ -21,7 +21,7 @@ Install Guide.
**More Documentation**
- [Administration Guide](./Administrator Guide/index.md) - describes the configuration and management of GlusterFS.
- [Administration Guide](./Administrator-Guide/index.md) - describes the configuration and management of GlusterFS.
- [GlusterFS Developer Guide](./Developer-guide/Developers-Index.md) - describes how you can contribute to this open source project; built through the efforts of its dedicated, passionate community.
@@ -29,7 +29,7 @@ Install Guide.
- [Release Notes](./release-notes/index.md) - Glusterfs Release Notes provides high-level insight into the improvements and additions that have been implemented in various Glusterfs releases.
- [GlusterFS Tools](./GlusterFS Tools/README.md) - Guides for GlusterFS tools.
- [GlusterFS Tools](./GlusterFS-Tools/README.md) - Guides for GlusterFS tools.
- [Troubleshooting Guide](./Troubleshooting/README.md) - Guide for troubleshooting.

View File

@@ -129,4 +129,4 @@ The following features are experimental with this release:
### Upgrading to 3.6.X
Before upgrading to the 3.6 version of Gluster from 3.4.x or 3.5.x, please take a look at the following link:
[Upgrade Gluster to 3.6](../Upgrade-Guide/upgrade_to_3.6.md)
[Upgrade Gluster to 3.6](../Upgrade-Guide/upgrade-to-3.6.md)

View File

@@ -59,7 +59,7 @@ For more information refer [here](https://github.com/gluster/glusterfs-specs/blo
GlusterFind is a new tool that provides a mechanism to monitor data events within a volume. Detection of events like modified files is made easier without having to traverse the entire volume.
For more information refer [here](../GlusterFS Tools/glusterfind.md).
For more information refer [here](../GlusterFS-Tools/glusterfind.md).
### Rebalance Performance Improvements
@@ -163,4 +163,4 @@ For more information, see the 'Resolution of split-brain from the mount point' s
### Upgrading to 3.7.0
Instructions for upgrading from previous versions of GlusterFS are maintained on [this page](../Upgrade-Guide/upgrade_to_3.7.md).
Instructions for upgrading from previous versions of GlusterFS are maintained on [this page](../Upgrade-Guide/upgrade-to-3.7.md).

View File

@@ -8,66 +8,66 @@ docs_dir: docs
nav:
- Home: index.md
- Getting started with GlusterFS:
- Introduction: Administrator Guide/GlusterFS Introduction.md
- Introduction: Administrator-Guide/GlusterFS-Introduction.md
- Quick Start Guide: Quick-Start-Guide/Quickstart.md
- Architecture: Quick-Start-Guide/Architecture.md
- Install Guide:
- Overview: Install-Guide/Overview.md
- Common Criteria: Install-Guide/Common_criteria.md
- Setting up in virtual machines: Install-Guide/Setup_virt.md
- Setting up on physical servers: Install-Guide/Setup_Bare_metal.md
- Deploying in AWS: Install-Guide/Setup_aws.md
- Common Criteria: Install-Guide/Common-criteria.md
- Setting up in virtual machines: Install-Guide/Setup-virt.md
- Setting up on physical servers: Install-Guide/Setup-Bare-metal.md
- Deploying in AWS: Install-Guide/Setup-aws.md
- Install: Install-Guide/Install.md
- Community Packages: Install-Guide/Community_Packages.md
- Community Packages: Install-Guide/Community-Packages.md
- Configure: Install-Guide/Configure.md
- Administration Guide:
- Overview: Administrator Guide/overview.md
- Index: Administrator Guide/index.md
- Managing the Gluster Service: Administrator Guide/Start Stop Daemon.md
- Managing Trusted Storage Pools: Administrator Guide/Storage Pools.md
- Overview: Administrator-Guide/overview.md
- Index: Administrator-Guide/index.md
- Managing the Gluster Service: Administrator-Guide/Start-Stop-Daemon.md
- Managing Trusted Storage Pools: Administrator-Guide/Storage-Pools.md
- Setting Up Storage:
- Setting Up Storage : Administrator Guide/setting-up-storage.md
- Brick Naming Conventions: Administrator Guide/Brick Naming Conventions.md
- Formatting and Mounting Bricks: Administrator Guide/formatting-and-mounting-bricks.md
- Access Control Lists: Administrator Guide/Access Control Lists.md
- Handling of users that belong to many groups: Administrator Guide/Handling-of-users-with-many-groups.md
- Setting Up Volumes: Administrator Guide/Setting Up Volumes.md
- Setting Up Clients: Administrator Guide/Setting Up Clients.md
- Managing Volumes: Administrator Guide/Managing Volumes.md
- Building QEMU with gfapi For Debian Based Systems: Administrator Guide/Building QEMU With gfapi For Debian Based Systems.md
- GlusterFS Filter: Administrator Guide/GlusterFS Filter.md
- Logging: Administrator Guide/Logging.md
- Setting Up Storage : Administrator-Guide/setting-up-storage.md
- Brick Naming Conventions: Administrator-Guide/Brick-Naming-Conventions.md
- Formatting and Mounting Bricks: Administrator-Guide/formatting-and-mounting-bricks.md
- Access Control Lists: Administrator-Guide/Access-Control-Lists.md
- Handling of users that belong to many groups: Administrator-Guide/Handling-of-users-with-many-groups.md
- Setting Up Volumes: Administrator-Guide/Setting-Up-Volumes.md
- Setting Up Clients: Administrator-Guide/Setting-Up-Clients.md
- Managing Volumes: Administrator-Guide/Managing-Volumes.md
- Building QEMU with gfapi For Debian Based Systems: Administrator-Guide/Building-QEMU-With-gfapi-For-Debian-Based-Systems.md
- GlusterFS Filter: Administrator-Guide/GlusterFS-Filter.md
- Logging: Administrator-Guide/Logging.md
- Features:
- Setting Up Storage : Administrator Guide/setting-up-storage.md
- Geo Replication: Administrator Guide/Geo Replication.md
- Quotas: Administrator Guide/Directory Quota.md
- Snapshots: Administrator Guide/Managing Snapshots.md
- Trash: Administrator Guide/Trash.md
- Hook Scripts: Administrator Guide/Hook-scripts.md
- Monitoring Workload: Administrator Guide/Monitoring Workload.md
- Object Storage: Administrator Guide/Object Storage.md
- GlusterFS Cinder: Administrator Guide/GlusterFS Cinder.md
- GlusterFS Keystone Quickstart: Administrator Guide/GlusterFS Keystone Quickstart.md
- Gluster On ZFS: Administrator Guide/Gluster On ZFS.md
- Configuring Bareos to store backups on Gluster: Administrator Guide/Bareos.md
- SSL: Administrator Guide/SSL.md
- Puppet Gluster: Administrator Guide/Puppet.md
- RDMA Transport: Administrator Guide/RDMA Transport.md
- GlusterFS iSCSI: Administrator Guide/GlusterFS iSCSI.md
- Configuring NFS-Ganesha server: Administrator Guide/NFS-Ganesha GlusterFS Integration.md
- Linux Kernel Tuning: Administrator Guide/Linux Kernel Tuning.md
- Network Configuration Techniques: Administrator Guide/Network Configurations Techniques.md
- Performance Tuning: Administrator Guide/Performance Tuning.md
- Performance Testing: Administrator Guide/Performance Testing.md
- Export and Netgroup Authentication: Administrator Guide/Export And Netgroup Authentication.md
- Consul integration: Administrator Guide/Consul.md
- Split brain and ways to deal with it: Administrator Guide/Split brain and ways to deal with it.md
- Arbiter volumes and quorum options: Administrator Guide/arbiter-volumes-and-quorum.md
- Thin Arbiter volumes: Administrator Guide/Thin-Arbiter-Volumes.md
- Trash for GlusterFS: Administrator Guide/Trash.md
- Mandatory Locks: Administrator Guide/Mandatory Locks.md
- GlusterFS coreutilities: Administrator Guide/GlusterFS Coreutils.md
- Events APIs: Administrator Guide/Events APIs.md
- Setting Up Storage : Administrator-Guide/setting-up-storage.md
- Geo Replication: Administrator-Guide/Geo-Replication.md
- Quotas: Administrator-Guide/Directory-Quota.md
- Snapshots: Administrator-Guide/Managing-Snapshots.md
- Trash: Administrator-Guide/Trash.md
- Monitoring Workload: Administrator-Guide/Monitoring-Workload.md
- Object Storage: Administrator-Guide/Object-Storage.md
- GlusterFS Cinder: Administrator-Guide/GlusterFS-Cinder.md
- GlusterFS Keystone Quickstart: Administrator-Guide/GlusterFS-Keystone-Quickstart.md
- Gluster On ZFS: Administrator-Guide/Gluster-On-ZFS.md
- Configuring Bareos to store backups on Gluster: Administrator-Guide/Bareos.md
- SSL: Administrator-Guide/SSL.md
- Puppet Gluster: Administrator-Guide/Puppet.md
- RDMA Transport: Administrator-Guide/RDMA-Transport.md
- GlusterFS iSCSI: Administrator-Guide/GlusterFS-iSCSI.md
- Configuring NFS-Ganesha server: Administrator-Guide/NFS-Ganesha-GlusterFS-Integration.md
- Linux Kernel Tuning: Administrator-Guide/Linux-Kernel-Tuning.md
- Network Configuration Techniques: Administrator-Guide/Network-Configurations-Techniques.md
- Performance Testing: Administrator-Guide/Performance-Testing.md
- Performance Tuning: Administrator-Guide/Performance-Tuning.md
- Export and Netgroup Authentication: Administrator-Guide/Export-And-Netgroup-Authentication.md
- Consul integration: Administrator-Guide/Consul.md
- Split brain and ways to deal with it: Administrator-Guide/Split-brain-and-ways-to-deal-with-it.md
- Arbiter volumes and quorum options: Administrator-Guide/arbiter-volumes-and-quorum.md
- Thin Arbiter volumes: Administrator-Guide/Thin-Arbiter-Volumes.md
- Trash for GlusterFS: Administrator-Guide/Trash.md
- Mandatory Locks: Administrator-Guide/Mandatory-Locks.md
- GlusterFS coreutilities: Administrator-Guide/GlusterFS-Coreutils.md
- Events APIs: Administrator-Guide/Events-APIs.md
- Managing GlusterFS Volume Life-Cycle Extensions with Hook Scripts: Administrator-Guide/Hook-scripts.md
- CLI Reference:
- Overview: CLI-Reference/cli-main.md
- Presentations: presentations/index.md
@@ -83,6 +83,7 @@ nav:
- Backport Guidelines : Developer-guide/Backport-Guidelines.md
- Contributors Guide:
- Index: Contributors-Guide/Index.md
- Bug reporting guidelines: Contributors-Guide/Bug-Reporting-Guidelines.md
- Bug Triage : Contributors-Guide/Bug-Triage.md
- GlusterFS Release process : Contributors-Guide/GlusterFS-Release-process.md
- Guidelines For Maintainers : Contributors-Guide/Guidelines-For-Maintainers.md
@@ -92,22 +93,23 @@ nav:
- Tools: Ops-Guide/Tools.md
- Upgrade-Guide:
- Upgrade-Guide Index: Upgrade-Guide/README.md
- Op-version: Upgrade-Guide/op_version.md
- Upgrade to 8: Upgrade-Guide/upgrade_to_8.md
- Upgrade to 7: Upgrade-Guide/upgrade_to_7.md
- Upgrade to 6: Upgrade-Guide/upgrade_to_6.md
- Upgrade to 5: Upgrade-Guide/upgrade_to_5.md
- Upgrade to 4.1: Upgrade-Guide/upgrade_to_4.1.md
- Upgrade to 4.0: Upgrade-Guide/upgrade_to_4.0.md
- Upgrade to 3.13: Upgrade-Guide/upgrade_to_3.13.md
- Upgrade to 3.12: Upgrade-Guide/upgrade_to_3.12.md
- Upgrade to 3.11: Upgrade-Guide/upgrade_to_3.11.md
- Upgrade to 3.10: Upgrade-Guide/upgrade_to_3.10.md
- Upgrade to 3.9: Upgrade-Guide/upgrade_to_3.9.md
- Upgrade to 3.8: Upgrade-Guide/upgrade_to_3.8.md
- Upgrade to 3.7: Upgrade-Guide/upgrade_to_3.7.md
- Upgrade to 3.6: Upgrade-Guide/upgrade_to_3.6.md
- Upgrade to 3.5: Upgrade-Guide/upgrade_to_3.5.md
- Op-version: Upgrade-Guide/op-version.md
- Generic upgrade procedure: Upgrade-Guide/generic-upgrade-procedure.md
- Upgrade to 8: Upgrade-Guide/upgrade-to-8.md
- Upgrade to 7: Upgrade-Guide/upgrade-to-7.md
- Upgrade to 6: Upgrade-Guide/upgrade-to-6.md
- Upgrade to 5: Upgrade-Guide/upgrade-to-5.md
- Upgrade to 4.1: Upgrade-Guide/upgrade-to-4.1.md
- Upgrade to 4.0: Upgrade-Guide/upgrade-to-4.0.md
- Upgrade to 3.13: Upgrade-Guide/upgrade-to-3.13.md
- Upgrade to 3.12: Upgrade-Guide/upgrade-to-3.12.md
- Upgrade to 3.11: Upgrade-Guide/upgrade-to-3.11.md
- Upgrade to 3.10: Upgrade-Guide/upgrade-to-3.10.md
- Upgrade to 3.9: Upgrade-Guide/upgrade-to-3.9.md
- Upgrade to 3.8: Upgrade-Guide/upgrade-to-3.8.md
- Upgrade to 3.7: Upgrade-Guide/upgrade-to-3.7.md
- Upgrade to 3.6: Upgrade-Guide/upgrade-to-3.6.md
- Upgrade to 3.5: Upgrade-Guide/upgrade-to-3.5.md
- Release Notes:
- index: release-notes/index.md
- RELEASE 8.x:
@@ -212,9 +214,9 @@ nav:
- 3.5.1: release-notes/3.5.1.md
- 3.5.0: release-notes/3.5.0.md
- GlusterFS Tools:
- GlusterFS Tools List: GlusterFS Tools/README.md
- glusterfind: GlusterFS Tools/glusterfind.md
- gfind missing files: GlusterFS Tools/gfind_missing_files.md
- GlusterFS Tools List: GlusterFS-Tools/README.md
- glusterfind: GlusterFS-Tools/glusterfind.md
- gfind missing files: GlusterFS-Tools/gfind-missing-files.md
- Troubleshooting Guide:
- Index: Troubleshooting/README.md
- Components: