mirror of https://github.com/gluster/glusterdocs.git synced 2026-02-05 15:47:01 +01:00

Incremental check-in

Running through the topics and cleaning up formatting and applying
templates where possible. Note to future me: search on <!---> tags for
comments on future improvements.
H. Waterhouse
2017-05-09 11:43:21 -05:00
parent 51f1a6b9f2
commit ebfddd8f00
14 changed files with 171 additions and 170 deletions

1
.gitignore vendored
View File

@@ -1,3 +1,4 @@
site/
env/
.env
mkdocs_broken.yml

View File

@@ -1,13 +1,4 @@
# Configuring Bareos to store backups on Gluster
[Bareos](http://bareos.org) contains a plugin for the Storage Daemon that uses
`libgfapi`. This makes it possible for Bareos to access the Gluster Volumes
without the need to have a FUSE mount available.
@@ -18,13 +9,16 @@ together with the Bareos Storage Daemon. In the example, there is a File Daemon
running on the same server. This makes it possible to back up the Bareos
Director, which is useful because a backup of the Bareos database and
configuration is kept that way.
## Prerequisites
- Configured, operational Gluster environment
- Round Robin DNS name that can be used to contact any available GlusterD process (example: `storage.example.org`)
- Gluster volume (example: `backups`)
- Client system access to the mounted volume (FUSE command: `mount -t glusterfs storage.example.org:/backups /mnt`)
# Bareos Installation
An absolute minimal Bareos installation needs a Bareos Director and a Storage
Daemon. To back up a filesystem, Bareos also needs a File Daemon. For the
description in this document, CentOS-7 was used, with the
following packages and versions:
- [glusterfs-3.7.4](http://download.gluster.org)
@@ -32,36 +26,39 @@ following packages and versions:
The Gluster Storage Servers do not need to have any Bareos packages installed.
It is often better to keep applications (Bareos) and storage servers on
different systems.
### To install Bareos on an application server
1. Configure the Bareos repository
2. Install the packages on the `backup.example.org` server:
```
yum install bareos-director bareos-database-sqlite3 \
    bareos-storage-glusterfs bareos-filedaemon \
    bareos-bconsole
```
3. In this example, SQLite is used to keep things as simple as possible. For production
deployments either MySQL or PostgreSQL is advised. Create the
initial database:
```
# sqlite3 /var/lib/bareos/bareos.db < /usr/lib/bareos/scripts/ddl/creates/sqlite3.sql
# chown bareos:bareos /var/lib/bareos/bareos.db
# chmod 0660 /var/lib/bareos/bareos.db
```
4. The `bareos-bconsole` package is optional. `bconsole` is a terminal application
that can be used to initiate backups, check the status of different Bareos
components and the like. Testing the configuration with `bconsole` is
relatively simple.
### To start Bareos on an application server
1. Once the packages are installed, start and enable the daemons:
```
# systemctl start bareos-sd
# systemctl start bareos-fd
# systemctl start bareos-dir
# systemctl enable bareos-sd
# systemctl enable bareos-fd
# systemctl enable bareos-dir
```
# Gluster Volume preparation
There are a few steps needed to allow Bareos to access the Gluster Volume. By
@@ -71,48 +68,49 @@ need to be opened up.
There are two processes involved when a client accesses a Gluster Volume. In
the initial phase, GlusterD is contacted; once the client has received the
layout of the volume, it connects to the bricks directly.
### To allow unprivileged processes to connect
1. In `/etc/glusterfs/glusterd.vol`, add the `rpc-auth-allow-insecure on`
option on all storage servers (a sample snippet is shown after this list).
2. After you modify the configuration file, restart the GlusterD process by entering
`systemctl restart glusterd`.
3. Execute `gluster volume set backups server.allow-insecure on` to configure the brick processes.
4. Some versions of Gluster require a volume stop/start
before the option is taken into account, for these versions you will need to
execute `gluster volume stop backups` and `gluster volume start backups`.
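For reference, a minimal sketch of what the modified `/etc/glusterfs/glusterd.vol` might look like (the surrounding options vary by Gluster version and distribution; only the added `rpc-auth-allow-insecure` line comes from this guide):
```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    # added so that unprivileged clients (such as libgfapi users) may connect
    option rpc-auth-allow-insecure on
end-volume
```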
In addition to the network permissions, the Bareos Storage Daemon needs to be
allowed to write to the filesystem provided by the Gluster Volume.
### To set permissions allowing the storage daemon to write
1. Set normal UNIX permissions/ownership so that the correct
user/group can write to the volume:
```
# mount -t glusterfs storage.example.org:/backups /mnt
# mkdir /mnt/bareos
# chown bareos:bareos /mnt/bareos
# chmod ug=rwx /mnt/bareos
# umount /mnt
```
NOTE: Depending on how users/groups are maintained in the environment, the `bareos`
user and group may not be available on the storage servers. If that is the
case, the `chown` command above can be adapted to use the `uid` and `gid` of
the `bareos` user and group from `backup.example.org`. On the Bareos server,
the output would look similar to:
```
# id bareos
uid=998(bareos) gid=997(bareos) groups=997(bareos),6(disk),30(tape)
```
And that makes the `chown` command look like this:
```
# chown 998:997 /mnt/bareos
```
<!--- That is a lot of information for a side item. Consider handling differently.--->
# Bareos Configuration
When `bareos-storage-glusterfs` was installed, an example configuration file
was also added. The `/etc/bareos/bareos-sd.d/device-gluster.conf` contains
the `Archive Device` directive, which is a URL for the Gluster Volume and the
path where the backups should be stored. In our example, set the entry to:
@@ -126,11 +124,11 @@ to:
}
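The full device resource is mostly elided by the diff context above. For orientation, a typical `device-gluster.conf` entry resembles the following sketch; the `Name` and `Archive Device` values match this example, while the remaining directives are assumptions based on common gfapi storage backend settings:
```
Device {
  Name = GlusterStorage
  Archive Device = gluster://storage.example.org/backups/bareos
  Device Type = gfapi
  Media Type = GlusterFile
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```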
The default configuration of the Bareos provided jobs is to write backups to
`/var/lib/bareos/storage`.
### To write all the backups to the Gluster Volume
1. Modify the configuration for the Bareos Director.
2. In the `/etc/bareos/bareos-dir.conf` configuration, change the defaults for all jobs to use the `GlusterFile` storage:
```
JobDefs {
Name = "DefaultJob"
...
@@ -138,36 +136,37 @@ can be changed to use the `GlusterFile` storage:
Storage = GlusterFile
...
}
```
3. After changing the configuration files, instruct the Bareos processes to `reload` their configuration:
```
# bconsole
Connecting to Director backup:9101
1000 OK: backup-dir Version: 14.2.2 (12 December 2014)
Enter a period to cancel a command.
*reload
```
4. Use `bconsole` to check if the configuration has been
applied. The `status` command can be used to show the URL of the storage that
is configured. When everything is set up correctly, the result looks like this:
```
*status storage=GlusterFile
Connecting to Storage daemon GlusterFile at backup:9103
...
Device "GlusterStorage" (gluster://storage.example.org/backups/bareos) is not open.
...
```
# Create your first backup
There are several default jobs configured in the Bareos Director.
1. Run the `DefaultJob`, which was modified in an earlier step. This job uses the
`SelfTest` FileSet, which backs up `/usr/sbin`. Running this job will verify if
the configuration is working correctly. Additional jobs, other FileSets and
more File Daemons (clients that get backed up) can be added later.
```
*run
A job name must be specified.
The defined Job resources are:
@@ -182,7 +181,7 @@ more File Daemons (clients that get backed up) can be added later.
...
OK to run? (yes/mod/no): yes
Job queued. JobId=1
```
The job will need a few seconds to complete; the `status` command can be used
to show the progress. Once done, the `messages` command will display the
result:
@@ -194,16 +193,18 @@ result:
...
Termination: Backup OK
The archive that contains the backup is located on the Gluster Volume.
2. To check if the file is available, mount the volume on a storage server:
```
# mount -t glusterfs storage.example.org:/backups /mnt
# ls /mnt/bareos
```
# Further Reading
This document provides a quick start for configuring Bareos to use
Gluster as a storage backend. Bareos can be configured to create backups of
different clients (which run a File Daemon), run jobs at scheduled times and
intervals and much more. The excellent [Bareos

View File

@@ -1,3 +1,5 @@
<!---This is a weird introduction. Is this page a continuation of something?--->
# Naming standards for bricks
FHS-2.3 isn't entirely clear on where data shared by the server should reside. It does state that "_/srv contains site-specific data which is served by this system_", but is GlusterFS data site-specific?
The consensus seems to lean toward using `/data`. A good hierarchical method for placing bricks is:
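For illustration, one commonly used layout (the exact paths are a convention and an assumption here, not a requirement) is a per-volume, per-brick hierarchy under `/data`:
```
/data/glusterfs/<volume>/<brick>/brick

# for example, the first brick of a volume named myvol1:
/data/glusterfs/myvol1/brick1/brick
```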

View File

@@ -1,24 +1,21 @@
# How to build QEMU with gfapi for Debian-based systems
This how-to has been tested on Ubuntu 13.10 in a clean, up-to-date
environment. Older Ubuntu distros required some hacks. Other Debian-based distros should be able to follow this,
adjusting for dependencies. Please update this if you get it working on
another distro.
## Satisfying dependencies
### To get the qemu dependencies
1. Enter ```apt-get build-dep qemu``` to make the first pass at getting the qemu dependencies.
2. Get the remaining dependencies specified in the debian
control file, as asked for from upstream Debian sid. You can look into the
options specified there and adjust to your needs.
```apt-get install devscripts quilt libiscsi-dev libusbredirparser-dev libssh2-1-dev libvdeplug-dev libjpeg-dev glusterfs*```
3. Get a newer version of libseccomp, which is needed for Ubuntu 13.10:
```
mkdir libseccomp
cd libseccomp
# grab it from upstream sid
@@ -36,14 +33,14 @@ we need a newer version of libseccomp for Ubuntu 13.10
cd ..
# install it
dpkg -i *.deb
```
### To build QEMU
For the
advanced reader: look around debian/control once it is extracted, before
you install, as you may want to change which options QEMU is built with
and which targets are requested.
1. Enter the following commands. Comments are noted with `#`.
```
cd ..
mkdir qemu
cd qemu
@@ -74,6 +71,5 @@ and what targets are requested.
# build packages
debuild -i -us -uc -b
cd ..
```
2. Your debs are now available to install. Determine which targets you want to install.

View File

@@ -1,7 +1,5 @@
# How to compile GlusterFS RPMs from git source, for RHEL/CentOS, and Fedora
Creating RPMs of GlusterFS from git source is fairly easy, once you know the steps.
RPMs can be compiled on at least the following operating systems:
@@ -16,19 +14,19 @@ Specific instructions for compiling are below. If you're using:
- CentOS 6.x - Follow the CentOS 6.x steps, then do all of the Common steps.
- RHEL 6.x - Follow the RHEL 6.x steps, then do all of the Common steps.
NOTE - these instructions have been explicitly tested on all of CentOS 5.10, RHEL 6.4, CentOS 6.4+, and Fedora 16-20. Other releases of RHEL/CentOS and Fedora may work too, but haven't been tested. Please update this page appropriately if you do so.
### Preparation steps for Fedora 16-20 (only)
1. Install gcc, the python development headers, and python setuptools:
```$ sudo yum -y install gcc python-devel python-setuptools```
2. If you're compiling GlusterFS version 3.4, then install python-swiftclient. Other GlusterFS versions don't need it:
```$ sudo easy_install simplejson python-swiftclient```
Now follow the **Common Steps** below.
### Preparation steps for CentOS 5.x (only)
@@ -36,13 +34,14 @@ You'll need EPEL installed first and some CentOS specific packages. The commands
1. Install EPEL first:
```$ curl -OL http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm```
```$ sudo yum -y install epel-release-5-4.noarch.rpm --nogpgcheck```
2. Install the packages required only on CentOS 5.x:
```$ sudo yum -y install buildsys-macros gcc ncurses-devel python-ctypes python-sphinx10 redhat-rpm-config```
Now follow through the **Common Steps** part below.
@@ -52,11 +51,11 @@ You'll need EPEL installed first and some CentOS specific packages. The commands
1. Install EPEL first:
```$ sudo yum -y install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm```
2. Install the packages required only on CentOS:
```$ sudo yum -y install python-webob1.0 python-paste-deploy1.5 python-sphinx10 redhat-rpm-config```
Now follow through the **Common Steps** part below.
@@ -66,12 +65,11 @@ You'll need EPEL installed first and some RHEL specific packages. The 2 commands
1. Install EPEL first:
```$ sudo yum -y install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm```
2. Install the packages required only on RHEL:
```$ sudo yum -y --enablerepo=rhel-6-server-optional-rpms install python-webob1.0 python-paste-deploy1.5 python-sphinx10 redhat-rpm-config```
Now follow through the **Common Steps** part below.
@@ -81,26 +79,26 @@ These steps are for both Fedora and RHEL/CentOS. At the end you'll have the comp
**NOTES for step 1 below:**
- If you're on RHEL/CentOS 5.x and get a message about lvm2-devel not being available, it's ok. You can ignore it.
- If you're on RHEL/CentOS 6.x and get any messages about python-eventlet, python-netifaces, python-sphinx and/or pyxattr not being available, it's ok. You can ignore them.
1. Install the needed packages
```
$ sudo yum -y --disablerepo=rhs* --enablerepo=*optional-rpms install git autoconf \
    automake bison dos2unix flex fuse-devel glib2-devel libaio-devel \
    libattr-devel libibverbs-devel librdmacm-devel libtool libxml2-devel lvm2-devel make \
    openssl-devel pkgconfig pyliblzma python-devel python-eventlet python-netifaces \
    python-paste-deploy python-simplejson python-sphinx python-webob pyxattr readline-devel \
    rpm-build systemtap-sdt-devel tar libcmocka-devel
```
2. Clone the GlusterFS git repository
```
$ git clone git://git.gluster.org/glusterfs
$ cd glusterfs
```
3. Choose which branch to compile
If you want to compile the latest development code, you can skip this step and go on to the next one.
If instead you want to compile the code for a specific release of GlusterFS (such as v3.4), get the list of release names here:
@@ -113,9 +111,9 @@ If instead you want to compile the code for a specific release of GlusterFS (suc
  remotes/origin/release-3.4
  remotes/origin/release-3.5
Then switch to the correct release using the git "checkout" command, and the name of the release after the "remotes/origin/" bit from the list above:
```$ git checkout release-3.4```
**NOTE -** The CentOS 5.x instructions have only been tested for the master branch in GlusterFS git. It is unknown (yet) if they work for branches older than release-3.5.
@@ -127,10 +125,11 @@ Now you're ready to compile Gluster:
$ ./configure --enable-fusermount
$ make dist
5. Create the GlusterFS RPMs
```
$ cd extras/LinuxRPM
$ make glusterrpms
```
That should complete with no errors, leaving you with a directory containing the RPMs.
@@ -149,3 +148,5 @@ That should complete with no errors, leaving you with a directory containing the
-rw-rw-r-- 1 jc jc  123065 Mar  2 12:17 glusterfs-regression-tests-3git-1.el5.centos.x86_64.rpm
-rw-rw-r-- 1 jc jc   16224 Mar  2 12:17 glusterfs-resource-agents-3git-1.el5.centos.x86_64.rpm
-rw-rw-r-- 1 jc jc  654043 Mar  2 12:17 glusterfs-server-3git-1.el5.centos.x86_64.rpm
<!--- And then what? --->

View File

@@ -1,4 +1,4 @@
# Using the Gluster Console Manager Command Line Utility
The Gluster Console Manager is a single command line utility that
simplifies configuration and management of your storage environment. The
@@ -17,34 +17,34 @@ You can also use the commands to create scripts for automation, as well
as use the commands as an API to allow integration with third-party
applications.
## Running the Gluster Console Manager
You can run the Gluster Console Manager on any GlusterFS server either
by invoking the commands or by running the Gluster CLI in interactive
mode. You can also use the gluster command remotely using SSH.
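For the remote case, a gluster command can simply be wrapped in `ssh`; the host name below is a placeholder:
```
ssh root@gluster-server.example.com gluster peer status
```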
### To run commands directly
`# gluster peer <command>`
For example:
`# gluster peer status`
### To run the Gluster Console Manager in interactive mode
`# gluster`
You can execute gluster commands from the Console Manager prompt:
`gluster>`
For example, to view the status of the peer server:
`# gluster`
`gluster > peer status`
Display the status of the peer.
<!---And then what --->

View File

@@ -1,4 +1,4 @@
# Managing Directory Quota
Directory quotas in GlusterFS allow you to set limits on usage of the disk
space by directories or volumes. The storage administrators can control
@@ -24,13 +24,12 @@ You can set the quota at the following levels:
> You can set the disk limit on the directory even if it is not created.
> The disk limit is enforced immediately after creating that directory.
## Enabling Quota
You must enable Quota to set disk limits.
### To enable quota
1. Use the following command to enable quota:
# gluster volume quota <VOLNAME> enable
@@ -39,13 +38,12 @@ You must enable Quota to set disk limits.
# gluster volume quota test-volume enable
Quota is enabled on /test-volume
## Disabling Quota
You can disable Quota, if needed.
### To disable quota
1. Use the following command to disable quota:
# gluster volume quota <VOLNAME> disable
@@ -54,16 +52,15 @@ You can disable Quota, if needed.
# gluster volume quota test-volume disable
Quota translator is disabled on /test-volume
## Setting or Replacing Disk Limit
You can create new directories in your storage environment and set the
disk limit, or set the disk limit for existing directories. The directory
name should be relative to the volume with the export directory/mount
being treated as "/".
### To set or replace disk limit
1. Set the disk limit using the following command:
# gluster volume quota <VOLNAME> limit-usage /<directory> <limit-value>
@@ -82,14 +79,14 @@ being treated as "/".
> quota is disabled. This mount point is being used by quota to set
> and display limits and lists respectively.
## Displaying Disk Limit Information
You can display disk limit information on all the directories on which
the limit is set.
### To display disk limit information
1. Display disk limit information of all the directories on which a limit
is set, using the following command:
# gluster volume quota <VOLNAME> list
@@ -100,7 +97,7 @@ the limit is set.
/Test/data 10 GB 6 GB
/Test/data1 10 GB 4 GB
2. Display disk limit information on a particular directory on which a
limit is set, using the following command:
# gluster volume quota <VOLNAME> list /<directory>
@@ -110,7 +107,7 @@ the limit is set.
# gluster volume quota test-volume list /data
/Test/data 10 GB 6 GB
### Displaying Quota Limit Information Using the df Utility
You can create a report of the disk usage using the df utility by taking quota limits into consideration. To generate a report, run the following command:
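The command itself falls outside the diff context shown here; based on the option discussed just below, it is expected to be the `quota-deem-statfs` volume option:
```
# gluster volume set <VOLNAME> quota-deem-statfs on
```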
@@ -155,7 +152,7 @@ Disk usage for volume test-volume as seen on client1:
When set to on, the quota-deem-statfs option allows the administrator to make the user view the hard limit set on a directory as the total disk space available on it.
## Updating Memory Cache Size
### Setting Timeout
@@ -176,9 +173,8 @@ force fetching of directory sizes from server for every operation that
modifies file data and will effectively disable directory size caching
on the client side.
#### To update the memory cache size
1. Use the following command to update the memory cache size:
# gluster volume set <VOLNAME> features.quota-timeout <value>
@@ -188,13 +184,12 @@ on client side.
# gluster volume set test-volume features.quota-timeout 5
Set volume successful
## Setting Alert Time
Alert time is the frequency at which you want your usage information to be logged after you reach the soft limit.
### To set the alert time
1. Use the following command to set the alert time:
# gluster volume quota VOLNAME alert-time time
@@ -207,13 +202,12 @@ Alert time is the frequency at which you want your usage information to be logge
# gluster volume quota test-volume alert-time 1d
volume quota : success
## Removing Disk Limit
You can remove a disk limit that was set, if you do not want quota anymore.
### To remove disk limit
1. Use the following command to remove the disk limit set on a particular directory:
# gluster volume quota <VOLNAME> remove /<directory>

View File

@@ -2,7 +2,7 @@
*New in version 3.9*
## Set PYTHONPATH (Only in case of Source installation)
If Gluster is installed using source install, `cliutils` will get
installed under `/usr/local/lib/python2.7/site-packages`. Set
PYTHONPATH by adding a line in `~/.bashrc`.
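A typical line for this would be similar to the following; the exact site-packages path is an assumption and depends on the Python version and install prefix:
```
export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH
```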
@@ -34,7 +34,7 @@ SysVInit(CentOS 6),
## Status
Status can be checked using,
gluster-eventsapi status
server which listens on a URL; this can be deployed outside of the
Cluster. Gluster nodes should be able to access this Webhook server on
the configured port.
Example Webhook written in Python:
from flask import Flask, request
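The rest of the example is cut off by the diff context above. A minimal working webhook along the same lines might look like this; the route path and port are illustrative assumptions:
```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/listen", methods=["POST"])
def events_listener():
    # glustereventsd POSTs each event as a JSON document
    gluster_event = request.json
    if gluster_event is None:
        # Nothing to process; may be a test call
        return "OK"
    # Handle the event here, for example by logging it
    print(gluster_event)
    return "OK"

app.run(host="0.0.0.0", port=9000)
```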
@@ -150,7 +150,7 @@ Example,
## Configuration
View all configurations using,
usage: gluster-eventsapi config-get [-h] [--name NAME]
@@ -187,7 +187,7 @@ Example output,
| node2 | UP | OK |
+-----------+-------------+-------------+
To reset any configuration,
usage: gluster-eventsapi config-reset [-h] name
@@ -197,7 +197,7 @@ To Reset any configuration,
optional arguments:
-h, --help show this help message and exit
Example output,
+-----------+-------------+-------------+
| NODE | NODE STATUS | SYNC STATUS |
@@ -209,14 +209,14 @@ Example output,
**Note**: If any node status is not UP or sync status is not OK, make
sure to run `gluster-eventsapi sync` from a peer node.
## Add a node to the Cluster
When a new node is added to the cluster,
- Enable and start Eventsd on the new node using the steps mentioned above
- Run `gluster-eventsapi sync` command from a peer node other than the new node.
## API documentation
Glustereventsd pushes the Events in JSON format to configured
Webhooks. All Events will have the following attributes.
@@ -587,3 +587,5 @@ VOLUME_REBALANCE_START | volume | Volume Name
VOLUME_REBALANCE_STOP | volume | Volume Name
VOLUME_REBALANCE_FAILED | volume | Volume Name
VOLUME_REBALANCE_COMPLETE | volume | Volume Name
<!---This is good stuff, we need to make sure it's indexed properly.--->

View File

@@ -2,7 +2,7 @@
The arbiter volume is a special subset of replica volumes that is aimed at
preventing split-brains and providing the same consistency guarantees as a normal
replica 3 volume without consuming 3 times as much space.
<!-- TOC depthFrom:1 depthTo:6 withLinks:1 updateOnSave:1 orderedList:0 -->
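For a quick illustration, an arbiter volume is created by adding `arbiter 1` to a replica 3 create command; the hostnames and brick paths below are placeholders:
```
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/brick1 server2:/bricks/brick2 server3:/bricks/arbiter
```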

0
Concepts-Guide/Design.md Normal file
View File

View File

View File

View File

View File

@@ -13,6 +13,10 @@ pages:
- Terminologies: Quick-Start-Guide/Terminologies.md
- Glossary: Administrator Guide/glossary.md
- Architecture: Quick-Start-Guide/Architecture.md
- Design: Concepts-Guide/Design.md
- Security: Concepts-Guide/Security.md
- Scaling: Concepts-Guide/Scaling.md
- Resiliency: Concepts-Guide/Resiliency.md
- Presentations: presentations/index.md
- Installation Guide:
- Overview: Install-Guide/Overview.md