Long story short, instead of using standard hostnames and relying on official DNS servers which we may not control,
we can use consul to resolve services under the ``.consul`` domain, which turns this classic setup:
```bash
mount -t glusterfs -o backupvolfile-server=gluster-poc-02 gluster-poc-01:/g0 /mnt/gluster/g0
```
into a more convenient entry:
```bash
mount -t glusterfs gluster.service.consul:/g0 /mnt/gluster/g0
```
which is especially useful when using image-based servers without further provisioning, and spreads load across all healthy servers registered in consul.
Consul serves DNS on port 8600 by default, but standard resolver tools will not accept a ``host:port`` address - below, dig treats ``127.0.0.1:8600`` as a name to resolve and gets no answer:
```text
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;127.0.0.1:8600. IN A
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat May 20 08:50:21 UTC 2017
;; MSG SIZE rcvd: 32
```
Now, to be able to use it at the system level, we want it to work without specifying the port.
We can achieve this by running consul on port 53 (not advised), by redirecting network traffic from port 53 to 8600, or by proxying it via a local DNS resolver - for example, a locally installed dnsmasq.
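For the redirect option, a minimal iptables sketch could look like this (assuming consul's DNS on its default port 8600; the rest of this post uses the dnsmasq approach instead):
```bash
# redirect locally generated DNS traffic from port 53 to consul's DNS port 8600
iptables -t nat -A OUTPUT -d localhost -p udp -m udp --dport 53 -j REDIRECT --to-ports 8600
iptables -t nat -A OUTPUT -d localhost -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 8600
```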
First, install dnsmasq, and add file ``/etc/dnsmasq.d/10-consul``:
```text
server=/consul/127.0.0.1#8600
```
This ensures that all ``*.consul`` requests are forwarded to the local consul agent listening on its default DNS port 8600.
Make sure that ``/etc/resolv.conf`` contains ``nameserver 127.0.0.1``. Under Debian-based distros it should already be there; under RedHat-based ones it usually is not. You can fix this in two ways; choose on your own which one to apply:
* add ``nameserver 127.0.0.1`` to ``/etc/resolvconf/resolv.conf.d/header``
or
* update ``/etc/dhcp/dhclient.conf`` and add to it line ``prepend domain-name-servers 127.0.0.1;``.
In both cases this ensures that dnsmasq will be the first nameserver; reload the resolver or restart networking for the change to take effect.
Eventually you should have ``nameserver 127.0.0.1`` as the first entry in ``/etc/resolv.conf``, and DNS should resolve consul entries:
```text
$ dig consul.service.consul
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;consul.service.consul. IN A
;; ANSWER SECTION:
consul.service.consul. 0 IN A 172.30.64.198
consul.service.consul. 0 IN A 172.30.82.255
consul.service.consul. 0 IN A 172.30.81.155
;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat May 20 09:01:12 UTC 2017
;; MSG SIZE rcvd: 87
```
From now on we should be able to use ``<servicename>.service.consul`` in places where we previously had FQDNs of individual servers.
Next, we must define the gluster service in consul on the servers.
## Consul agent on Linux on gluster servers
Install the consul agent as described in the previous section.
We will define a consul service named ``gluster`` with health checks; for those checks we must allow the ``consul`` user to execute certain sudo commands without a password:
``/etc/sudoers.d/99-consul`` (note that sudo skips files in ``/etc/sudoers.d`` whose names contain a dot, so avoid a ``.conf`` suffix):
```text
consul ALL=(ALL) NOPASSWD: /sbin/gluster pool list
```
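To verify the sudo rule before wiring it into consul, you can run the check command as the ``consul`` user (assuming the package created that user):
```bash
sudo -u consul sudo -n /sbin/gluster pool list
```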
First, let's define the service in consul. It will be very basic, without volume names:
service name ``gluster``, default port 24007, tagged as ``gluster`` and ``server``.
Our service will have [service health checks](https://www.consul.io/docs/agent/checks.html) every 10s:
* check if the gluster service is responding to TCP on 24007 port
* check if the gluster server is connected to other peers in the pool (to avoid registering the service as healthy when it is actually not serving anything)
Below is an example of ``/etc/consul/service_gluster.json``:
```json
{
  "service": {
    "address": "",
    "checks": [
      {
        "interval": "10s",
        "tcp": "localhost:24007",
        "timeout": "5s"
      },
      {
        "interval": "10s",
        "script": "/bin/bash -c \"sudo -n /sbin/gluster pool list |grep -v UUID|grep -v localhost|grep Connected\"",
        "timeout": "5s"
      }
    ],
    "enableTagOverride": false,
    "id": "gluster",
    "name": "gluster",
    "port": 24007,
    "tags": [
      "gluster",
      "server"
    ]
  }
}
```
Restart the consul service and you should see the gluster servers in the consul web UI.
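For example, assuming consul runs under systemd (``consul reload`` alone also re-reads service definitions):
```bash
sudo systemctl restart consul
```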
After a while the service should reach a healthy state and be resolvable via nslookup as ``gluster.service.consul``.
Notice that a gluster server can also be a gluster client, for example when we want to mount the gluster volume on the servers themselves.
## Mounting gluster volume under Linux
As the mount source we would usually pick one of the gluster servers, with another as a backup server, like this:
```bash
mount -t glusterfs -o backupvolfile-server=gluster-poc-02 gluster-poc-01:/g0 /mnt/gluster/g0
```
This is a bit inconvenient: for example, if we have an image with hardcoded hostnames and the old servers are gone due to maintenance,
we would have to recreate the image, or reconfigure existing nodes whenever they unmount the gluster storage.
To mitigate that issue we can now use consul to fetch a healthy server from the pool:
```bash
mount -t glusterfs gluster.service.consul:/g0 /mnt/gluster/g0
```
So we can add that entry to ``/etc/fstab`` or one of the ``autofs`` configuration files.
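For reference, a minimal ``/etc/fstab`` entry could look like this (a sketch reusing the volume and mount point from the examples above):
```text
gluster.service.consul:/g0 /mnt/gluster/g0 glusterfs defaults,_netdev 0 0
```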
# Windows setup
## Configuring gluster servers as samba shares
This is the simplest and least secure setup - you have been warned.
Proper setup suggests using LDAP or [CTDB](https://ctdb.samba.org/).
You can configure it with puppet using module [kakwa-samba](https://github.com/kakwa/puppet-samba).
First, we want to reconfigure the gluster servers so that they serve samba shares using user/password credentials, which are separate from the Linux credentials.
We assume that the windows share will be accessed as user ``steve`` with password ``steve-loves-bacon``; make sure you create that user on each gluster server host.
```bash
sudo adduser steve
sudo smbpasswd -a steve
```
Notice that if you do not set ``user.smb = disable`` on the gluster volume, it may auto-add itself to the samba configuration. So better disable this by executing:
```bash
gluster volume set g0 user.smb disable
```
Now install the ``samba`` and ``samba-vfs-glusterfs`` packages and configure ``/etc/samba/smb.conf`` (an example share is sketched after this list):
* when using the vfs plugin, ``path`` is a path relative to the root of the gluster volume.
* ``kernel share modes = no`` may be required to make it work.
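A minimal share section might look like the sketch below, based on the ``vfs_glusterfs`` documentation; the volume ``g0`` and user ``steve`` follow the examples above:
```text
[g0]
vfs objects = glusterfs
glusterfs:volume = g0
glusterfs:logfile = /var/log/samba/glusterfs-g0.log
path = /
read only = no
kernel share modes = no
valid users = steve
```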
We can also use classic fuse mount and use it under samba as share path, then configuration is even simpler.
For detailed description between those two solutions see [gluster vfs blog posts](https://lalatendu.org/2014/04/20/glusterfs-vfs-plugin-for-samba/).
* Remember to add the user ``steve`` to samba with a password
* Unblock firewall ports for samba
* Test the samba config and reload samba (example commands after this list)
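A sketch of the last two steps, assuming a CentOS/RHEL host with firewalld and systemd:
```bash
# open the samba ports in the firewall
sudo firewall-cmd --permanent --add-service=samba
sudo firewall-cmd --reload
# validate the configuration, then reload samba
testparm -s /etc/samba/smb.conf
sudo systemctl reload smb
```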
## Defining new samba service under consul
Now we define a gluster-samba service on the gluster server hosts, in a similar way to how we defined the gluster service itself.
Below is an example of ``/etc/consul/service_samba.json``:
```json
{
  "service": {
    "address": "",
    "checks": [
      {
        "interval": "10s",
        "tcp": "localhost:139",
        "timeout": "5s"
      },
      {
        "interval": "10s",
        "tcp": "localhost:445",
        "timeout": "5s"
      }
    ],
    "enableTagOverride": false,
    "id": "gluster-samba",
    "name": "gluster-samba",
    "port": 139,
    "tags": [
      "gluster",
      "samba"
    ]
  }
}
```
We have two health checks here, just verifying that we can connect to the samba service. They could also be expanded to check whether the network share is actually accessible.
Reload the consul service and after a while you should see the new service registered in consul.
Check if it exists in dns:
```bash
nslookup gluster-samba.service.consul
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: gluster-samba.service.consul
Address: 172.30.65.61
Name: gluster-samba.service.consul
Address: 172.30.64.144
```
Install the ``samba-client`` package and check connectivity to samba from the gluster server itself:
```bash
[centos@gluster-poc-01]# smbclient -L //gluster-samba.service.consul/g0 -U steve
```
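## Consul agent on Windows
Install the consul agent on the Windows hosts and point it at the cluster with a config along these lines (a sketch - the ``datacenter``, ``recursors`` and ``retry_join`` values below are placeholders to adjust):
```json
{
  "datacenter": "dc1",
  "recursors": ["10.0.0.2"],
  "retry_join": ["consul-01.internal", "consul-02.internal", "consul-03.internal"],
  "ports": {
    "dns": 53
  }
}
```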
Remember to replace ``datacenter``, set ``recursors`` to your preferred local DNS servers, and set ``retry_join`` to a list of consul server hosts - or, for example, to some generic Route53 entry from a private zone (if one exists) that points to the real consul servers.
In AWS you can also use ``retry_join_ec2`` - this way a Windows instance will always search for other consul server EC2 instances and join them.
Notice that the ``recursors`` section is required if you are not using ``retry_join`` and rely only on AWS EC2 tags - otherwise consul will fail to resolve anything else and thus never join the cluster.
We use port ``53`` so that consul will serve as the local DNS.
* start consul service
```bash
net start consul
```
* update the DNS settings of the network interface in Windows, making ``127.0.0.1`` (the local consul agent) the primary entry
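For example, from an elevated command prompt (the interface name ``Ethernet`` is an assumption):
```bash
netsh interface ip set dns name="Ethernet" static 127.0.0.1
```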