### Configure Firewall
For Gluster to communicate within a cluster, either the firewalls have to be turned off or communication has to be allowed for each server.

```
iptables -I INPUT -p all -s <ip-address> -j ACCEPT
```
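Accepting all traffic from the other server is the simplest approach. If you would rather keep the firewall running and open only what Gluster needs, the following firewalld commands are a rough sketch; the management ports are standard, but the brick port range varies by Gluster version, so treat the numbers as assumptions to verify for your release:

```
# Gluster management (glusterd) and legacy management port
firewall-cmd --permanent --add-port=24007-24008/tcp
# Brick ports; recent releases allocate one port per brick starting at 49152 (verify for your version)
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload
```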
### Configure the trusted pool
Remember that the trusted pool is the term used to define a cluster of nodes in Gluster. Choose a server to be your “primary” server. This is just to keep things simple; it is the server you will generally run the commands in this tutorial from. Keep in mind that running many Gluster-specific commands (like `gluster volume create`) on one server in the cluster will execute the same command on all other servers.
Replace `nodename` with the hostname of the other server in the cluster, or its IP address if you don’t have DNS or `/etc/hosts` entries. Let’s say we want to connect to `node02`:

```
gluster peer probe node02
```
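If your trusted pool will contain more than two servers, probe each additional node from this same “primary” server in the same way (the extra hostnames below are illustrative):

```
gluster peer probe node03
gluster peer probe node04
```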
Notice that running `gluster peer status` from the second node shows that the first node has already been added.
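For example, running it on `node02` should produce output along these lines (the UUID is illustrative and will differ on your systems, and depending on name resolution the first node may be listed by IP address rather than hostname):

```
gluster peer status

Number of Peers: 1

Hostname: node01
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)
```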
### Partition the disk
Assuming you have an empty disk at `/dev/sdb`:

```
fdisk /dev/sdb
```

And then create a single partition on it using fdisk; it will be formatted as XFS in the next step.
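If you would rather script this step than answer fdisk’s interactive prompts, a non-interactive equivalent using parted looks roughly like this (a sketch; double-check the device name, since it will destroy any existing partition table on that disk):

```
# create a new partition table and a single partition spanning the disk
parted -s /dev/sdb mklabel msdos
# the "xfs" here is only a partition-type hint; the filesystem is created in the next step
parted -s /dev/sdb mkpart primary xfs 0% 100%
```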
### Format the partition
```
mkfs.xfs -i size=512 /dev/sdb1
```

The larger inode size (`-i size=512`) leaves room for the extended attributes Gluster stores on each file.
### Add an entry to /etc/fstab
```
echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0" >> /etc/fstab
```
### Mount the partition as a Gluster "brick"
```
mkdir -p /export/sdb1 && mount -a && mkdir -p /export/sdb1/brick
```
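Before moving on, it is worth confirming that the brick directory really is sitting on the new XFS filesystem rather than on the root filesystem, for example:

```
df -h /export/sdb1
mount | grep /export/sdb1
```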
### Set up a Gluster volume
The most basic Gluster volume type is a “Distribute only” volume (also referred to as a “pure DHT” volume if you want to impress the folks at the water cooler). This type of volume simply distributes the data evenly across the available bricks in a volume. So, if I write 100 files, on average, fifty will end up on one server and fifty will end up on another. This is faster than a “replicated” volume, but isn’t as popular since it doesn’t give you two of the most sought-after features of Gluster: multiple copies of the data, and automatic failover if something goes wrong.
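For comparison, a distribute-only volume is what you get when you simply omit the `replica` keyword; a sketch using an arbitrary volume name and the same hostnames and brick paths assumed in the replicated example below:

```
gluster volume create dist0 node01.mydomain.net:/export/sdb1/brick node02.mydomain.net:/export/sdb1/brick
```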
To set up a replicated volume:

```
gluster volume create gv0 replica 2 node01.mydomain.net:/export/sdb1/brick node02.mydomain.net:/export/sdb1/brick
```
Breaking this down into pieces:

- The first part says to create a Gluster volume named `gv0` (the name is arbitrary; `gv0` was chosen simply because it’s less typing than `gluster_volume_0`).
- `replica` makes the volume a replicated volume.
- `2` keeps a copy of the data on at least 2 bricks at any given time. Since we only have two bricks total, each server will house a copy of the data.
- Finally, we specify which nodes to use, and which bricks on those nodes. The order here is important when you have more bricks; see the sketch below.
It is possible (as of the most current release at the time of this writing, Gluster 3.3) to specify the bricks in such a way that both copies of the data reside on a single node. This would make for an embarrassing explanation to your boss when your bulletproof, completely redundant, always-on super cluster comes to a grinding halt when a single point of failure occurs.
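With `replica 2`, each consecutive pair of bricks in the command forms one replica set. So if each node contributed two bricks, you would interleave the nodes so that every replica pair spans both servers; a sketch, assuming hypothetical second bricks at `/export/sdc1`:

```
# (node01 sdb1, node02 sdb1) and (node01 sdc1, node02 sdc1) each form a replica pair,
# so neither copy of any file lives entirely on one node
gluster volume create gv1 replica 2 \
    node01.mydomain.net:/export/sdb1/brick node02.mydomain.net:/export/sdb1/brick \
    node01.mydomain.net:/export/sdc1/brick node02.mydomain.net:/export/sdc1/brick
```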
Now, we can check to make sure things are working as expected:

```
gluster volume info
```
And you should see results similar to the following:

```
Volume Name: gv0
Type: Replicate
Volume ID: 8bc3e96b-a1b6-457d-8f7a-a91d1d4dc019
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node01.mydomain.net:/export/sdb1/brick
Brick2: node02.mydomain.net:/export/sdb1/brick
```
This shows us essentially what we just specified during the volume creation. The one thing to mention is the `Status`. A status of `Created` means that the volume has been created but hasn’t yet been started, which would cause any attempt to mount the volume to fail.

Now, we should start the volume:

```
gluster volume start gv0
```
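Once the volume is started, its status changes to `Started` and it can be mounted. As a quick sanity check from any machine with the Gluster client installed (the mount point is just an example, and any node in the pool can be named as the mount server):

```
mkdir -p /mnt/gv0
mount -t glusterfs node01.mydomain.net:/gv0 /mnt/gv0
```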
Find all documentation [here](../index.md)