# Geo-Replication

## Introduction
Geo-replication provides a continuous, asynchronous, and incremental
replication service from one site to another over Local Area Networks
(LANs), Wide Area Networks (WANs), and across the Internet.
## Prerequisites

* Master and Slave Volumes should be Gluster Volumes.
* Master and Slave clusters should have the same GlusterFS version
  (a quick check is shown below).
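For example, a quick way to confirm that every Master and Slave node runs the same version (run on each node):

```console
# gluster --version
```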
- **Session** - Unique identifier of a Geo-replication session:
  `<MASTER_VOL> [<SLAVE_USER>@]<PRIMARY_SLAVE_HOST>::<SLAVE_VOL>`

```text
Where,

MASTER_VOL         - Master Volume Name
SLAVE_USER         - Slave user used to establish the session; default is root
PRIMARY_SLAVE_HOST - Any one Slave node to which password-less SSH is
                     set up, used to establish the session
SLAVE_VOL          - Slave Volume Name
```
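For example, with the Volume and account names used later in this document, a session is identified as:

```text
gvol-master geoaccount@snode1.example.com::gvol-slave
```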
## Replicated Volumes vs Geo-replication

The following table lists the differences between replicated volumes
and Geo-replication:

Replicated Volumes | Geo-replication
--- | ---
Mirrors data across clusters | Mirrors data across geographically distributed clusters
Provides high-availability | Ensures backing up of data for disaster recovery
Synchronous replication (each and every file operation is sent across all the bricks) | Asynchronous replication (checks for the changes in files periodically and syncs them on detecting differences)
## Exploring Geo-replication Deployment Scenarios

Geo-replication provides an incremental replication service over Local
Area Networks (LANs), Wide Area Networks (WANs), and across the
Internet.

This section illustrates the most common deployment scenarios for
Geo-replication, including the following:

- Geo-replication over LAN
- Geo-replication over WAN
- Geo-replication over the Internet
- Multi-site cascading Geo-replication
### Geo-replication over Local Area Network (LAN)

![geo-rep-lan](https://cloud.githubusercontent.com/assets/10970993/7412664/bb800d9e-ef5e-11e4-80e7-c4efb6b58092.png)
### Geo-replication over Wide Area Network (WAN)

![geo-rep-wan](https://cloud.githubusercontent.com/assets/10970993/7412672/c228a13c-ef5e-11e4-8d88-3cde03178a58.png)
### Geo-replication over the Internet

![geo-rep03-internet](https://cloud.githubusercontent.com/assets/10970993/7412678/cbe4ff6c-ef5e-11e4-96b8-2f88bb0fa6e2.png)
### Multi-site cascading Geo-replication

You can configure Geo-replication to mirror data in a cascading fashion
across multiple sites.

![geo-rep04-cascading](https://cloud.githubusercontent.com/assets/10970993/7412682/d3922cac-ef5e-11e4-8672-d4e50f6ca4cb.png)
## Checking Geo-replication Minimum Requirements

Before deploying GlusterFS Geo-replication, verify that your systems
match the minimum requirements.

The following table outlines the minimum requirements for both Master
and Slave nodes within your environment:
Component | Master | Slave
--- | --- | ---
Operating System | GNU/Linux | GNU/Linux
Filesystem | GlusterFS 3.6 or higher | GlusterFS 3.6 or higher
Python | Python 2.6 or higher | Python 2.6 or higher
Secure shell | OpenSSH version 4.0 or higher | SSH2-compliant daemon
Remote synchronization | rsync 3.0.7 or higher | rsync 3.0.7 or higher
FUSE | GlusterFS supported versions | GlusterFS supported versions
## Slave User setup

Geo-replication supports both root and non-root users on the Slave
side. If the Slave user is root, skip this section.

Set up an unprivileged user on the Slave nodes to secure the SSH
connectivity to those nodes. The unprivileged Slave user uses the
mountbroker service of glusterd to set up an auxiliary gluster mount
for the user in a special environment, which ensures that the user is
only allowed access with special parameters that provide
administrative-level access to the particular Volume.
In all the Slave nodes, create a new group. For example, `geogroup`.

```console
# groupadd geogroup
```
In all the Slave nodes, create an unprivileged account. For example,
`geoaccount`. Add it as a member of the `geogroup` group.

```console
# useradd -G geogroup geoaccount
```
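To confirm the group membership on each node, a quick sanity check:

```console
# id geoaccount
```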
In any one of the Slave nodes, run the following command to set up the
mountbroker root directory and group.

```console
gluster-mountbroker setup <MOUNT ROOT> <GROUP>
```

For example,

```console
# gluster-mountbroker setup /var/mountbroker-root geogroup
```
In any one of the Slave nodes, run the following command to add the
Volume and user to the mountbroker service.

```console
gluster-mountbroker add <VOLUME> <USER>
```

For example,

```console
# gluster-mountbroker add gvol-slave geoaccount
```
(**Note**: To remove a user or Volume from the mountbroker service, use
`gluster-mountbroker remove [--volume <VOLUME>] [--user <USER>]`)

Check the status of the setup using,

```console
# gluster-mountbroker status
```
Restart the `glusterd` service on all Slave nodes.
## Setting Up the Environment for Geo-replication

### Time Synchronization

All the servers hosting bricks of a Geo-replication Master Volume must
keep uniform time. Set up NTP (Network Time Protocol) or a similar
service to keep the bricks synchronized and avoid out-of-sync clocks.

For example: in a Replicated Volume where brick1 of the Master is at
12.20 hrs and brick2 of the Master is at 12.10 hrs, with a 10-minute
time lag, all the changes in brick2 during this period may go unnoticed
while synchronizing files with the Slave.
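One quick way to confirm that a node's clock is NTP-synchronized, assuming a systemd-based distribution (adapt to your time service):

```console
# timedatectl status
```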
### Password-less SSH

Password-less login has to be set up between the host machine (where
the Geo-replication Create command will be issued) and one of the Slave
nodes, for the unprivileged account created above.

**Note**: This is required only to run the Create command; it can be
disabled once the session is established. (It is required again when
running Create with force.)
On one of the Master nodes, where the Geo-replication Create command
will be issued, run the following command to generate an SSH key
(press Enter twice to avoid a passphrase).

```console
# ssh-keygen
```

Run the following command on the same node, targeting the one Slave
node identified as the primary Slave:

```console
# ssh-copy-id geoaccount@snode1.example.com
```
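Before proceeding, it is worth verifying that password-less login works, for example by running a trivial remote command (host and user names as assumed above):

```console
# ssh geoaccount@snode1.example.com hostname
```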
### Creating secret pem pub file

Execute the below command from the node where you set up the
password-less SSH to the Slave. This will generate Geo-rep session
specific SSH keys in all Master peer nodes, and collect the public keys
from all peer nodes on the node where the command was initiated.

```console
# gluster-georep-sshkey generate
```
This command adds an extra prefix to each public key inside the
common_secret.pem.pub file, which prevents the key from being used to
run other commands. To disable that prefix,

```console
# gluster-georep-sshkey generate --no-prefix
```
## Creating the session

Create a Geo-rep session between the Master and Slave Volumes using the
following command. The node on which this command is executed and the
<slave_host> specified in the command should have password-less SSH set
up between them. The push-pem option uses the secret pem pub file
created earlier and establishes Geo-rep specific password-less SSH
between each node in the Master and each node of the Slave.
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> \
    create [ssh-port <port>] push-pem|no-verify [force]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    create push-pem
```
If a custom SSH port is configured on the Slave nodes, then:

```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    create ssh-port 50022 push-pem
```
If the total available size of the Slave Volume is less than the total
size of the Master, the Slave verification fails and the session is not
created. To skip this verification, use the no-verify option.
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> create no-verify [force]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    create no-verify
```
In this case, the Master node RSA key distribution to the Slave nodes
does not happen, and the above mentioned Slave verification is not
performed; these two things have to be taken care of externally.
## Post Creation steps

Run the following command as root in any one of the Slave nodes.
```console
/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh <slave_user> \
    <master_volume> <slave_volume>
```

For example,

```console
# /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount \
    gvol-master gvol-slave
```
## Configuration

Configuration can be changed anytime after creating the session. After
a successful configuration change, the Geo-rep session will be
automatically restarted.

To view all configured options of a session,
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> config [option]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    config

# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    config sync-jobs
```
To configure Gluster Geo-replication, use the following command at the
Gluster command line:
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> config [option]
```

For example:
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    config sync-jobs 3
```
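To reset an option back to its default, the option name can be prefixed with `!` in the config command (a sketch; confirm the exact syntax in your version's documentation):

```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    config '!sync-jobs'
```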
> **Note**: If Geo-rep is in the middle of a sync, the restart caused
> by a configuration change interrupts that sync; syncing resumes from
> the backlog Changelogs after the restart (see History Crawl below).

### Meta Volume

With a shared Meta Volume configured, Geo-rep workers use lock files on
it to coordinate, so that only one worker in each replica group becomes
Active while the others remain as Passive. Enable it using,
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> config \
    use-meta-volume true
```

> **Note**: The name of the meta-volume should be
> `gluster_shared_storage` and it should be mounted at
> `/var/run/gluster/shared_storage/`.
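If a shared storage volume does not exist yet, recent GlusterFS releases can provision one (named `gluster_shared_storage` and mounted at the path above) using the built-in shared-storage option; a sketch, confirm support in your version:

```console
# gluster volume set all cluster.enable-shared-storage enable
```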
The following table provides an overview of the configurable options
for a Geo-replication setting:

Option | Description
--- | ---
log-level LOGFILELEVEL | The log level for Geo-replication.
gluster-log-level LOGFILELEVEL | The log level for glusterfs processes.
changelog-log-level LOGFILELEVEL | The log level for Changelog processes.
ssh-command COMMAND | The SSH command to connect to the remote machine (the default is ssh). If ssh is installed in a custom location, that path can be configured, for example `/usr/local/sbin/ssh`.
rsync-command COMMAND | The rsync command to use for synchronizing the files (the default is rsync).
use-tarssh true | The use-tarssh option allows tar over the Secure Shell protocol. Use this option to handle workloads of files that have not undergone edits.
timeout SECONDS | The timeout period in seconds.
sync-jobs N | The number of simultaneous files/directories that can be synchronized.
ignore-deletes | If this option is set to 1, a file deleted on the master will not trigger a delete operation on the slave. As a result, the slave will remain a superset of the master and can be used to recover the master in the event of a crash and/or accidental delete.
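For instance, raising the log level of a session to DEBUG while troubleshooting uses the same config syntax shown earlier:

```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    config log-level DEBUG
```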
## Starting Geo-replication

Use the following command to start the Geo-replication session,
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> \
    start [force]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    start
```

## Stopping Geo-replication

Use the following command to stop the Geo-replication session,
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> \
    stop [force]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    stop
```

## Status

To check the status of one session,
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> status [detail]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1::gvol-slave status

# gluster volume geo-replication gvol-master \
    geoaccount@snode1::gvol-slave status detail
```
Example Status Output
```console
MASTER NODE    MASTER VOL     MASTER BRICK    SLAVE USER    SLAVE                 SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------
mnode1         gvol-master    /bricks/b1      root          snode1::gvol-slave    snode1        Active    Changelog Crawl    2016-10-12 23:07:13
mnode2         gvol-master    /bricks/b2      root          snode1::gvol-slave    snode2        Active    Changelog Crawl    2016-10-12 23:07:13
```
Example Status detail Output
```console
MASTER NODE    MASTER VOL     MASTER BRICK    SLAVE USER    SLAVE                 SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
mnode1         gvol-master    /bricks/b1      root          snode1::gvol-slave    snode1        Active    Changelog Crawl    2016-10-12 23:07:13    0        0       0       0           N/A                N/A                     N/A
mnode2         gvol-master    /bricks/b2      root          snode1::gvol-slave    snode2        Active    Changelog Crawl    2016-10-12 23:07:13    0        0       0       0           N/A                N/A                     N/A
```
The `STATUS` of the session could be one of the following,

- **Initializing**: This is the initial phase of the Geo-replication
  session; it remains in this state for a minute in order to make sure
  no abnormalities are present.

- **Created**: The Geo-replication session is created, but not started.

- **Active**: The gsync daemon in this node is active and syncing the
  data. (Only one worker among the replica pairs will be in the Active
  state.)

- **Passive**: A replica pair of the Active node. Data synchronization
  is handled by the Active node, so this node does not sync any data.
  If the Active node goes down, the Passive worker will become Active.

- **Faulty**: The Geo-replication session has experienced a problem,
  and the issue needs to be investigated further. Check the log files
  for more details about the Faulty status. The log file path can be
  found using

        gluster volume geo-replication <master_volume> \
            <slave_user>@<slave_host>::<slave_volume> config log-file

- **Stopped**: The Geo-replication session has stopped, but has not
  been deleted.
The `CRAWL STATUS` can be one of the following:

- **Hybrid Crawl**: The gsyncd daemon is crawling the glusterFS file
  system and generating a pseudo changelog to sync data. This crawl is
  used during the initial sync and when Changelogs are not available.

- **History Crawl**: The gsyncd daemon syncs data by consuming
  Historical Changelogs. On every worker restart, Geo-rep uses this
  crawl to process backlog Changelogs.

- **Changelog Crawl**: The changelog translator has produced the
  changelog and it is being consumed by the gsyncd daemon to sync data.
## Deleting the session

An established Geo-replication session can be deleted using the
following command,

```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> delete [force]
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave delete
```

> **Note**: If the same session is created again, syncing will resume
> from where it was stopped before the session was deleted. If the
> session is to be deleted permanently, use the reset-sync-time option
> with the delete command. For example, `gluster volume geo-replication
> gvol-master geoaccount@snode1::gvol-slave delete reset-sync-time`
## Checkpoint

Using the Checkpoint feature we can find the status of the sync with
respect to the Checkpoint time. The Checkpoint completion status shows
"Yes" once Geo-rep has synced all the data from a brick that was
created or modified before the Checkpoint time.

Set the Checkpoint using,
```console
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> config checkpoint now
```

For example,
```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave \
    config checkpoint now
```
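Checkpoint progress then appears in the CHECKPOINT columns of the detailed status output described earlier:

```console
# gluster volume geo-replication gvol-master \
    geoaccount@snode1.example.com::gvol-slave status detail
```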
Touch the Master mount point to make sure the Checkpoint completes even
when there is no I/O happening on the Volume.
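For example, a sketch that assumes the Master Volume can be mounted on a client node at the hypothetical mount point `/mnt/gvol-master`:

```console
# mount -t glusterfs mnode1:/gvol-master /mnt/gvol-master
# touch /mnt/gvol-master
# umount /mnt/gvol-master
```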
To take consistent Gluster Snapshots of the Master and Slave Volumes:

- Pause the Geo-replication session using,

        gluster volume geo-replication <master_volume> \
            <slave_user>@<slave_host>::<slave_volume> pause

- Take a Gluster Snapshot of the Slave Volume and the Master Volume
  (use the same name for the snapshots),

        gluster snapshot create <snapname> <volname>
    Example,

        # gluster snapshot create snap1 gvol-slave
        # gluster snapshot create snap1 gvol-master
- Resume the Geo-replication session using,

        gluster volume geo-replication <master_volume> \
            <slave_user>@<slave_host>::<slave_volume> resume
If we want to continue the Geo-rep session after a snapshot restore, we
need to restore both the Master and Slave Volumes and resume the
Geo-replication session using the force option:
```console
gluster snapshot restore <snapname>
gluster volume geo-replication <master_volume> \
    <slave_user>@<slave_host>::<slave_volume> resume force
```

For example,
```console
# gluster snapshot restore snap1   # Slave Snap
# gluster snapshot restore snap1   # Master Snap
# gluster volume geo-replication gvol-master \
    geoaccount@snode1::gvol-slave \
    resume force
```