Mirror of https://github.com/gluster/glusterdocs.git
Synced 2026-02-05 15:47:01 +01:00
Merge pull request #749 from black-dragon74/upgrd-guide
[upgrade-guide] Cleanup syntax and fix inconsistent numberings
@@ -1,32 +1,32 @@
## Upgrading GlusterFS

- [About op-version](./op-version.md)

If you are using GlusterFS version 6.x or above, you can upgrade it to the following:

- [Upgrading to 10](./upgrade-to-10.md)
- [Upgrading to 9](./upgrade-to-9.md)
- [Upgrading to 8](./upgrade-to-8.md)
- [Upgrading to 7](./upgrade-to-7.md)

If you are using GlusterFS version 5.x or above, you can upgrade it to the following:

- [Upgrading to 8](./upgrade-to-8.md)
- [Upgrading to 7](./upgrade-to-7.md)
- [Upgrading to 6](./upgrade-to-6.md)

If you are using GlusterFS version 4.x or above, you can upgrade it to the following:

- [Upgrading to 6](./upgrade-to-6.md)
- [Upgrading to 5](./upgrade-to-5.md)

If you are using GlusterFS version 3.4.x or above, you can upgrade it to the following:

- [Upgrading to 3.5](./upgrade-to-3.5.md)
- [Upgrading to 3.6](./upgrade-to-3.6.md)
- [Upgrading to 3.7](./upgrade-to-3.7.md)
- [Upgrading to 3.9](./upgrade-to-3.9.md)
- [Upgrading to 3.10](./upgrade-to-3.10.md)
- [Upgrading to 3.11](./upgrade-to-3.11.md)
- [Upgrading to 3.12](./upgrade-to-3.12.md)
- [Upgrading to 3.13](./upgrade-to-3.13.md)
@@ -1,6 +1,7 @@
# Generic Upgrade procedure

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -9,27 +10,28 @@
- It is recommended to have the same client and server major versions running eventually

### Online upgrade procedure for servers

This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

> **ALERT:** If there are disperse or pure distributed volumes in the storage pool being upgraded, this procedure is NOT recommended; use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.

#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to new-version:

1. Stop all gluster services, either using the commands below, or through other means.

        systemctl stop glusterd
        systemctl stop glustereventsd
        killall glusterfs glusterfsd glusterd

2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

3. Install Gluster new-version. The example below shows how to create a repository on Fedora and use it to upgrade:

    3.1 Create a private repository (assuming the /new-gluster-rpms/ folder has the new rpms):

        createrepo /new-gluster-rpms/

    3.2 Create the .repo file in /etc/yum.d/:

        cat /etc/yum.d/newglusterrepo.repo
        [newglusterrepo]
@@ -38,76 +40,74 @@ This procedure involves upgrading **one server at a time**, while keeping the vo
        gpgcheck=0
        enabled=1

    3.3 Upgrade glusterfs, for example to upgrade glusterfs-server to version x.y:

        yum update glusterfs-server-x.y.fc30.x86_64.rpm

4. Ensure that the version reflects new-version in the output of

        gluster --version

5. Start glusterd on the upgraded server

        systemctl start glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. If the glustereventsd service was previously enabled, start it using the command below, or through other means

        systemctl start glustereventsd

8. Invoke self-heal on all the gluster volumes by running

        for i in `gluster volume list`; do gluster volume heal $i; done

9. Verify that there is no heal backlog by running the command for all the volumes

        gluster volume heal <volname> info

    > **NOTE:** Before proceeding to upgrade the next server in the pool it is recommended to check the heal backlog. If there is a heal backlog, it is recommended to wait until the backlog is empty, or the backlog does not contain any entries requiring a sync to the just upgraded server. A convenience loop for this check is sketched after this list.

10. Restart any gfapi based application stopped previously in step (2)
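The heal-backlog check in step 9 can be tedious on pools with many volumes. The loop below is a convenience sketch, not part of the upstream procedure; it simply repeats `gluster volume heal <volname> info` for every volume reported by `gluster volume list`.

```sh
# Convenience sketch: print the heal info of every volume so the backlog
# can be reviewed before the next server is upgraded.
for vol in $(gluster volume list); do
    echo "== heal info for ${vol} =="
    gluster volume heal "${vol}" info
done
```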
### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the commands below, or through other means

        systemctl stop glusterd
        systemctl stop glustereventsd
        killall glusterfs glusterfsd glusterd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster new-version, on all servers

4. Ensure that the version reflects new-version in the output of the following command on all servers

        gluster --version

5. Start glusterd on all the upgraded servers

        systemctl start glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. If the glustereventsd service was previously enabled, start it using the command below, or through other means

        systemctl start glustereventsd

8. Restart any gfapi based application stopped previously in step (2)

### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
@@ -117,12 +117,13 @@ Perform the following steps post upgrading the entire trusted storage pool,
#### If upgrading from a version older than Gluster 7.0

> **NOTE:** If you have ever enabled quota on your volumes then after the upgrade
> is done, you will have to restart all the nodes in the cluster one by one so as to
> fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols/<volname>/` directory.
> The peers may go into `Peer rejected` state while doing so, but once all the nodes are rebooted
> everything will be back to normal.
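While rebooting the nodes one by one, the peer state can be checked from any other node before moving on. This is an illustrative check, not an extra required step.

```sh
# Peers should return to "Peer in Cluster (Connected)" after each reboot.
gluster peer status
```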
### Upgrade procedure for clients

Following are the steps to upgrade clients to the new-version version,

1. Unmount all glusterfs mount points on the client
@@ -1,5 +1,5 @@
### op-version

op-version is the operating version of the Gluster cluster that is running.

op-version was introduced to ensure that gluster nodes running different versions do not run into problems and that backward compatibility issues can be tackled.
@@ -13,19 +13,19 @@ Current op-version can be queried as below:
For 3.10 onwards:

```console
gluster volume get all cluster.op-version
```

For release < 3.10:

```{ .console .no-copy }
# gluster volume get <VOLNAME> cluster.op-version
```

To get the maximum possible op-version a cluster can support, the following query can be used (this is available 3.10 release onwards):

```console
gluster volume get all cluster.max-op-version
```

For example, if some nodes in a cluster have been upgraded to X and some to X+, then the maximum op-version supported by the cluster is X, and the cluster.op-version can be bumped up to X to support new features.
@@ -34,7 +34,7 @@ op-version can be updated as below.
For example, after upgrading to glusterfs-4.0.0, set op-version as:

```console
gluster volume set all cluster.op-version 40000
```
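Rather than hard-coding the number, the value reported by `cluster.max-op-version` can be fed straight into the update. The snippet below is a sketch only; it assumes the usual two-column `Option  Value` output of `gluster volume get`.

```sh
# Sketch: bump the cluster op-version to the highest value the pool supports.
maxop=$(gluster volume get all cluster.max-op-version | awk '/cluster.max-op-version/ {print $2}')
gluster volume set all cluster.op-version "${maxop}"
```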
Note:
@@ -46,11 +46,10 @@ When trying to set a volume option, it might happen that one or more of the conn
To check op-version information for the connected clients and find the offending client, the following query can be used for 3.10 release onwards:

```{ .console .no-copy }
# gluster volume status <all|VOLNAME> clients
```

The respective clients can then be upgraded to the required version.

This information could also be used to make an informed decision while bumping up the op-version of a cluster, so that connected clients can support all the new features provided by the upgraded cluster as well.
@@ -10,6 +10,7 @@ Refer, to the [generic upgrade procedure](./generic-upgrade-procedure.md) guide
## Major issues

### The following options are removed from the code base and require to be unset

before an upgrade from releases older than release 4.1.0,

- features.lock-heal
@@ -18,7 +19,7 @@ before an upgrade from releases older than release 4.1.0,
To check if these options are set use,

```console
gluster volume info
```

and ensure that the above options are not part of the `Options Reconfigured:`
@@ -26,7 +27,7 @@ section in the output of all volumes in the cluster.
If these are set, then unset them using the following commands,

```{ .console .no-copy }
# gluster volume reset <volname> <option>
```
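On pools with many volumes the reset can be scripted. The loop below is a hedged sketch that uses `features.lock-heal` (one of the removed options listed above) as the example; repeat it for each removed option that shows up under `Options Reconfigured:`.

```sh
# Sketch: unset a removed option (here features.lock-heal) on every volume
# before starting the upgrade.
for vol in $(gluster volume list); do
    gluster volume reset "${vol}" features.lock-heal
done
```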
@@ -40,7 +41,6 @@ If these are set, then unset them using the following commands,
- Tiering support (tier xlator and changetimerecorder)
- Glupy

**NOTE:** Failure to do the above may result in failure during online upgrades,
and the reset of these options to their defaults needs to be done **prior** to
upgrading the cluster.
@@ -48,4 +48,3 @@ upgrading the cluster.
### Deprecated translators and upgrade procedure for volumes using these features

[If you are upgrading from a release prior to release-6 be aware of deprecated xlators and functionality](https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/#deprecated-translators-and-upgrade-procedure-for-volumes-using-these-features).
@@ -1,6 +1,7 @@
## Upgrade procedure to Gluster 3.10.0, from Gluster 3.9.x, 3.8.x and 3.7.x

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -9,83 +10,82 @@
- It is recommended to have the same client and server major versions running eventually

### Online upgrade procedure for servers

This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

> **ALERT**: If any of your volumes in the trusted storage pool that is being upgraded uses disperse or is a pure distributed volume, this procedure is **NOT** recommended; use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.

#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to 3.10 version:

1. Stop all gluster services, either using the command below, or through other means

        killall glusterfs glusterfsd glusterd

2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

3. Install Gluster 3.10

4. Ensure that version reflects 3.10.0 in the output of

        gluster --version

5. Start glusterd on the upgraded server

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. Self-heal all gluster volumes by running

        for i in `gluster volume list`; do gluster volume heal $i; done

8. Ensure that there is no heal backlog by running the below command for all volumes

        gluster volume heal <volname> info

    > NOTE: If there is a heal backlog, wait till the backlog is empty, or the backlog does not have any entries needing a sync to the just upgraded server, before proceeding to upgrade the next server in the pool. A polling sketch for this wait is shown after this list.

9. Restart any gfapi based application stopped previously in step (2)
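The wait described in step 8 can be automated with a small polling loop. This is an illustrative sketch only; it assumes the `Number of entries:` lines printed per brick by `gluster volume heal <volname> info`, and `VOLNAME` is a placeholder for the volume being checked.

```sh
# Sketch: poll until the heal backlog of VOLNAME drains to zero entries.
while gluster volume heal VOLNAME info | awk '/Number of entries:/ {sum += $NF} END {exit (sum == 0) ? 1 : 0}'; do
    echo "heal backlog not empty yet, waiting..."
    sleep 10
done
```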
### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the command below, or through other means

        killall glusterfs glusterfsd glusterd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster 3.10, on all servers

4. Ensure that version reflects 3.10.0 in the output of the following command on all servers

        gluster --version

5. Start glusterd on all the upgraded servers

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. Restart any gfapi based application stopped previously in step (2)

### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.10 version as well

### Upgrade procedure for clients

Following are the steps to upgrade clients to the 3.10.0 version,

1. Unmount all glusterfs mount points on the client
@@ -3,6 +3,7 @@
**NOTE:** Upgrade procedure remains the same as with the 3.10 release

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -11,87 +12,86 @@
- It is recommended to have the same client and server major versions running eventually

### Online upgrade procedure for servers

This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

> **ALERT**: If any of your volumes in the trusted storage pool that is being upgraded uses disperse or is a pure distributed volume, this procedure is **NOT** recommended; use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.

#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to 3.11 version:

1. Stop all gluster services, either using the command below, or through other means

        killall glusterfs glusterfsd glusterd

2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

3. Install Gluster 3.11

4. Ensure that version reflects 3.11.x in the output of

        gluster --version

    **NOTE:** x is the minor release number for the release

5. Start glusterd on the upgraded server

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. Self-heal all gluster volumes by running

        for i in `gluster volume list`; do gluster volume heal $i; done

8. Ensure that there is no heal backlog by running the below command for all volumes

        gluster volume heal <volname> info

    > NOTE: If there is a heal backlog, wait till the backlog is empty, or the backlog does not have any entries needing a sync to the just upgraded server, before proceeding to upgrade the next server in the pool

9. Restart any gfapi based application stopped previously in step (2)

### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the command below, or through other means

        killall glusterfs glusterfsd glusterd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster 3.11, on all servers

4. Ensure that version reflects 3.11.x in the output of the following command on all servers

        gluster --version

    **NOTE:** x is the minor release number for the release

5. Start glusterd on all the upgraded servers

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. Restart any gfapi based application stopped previously in step (2)

### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.11 version as well

### Upgrade procedure for clients

Following are the steps to upgrade clients to the 3.11.x version,

**NOTE:** x is the minor release number for the release
@@ -3,6 +3,7 @@
**NOTE:** Upgrade procedure remains the same as with 3.11 and 3.10 releases

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -11,90 +12,96 @@
- It is recommended to have the same client and server major versions running eventually

### Online upgrade procedure for servers

This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

> **ALERT:** If there are disperse or pure distributed volumes in the storage pool being upgraded, this procedure is NOT recommended; use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.

#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to 3.12 version:

1. Stop all gluster services, either using the commands below, or through other means

        killall glusterfs glusterfsd glusterd
        systemctl stop glustereventsd

2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

3. Install Gluster 3.12

4. Ensure that version reflects 3.12.x in the output of

        gluster --version

    > **NOTE:** x is the minor release number for the release

5. Start glusterd on the upgraded server

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. If the glustereventsd service was previously enabled, start it using the command below, or through other means (a quick way to check whether it was enabled is sketched after this list)

        systemctl start glustereventsd

8. Invoke self-heal on all the gluster volumes by running

        for i in `gluster volume list`; do gluster volume heal $i; done

9. Verify that there is no heal backlog by running the command for all the volumes

        gluster volume heal <volname> info

    > **NOTE:** Before proceeding to upgrade the next server in the pool it is recommended to check the heal backlog. If there is a heal backlog, it is recommended to wait until the backlog is empty, or the backlog does not contain any entries requiring a sync to the just upgraded server.

10. Restart any gfapi based application stopped previously in step (2)
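Step 7 applies only when glustereventsd was enabled before the upgrade. The check below is an illustrative sketch for systemd-based distributions and is not part of the upstream procedure.

```sh
# Sketch: see whether glustereventsd was enabled and whether it is running.
systemctl is-enabled glustereventsd
systemctl is-active glustereventsd
```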
### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the commands below, or through other means

        killall glusterfs glusterfsd glusterd glustereventsd
        systemctl stop glustereventsd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster 3.12, on all servers

4. Ensure that version reflects 3.12.x in the output of the following command on all servers

        gluster --version

    > **NOTE:** x is the minor release number for the release

5. Start glusterd on all the upgraded servers

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. If the glustereventsd service was previously enabled, start it using the command below, or through other means

        systemctl start glustereventsd

8. Restart any gfapi based application stopped previously in step (2)

### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.12 version as well

### Upgrade procedure for clients

Following are the steps to upgrade clients to the 3.12.x version,

> **NOTE:** x is the minor release number for the release
@@ -3,6 +3,7 @@
**NOTE:** Upgrade procedure remains the same as with 3.12 and 3.10 releases

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -11,80 +12,86 @@
- It is recommended to have the same client and server major versions running eventually

### Online upgrade procedure for servers

This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

> **ALERT**: If any of your volumes in the trusted storage pool that is being upgraded uses disperse or is a pure distributed volume, this procedure is **NOT** recommended; use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.

#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to 3.13 version:

1. Stop all gluster services, either using the command below, or through other means

        killall glusterfs glusterfsd glusterd

2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

3. Install Gluster 3.13

4. Ensure that version reflects 3.13.x in the output of

        gluster --version

    **NOTE:** x is the minor release number for the release

5. Start glusterd on the upgraded server

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. Self-heal all gluster volumes by running

        for i in `gluster volume list`; do gluster volume heal $i; done

8. Ensure that there is no heal backlog by running the below command for all volumes

        gluster volume heal <volname> info

    > NOTE: If there is a heal backlog, wait till the backlog is empty, or the backlog does not have any entries needing a sync to the just upgraded server, before proceeding to upgrade the next server in the pool

9. Restart any gfapi based application stopped previously in step (2)

### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the command below, or through other means

        killall glusterfs glusterfsd glusterd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster 3.13, on all servers

4. Ensure that version reflects 3.13.x in the output of the following command on all servers

        gluster --version

    **NOTE:** x is the minor release number for the release

5. Start glusterd on all the upgraded servers

        glusterd

6. Ensure that all gluster processes are online by checking the output of

        gluster volume status

7. Restart any gfapi based application stopped previously in step (2)

### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details
- Proceed to [upgrade the clients](#upgrade-procedure-for-clients) to 3.13 version as well

### Upgrade procedure for clients

Following are the steps to upgrade clients to the 3.13.x version,

**NOTE:** x is the minor release number for the release
@@ -23,7 +23,7 @@ provided below)
1. Execute "pre-upgrade-script-for-quota.sh" mentioned under "Upgrade Steps For Quota" section.
2. Stop all glusterd, glusterfsd and glusterfs processes on your server.
3. Install GlusterFS 3.5.0
4. Start glusterd.
5. Ensure that all started volumes have processes online in “gluster volume status”.
6. Execute "Post-Upgrade Script" mentioned under "Upgrade Steps For Quota" section.
@@ -77,7 +77,7 @@ The upgrade process for quota involves executing two upgrade scripts:
1. pre-upgrade-script-for-quota.sh, and\
2. post-upgrade-script-for-quota.sh

_Pre-Upgrade Script:_

What it does:
@@ -105,11 +105,11 @@ Invocation:
Invoke the script by executing `./pre-upgrade-script-for-quota.sh`
from the shell on any one of the nodes in the cluster.

- Example:

        [root@server1 extras]#./pre-upgrade-script-for-quota.sh

_Post-Upgrade Script:_

What it does:
@@ -164,9 +164,9 @@ In the first case, invoke post-upgrade-script-for-quota.sh from the
shell for each volume with quota enabled, with the name of the volume
passed as an argument in the command-line:

- Example:

    _For a volume "vol1" on which quota is enabled, invoke the script in the following way:_

        [root@server1 extras]#./post-upgrade-script-for-quota.sh vol1
@@ -176,9 +176,9 @@ procedure on each one of them. In this case, invoke
post-upgrade-script-for-quota.sh from the shell with 'all' passed as an
argument in the command-line:

- Example:

        [root@server1 extras]#./post-upgrade-script-for-quota.sh all

Note:
@@ -1,4 +1,5 @@
# GlusterFS upgrade from 3.5.x to 3.6.x

Now that GlusterFS 3.6.0 is out, here is the process to upgrade from
earlier installed versions of GlusterFS.
@@ -8,15 +9,15 @@ GlusterFS clients. If you are not updating your clients to GlusterFS
version 3.6 you need to disable the client self healing process. You can
perform this with the steps below.

```{ .console .no-copy }
# gluster v set testvol cluster.entry-self-heal off
volume set: success
#
# gluster v set testvol cluster.data-self-heal off
volume set: success
#
# gluster v set testvol cluster.metadata-self-heal off
volume set: success
#
```
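If these options were turned off only for the duration of the upgrade, they can be switched back on afterwards. The commands below are a hedged sketch mirroring the settings above for the example volume `testvol`; re-enable them only once the clients have also been upgraded.

```sh
# Sketch: re-enable client-side self-heal on the example volume once both
# servers and clients are running 3.6.
gluster v set testvol cluster.entry-self-heal on
gluster v set testvol cluster.data-self-heal on
gluster v set testvol cluster.metadata-self-heal on
```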
### GlusterFS upgrade from 3.5.x to 3.6.x
@@ -27,7 +28,7 @@ For this approach, schedule a downtime and prevent all your clients from
accessing the servers (umount your volumes, stop gluster volumes, etc.).

1. Stop all glusterd, glusterfsd and glusterfs processes on your server.
2. Install GlusterFS 3.6.0
3. Start glusterd.
4. Ensure that all started volumes have processes online in “gluster volume status”.
@@ -59,7 +60,7 @@ provided below)
1. Execute "pre-upgrade-script-for-quota.sh" mentioned under "Upgrade Steps For Quota" section.
2. Stop all glusterd, glusterfsd and glusterfs processes on your server.
3. Install GlusterFS 3.6.0
4. Start glusterd.
5. Ensure that all started volumes have processes online in “gluster volume status”.
6. Execute "Post-Upgrade Script" mentioned under "Upgrade Steps For Quota" section.
@@ -87,7 +88,7 @@ The upgrade process for quota involves executing two upgrade scripts:
1. pre-upgrade-script-for-quota.sh, and\
2. post-upgrade-script-for-quota.sh

_Pre-Upgrade Script:_

What it does:
@@ -121,7 +122,7 @@ Example:
[root@server1 extras]#./pre-upgrade-script-for-quota.sh
```

_Post-Upgrade Script:_

What it does:
@@ -178,7 +179,7 @@ passed as an argument in the command-line:
Example:

_For a volume "vol1" on which quota is enabled, invoke the script in the following way:_

```console
[root@server1 extras]#./post-upgrade-script-for-quota.sh vol1
@@ -227,7 +228,7 @@ covered in detail here.
**Below are the steps to upgrade:**

1. Stop the geo-replication session in the older version (< 3.5) using
   the below command

        # gluster volume geo-replication `<master_vol>` `<slave_host>`::`<slave_vol>` stop
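Before and after stopping the session, its state can be confirmed with the status command. This is an illustrative check that reuses the placeholder names from the stop command above.

```sh
# Sketch: confirm the state of the geo-replication session.
gluster volume geo-replication <master_vol> <slave_host>::<slave_vol> status
```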
@@ -1,4 +1,5 @@
# GlusterFS upgrade to 3.7.x

Now that GlusterFS 3.7.0 is out, here is the process to upgrade from
earlier installed versions of GlusterFS. Please read the entire howto
before proceeding with an upgrade of your deployment.
@@ -13,15 +14,15 @@ version 3.6 along with your servers you would need to disable client
self healing process before the upgrade. You can perform this with the
steps below.

```{ .console .no-copy }
# gluster v set testvol cluster.entry-self-heal off
volume set: success
#
# gluster v set testvol cluster.data-self-heal off
volume set: success
#
# gluster v set testvol cluster.metadata-self-heal off
volume set: success
#
```
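To confirm which self-heal options are currently applied on a volume before or after changing them, the volume's reconfigured options can be listed. This is an illustrative check using the same example volume `testvol`.

```sh
# Sketch: the "Options Reconfigured:" section lists the cluster.*-self-heal
# values currently set on the volume.
gluster volume info testvol
```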
### GlusterFS upgrade to 3.7.x
@@ -71,11 +72,11 @@ The upgrade process for quota involves the following:
1. Run pre-upgrade-script-for-quota.sh
2. Upgrade to 3.7.0
3. Run post-upgrade-script-for-quota.sh

More details on the scripts are as under.

_Pre-Upgrade Script:_

What it does:
@@ -109,7 +110,7 @@ Example:
[root@server1 extras]#./pre-upgrade-script-for-quota.sh
```

_Post-Upgrade Script:_

What it does:
@@ -1,12 +1,13 @@
## Upgrade procedure from Gluster 3.7.x

### Pre-upgrade Notes

- Online upgrade is only possible with replicated and distributed replicate volumes.
- Online upgrade is not yet supported for dispersed or distributed dispersed volumes.
- Ensure no configuration changes are done during the upgrade.
- If you are using geo-replication, please upgrade the slave cluster(s) before upgrading the master.
- Upgrading the servers ahead of the clients is recommended.
- Upgrade the clients after the servers are upgraded. It is recommended to have the same client and server major versions.

### Online Upgrade Procedure for Servers
@@ -14,7 +15,7 @@ The procedure involves upgrading one server at a time . On every storage server
- Stop all gluster services using the below command or through your favorite way to stop them.

        killall glusterfs glusterfsd glusterd

- If you are using gfapi based applications (qemu, NFS-Ganesha, Samba etc.) on the servers, please stop those applications too.
@@ -22,38 +23,39 @@ The procedure involves upgrading one server at a time . On every storage server
- Ensure that version reflects 3.8.x in the output of

        gluster --version

- Start glusterd on the upgraded server

        glusterd

- Ensure that all gluster processes are online by executing

        gluster volume status

- Self-heal all gluster volumes by running

        for i in `gluster volume list`; do gluster volume heal $i; done

- Ensure that there is no heal backlog by running the below command for all volumes

        gluster volume heal <volname> info

- Restart any gfapi based application stopped previously.

- After the upgrade is complete on all servers, run the following command (an optional verification is sketched after this list):

        gluster volume set all cluster.op-version 30800
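Because the pool is still on a release older than 3.10, the bump can be verified per volume with the query form documented for such releases; `<VOLNAME>` below is a placeholder. This is an optional, illustrative check.

```sh
# Sketch: confirm the cluster op-version now reports 30800.
gluster volume get <VOLNAME> cluster.op-version
```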
### Offline Upgrade Procedure

For this procedure, schedule a downtime and prevent all your clients from accessing the servers.

On every storage server in your trusted storage pool:

- Stop all gluster services using the below command or through your favorite way to stop them.

        killall glusterfs glusterfsd glusterd

- If you are using gfapi based applications (qemu, NFS-Ganesha, Samba etc.) on the servers, please stop those applications too.
@@ -61,25 +63,24 @@ On every storage server in your trusted storage pool:
- Ensure that version reflects 3.8.x in the output of

        gluster --version

- Start glusterd on the upgraded server

        glusterd

- Ensure that all gluster processes are online by executing

        gluster volume status

- Restart any gfapi based application stopped previously.

- After the upgrade is complete on all servers, run the following command:

        gluster volume set all cluster.op-version 30800

### Upgrade Procedure for Clients

- Unmount all glusterfs mount points on the client
- Stop applications using gfapi (qemu etc.)
- Install Gluster 3.8
@@ -9,5 +9,5 @@ Note that there is only a single difference, related to the `op-version`:
After the upgrade is complete on all servers, run the following command:

```console
gluster volume set all cluster.op-version 30900
```
@@ -3,6 +3,7 @@
**NOTE:** Upgrade procedure remains the same as with 3.12 and 3.10 releases

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade
@@ -11,74 +12,79 @@
|
||||
- It is recommended to have the same client and server, major versions running eventually
|
||||
|
||||
### Online upgrade procedure for servers
|
||||
|
||||
This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set, are not part of the same server in the trusted storage pool.
|
||||
|
||||
> **ALERT**: If any of your volumes, in the trusted storage pool that is being upgraded, uses disperse or is a pure distributed volume, this procedure is **NOT** recommended, use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.
|
||||
|
||||
#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to 4.0 version:
|
||||
1. Stop all gluster services, either using the command below, or through other means,
|
||||
|
||||
# killall glusterfs glusterfsd glusterd
|
||||
1. Stop all gluster services, either using the command below, or through other means,
|
||||
|
||||
2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)
|
||||
killall glusterfs glusterfsd glusterd
|
||||
|
||||
3. Install Gluster 4.0
|
||||
2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)
|
||||
|
||||
4. Ensure that version reflects 4.0.x in the output of,
|
||||
3. Install Gluster 4.0
|
||||
|
||||
# gluster --version
|
||||
4. Ensure that version reflects 4.0.x in the output of,
|
||||
|
||||
**NOTE:** x is the minor release number for the release
|
||||
gluster --version
|
||||
|
||||
5. Start glusterd on the upgraded server
|
||||
**NOTE:** x is the minor release number for the release
|
||||
|
||||
# glusterd
|
||||
5. Start glusterd on the upgraded server
|
||||
|
||||
6. Ensure that all gluster processes are online by checking the output of,
|
||||
glusterd
|
||||
|
||||
# gluster volume status
|
||||
6. Ensure that all gluster processes are online by checking the output of,
|
||||
|
||||
7. Self-heal all gluster volumes by running
|
||||
gluster volume status
|
||||
|
||||
# for i in `gluster volume list`; do gluster volume heal $i; done
|
||||
7. Self-heal all gluster volumes by running
|
||||
|
||||
8. Ensure that there is no heal backlog by running the below command for all volumes
|
||||
for i in `gluster volume list`; do gluster volume heal $i; done
|
||||
|
||||
# gluster volume heal <volname> info
|
||||
8. Ensure that there is no heal backlog by running the below command for all volumes
|
||||
|
||||
> NOTE: If there is a heal backlog, wait till the backlog is empty, or the backlog does not have any entries needing a sync to the just upgraded server, before proceeding to upgrade the next server in the pool
|
||||
gluster volume heal <volname> info
|
||||
|
||||
9. Restart any gfapi based application stopped previously in step (2)
|
||||
> NOTE: If there is a heal backlog, wait till the backlog is empty, or the backlog does not have any entries needing a sync to the just upgraded server, before proceeding to upgrade the next server in the pool
|
||||
|
||||
9. Restart any gfapi based application stopped previously in step (2)
|
||||
|
||||
### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the command below, or through other means,

        killall glusterfs glusterfsd glusterd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster 4.0, on all servers

4. Ensure that version reflects 4.0.x in the output of the following command on all servers (a loop for checking every server at once is sketched after this list),

        gluster --version

    **NOTE:** x is the minor release number for the release

5. Start glusterd on all the upgraded servers

        glusterd

6. Ensure that all gluster processes are online by checking the output of,

        gluster volume status

7. Restart any gfapi based application stopped previously in step (2)

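If SSH access to all servers is available, the version check in step (4) can be run from one place; the hostnames below are placeholders for your own pool members:

```sh
# Print the installed Gluster version on each server (hostnames are examples)
for h in server1 server2 server3; do
    echo -n "$h: "; ssh "$h" gluster --version | head -n 1
done
```
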
### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details (a short sketch of the commands involved follows below)

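As a rough illustration, bumping the op-version usually boils down to the two commands below; the value shown is only a placeholder, so use the number reported by the first command or the one documented for your release:

```sh
# Highest op-version the current cluster can support
gluster volume get all cluster.max-op-version

# Bump the cluster op-version (placeholder value; substitute the number from above)
gluster volume set all cluster.op-version 40000
```
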
@@ -86,6 +92,7 @@ Perform the following steps post upgrading the entire trusted storage pool,

- Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set <volname> fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems. (A per-volume loop is sketched below.)

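Where many replicate volumes exist, the option can be applied in a loop; this sketch simply iterates over every volume, so skip or filter out non-replicate volumes if that matters in your setup:

```sh
# Enable FIPS-friendly checksums on each volume
for i in `gluster volume list`; do
    gluster volume set $i fips-mode-rchecksum on
done
```
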
### Upgrade procedure for clients

Following are the steps to upgrade clients to the 4.0.x version,

**NOTE:** x is the minor release number for the release

@@ -3,6 +3,7 @@

> **NOTE:** Upgrade procedure remains the same as with 3.12 and 3.10 releases

### Pre-upgrade notes

- Online upgrade is only possible with replicated and distributed replicate volumes
- Online upgrade is not supported for dispersed or distributed dispersed volumes
- Ensure no configuration changes are done during the upgrade

@@ -11,88 +12,89 @@

- It is recommended to have the same client and server, major versions running eventually

### Online upgrade procedure for servers

This procedure involves upgrading **one server at a time**, while keeping the volume(s) online and client IO ongoing. This procedure assumes that multiple replicas of a replica set are not part of the same server in the trusted storage pool.

> **ALERT:** If there are disperse or pure distributed volumes in the storage pool being upgraded, this procedure is NOT recommended; use the [Offline upgrade procedure](#offline-upgrade-procedure) instead.

#### Repeat the following steps, on each server in the trusted storage pool, to upgrade the entire pool to 4.1 version:

1. Stop all gluster services, either using the commands below, or through other means,

        killall glusterfs glusterfsd glusterd
        systemctl stop glustereventsd

2. Stop all applications that run on this server and access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.)

3. Install Gluster 4.1

4. Ensure that version reflects 4.1.x in the output of,

        gluster --version

    > **NOTE:** x is the minor release number for the release

5. Start glusterd on the upgraded server

        glusterd

6. Ensure that all gluster processes are online by checking the output of,

        gluster volume status

7. If the glustereventsd service was previously enabled, it is required to start it using the command below, or through other means,

        systemctl start glustereventsd

8. Invoke self-heal on all the gluster volumes by running,

        for i in `gluster volume list`; do gluster volume heal $i; done

9. Verify that there is no heal backlog by running the command below for all the volumes (see the watch sketch after this list),

        gluster volume heal <volname> info

    > **NOTE:** Before proceeding to upgrade the next server in the pool it is recommended to check the heal backlog. If there is a heal backlog, it is recommended to wait until the backlog is empty, or the backlog does not contain any entries requiring a sync to the just upgraded server.

10. Restart any gfapi based application stopped previously in step (2)

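While waiting for the backlog to drain, the summary form of the heal command can be easier to watch than the full entry listing; this assumes your release provides the `statistics heal-count` sub-command:

```sh
# Refresh the pending-heal count every 10 seconds (replace <volname>)
watch -n 10 gluster volume heal <volname> statistics heal-count
```
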
### Offline upgrade procedure

This procedure involves cluster downtime and during the upgrade window, clients are not allowed access to the volumes.

#### Steps to perform an offline upgrade:

1. On every server in the trusted storage pool, stop all gluster services, either using the commands below, or through other means,

        killall glusterfs glusterfsd glusterd glustereventsd
        systemctl stop glustereventsd

2. Stop all applications that access the volumes via gfapi (qemu, NFS-Ganesha, Samba, etc.), across all servers

3. Install Gluster 4.1, on all servers

4. Ensure that version reflects 4.1.x in the output of the following command on all servers,

        gluster --version

    > **NOTE:** x is the minor release number for the release

5. Start glusterd on all the upgraded servers

        glusterd

6. Ensure that all gluster processes are online by checking the output of,

        gluster volume status

7. If the glustereventsd service was previously enabled, it is required to start it using the command below, or through other means,

        systemctl start glustereventsd

8. Restart any gfapi based application stopped previously in step (2)

### Post upgrade steps

Perform the following steps post upgrading the entire trusted storage pool,

- It is recommended to update the op-version of the cluster. Refer to the [op-version](./op-version.md) section for further details

@@ -100,6 +102,7 @@ Perform the following steps post upgrading the entire trusted storage pool,

- Post upgrading the clients, for replicate volumes, it is recommended to enable the option `gluster volume set <volname> fips-mode-rchecksum on` to turn off usage of MD5 checksums during healing. This enables running Gluster on FIPS compliant systems.

### Upgrade procedure for clients

Following are the steps to upgrade clients to the 4.1.x version,

> **NOTE:** x is the minor release number for the release

@@ -8,15 +8,16 @@ version reference.

### Major issues

1. The following options are removed from the code base and require to be unset
   before an upgrade from releases older than release 4.1.0,

    - features.lock-heal
    - features.grace-timeout

To check if these options are set use,

```console
gluster volume info
```

and ensure that the above options are not part of the `Options Reconfigured:`

@@ -24,7 +25,7 @@ section in the output of all volumes in the cluster.

If these are set, then unset them using the following commands,

```{ .console .no-copy }
# gluster volume reset <volname> <option>
```

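As a concrete illustration, unsetting both options on a hypothetical volume named `demo-vol` and re-checking could look like the following (repeat for every volume that still lists them):

```sh
gluster volume reset demo-vol features.lock-heal
gluster volume reset demo-vol features.grace-timeout

# No output from this check means the options are no longer set on the volume
gluster volume info demo-vol | grep -E 'features\.(lock-heal|grace-timeout)'
```
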
@@ -11,15 +11,16 @@ version reference.

### Major issues

1. The following options are removed from the code base and require to be unset
   before an upgrade from releases older than release 4.1.0,

    - features.lock-heal
    - features.grace-timeout

To check if these options are set use,

```console
gluster volume info
```

and ensure that the above options are not part of the `Options Reconfigured:`

@@ -27,7 +28,7 @@ section in the output of all volumes in the cluster.

If these are set, then unset them using the following commands,

```{ .console .no-copy }
# gluster volume reset <volname> <option>
```

@@ -10,22 +10,23 @@ documented instructions, replacing 7 when you encounter 4.1 in the guide as the

version reference.

> **NOTE:** If you have ever enabled quota on your volumes then after the upgrade
> is done, you will have to restart all the nodes in the cluster one by one so as to
> fix the checksum values in the quota.cksum file under the `/var/lib/glusterd/vols/<volname>/` directory.
> The peers may go into `Peer rejected` state while doing so but once all the nodes are rebooted
> everything will be back to normal.

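After each node comes back up, it can help to confirm that the pool has settled before rebooting the next one; a minimal check (exact status strings can vary slightly between releases):

```sh
# All peers should report "Peer in Cluster (Connected)" and bricks should be online
gluster peer status
gluster volume status
```
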
### Major issues

1. The following options are removed from the code base and require to be unset
   before an upgrade from releases older than release 4.1.0,

    - features.lock-heal
    - features.grace-timeout

To check if these options are set use,

```console
gluster volume info
```

and ensure that the above options are not part of the `Options Reconfigured:`

@@ -33,7 +34,7 @@ section in the output of all volumes in the cluster.

If these are set, then unset them using the following commands,

```{ .console .no-copy }
# gluster volume reset <volname> <option>
```

@@ -7,17 +7,19 @@ aware of the features and fixes provided with the release.

> With version 8, there are certain changes introduced to the directory structure of changelog files in gluster geo-replication.
> Thus, before the upgrade of geo-rep packages, we need to execute the [upgrade script](https://github.com/gluster/glusterfs/commit/2857fe3fad4d2b30894847088a54b847b88a23b9) with the brick path as argument, as described below:
>
> 1. Stop the geo-rep session
> 2. Run the upgrade script with the brick path as the argument. The script can be used in a loop for multiple bricks (see the sketch after this note).
> 3. Start the upgrade process.
>
> This script will update the existing changelog directory structure and the paths inside the htime files to the new format introduced in version 8.
> If the above mentioned script is not executed, the search algorithm used during the history crawl will fail with wrong results when upgrading from version 7 and below to version 8 and above.

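A minimal sketch of step (2), assuming the upgrade script from the linked commit has been saved locally as `./georep-upgrade.py` (the file name, interpreter, and brick paths here are placeholders; use the script and paths from your own installation):

```sh
# Run the changelog-format upgrade once per brick hosted on this node
# (script name and brick paths are placeholders)
for brick in /bricks/brick1 /bricks/brick2; do
    python3 ./georep-upgrade.py "$brick"
done
```
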
Refer to the [generic upgrade procedure](./generic-upgrade-procedure.md) guide and follow documented instructions.

## Major issues

### The following options are removed from the code base and require to be unset

before an upgrade from releases older than release 4.1.0,

- features.lock-heal

@@ -26,7 +28,7 @@ before an upgrade from releases older than release 4.1.0,

To check if these options are set use,

```console
gluster volume info
```

and ensure that the above options are not part of the `Options Reconfigured:`

@@ -34,7 +36,7 @@ section in the output of all volumes in the cluster.

If these are set, then unset them using the following commands,

```{ .console .no-copy }
# gluster volume reset <volname> <option>
```

@@ -48,7 +50,6 @@ If these are set, then unset them using the following commands,

- Tiering support (tier xlator and changetimerecorder)
- Glupy

**NOTE:** Failure to do the above may result in failure during online upgrades,
and the reset of these options to their defaults needs to be done **prior** to
upgrading the cluster.

@@ -10,6 +10,7 @@ Refer, to the [generic upgrade procedure](./generic-upgrade-procedure.md) guide

## Major issues

### The following options are removed from the code base and require to be unset

before an upgrade from releases older than release 4.1.0,

- features.lock-heal

@@ -18,7 +19,7 @@ before an upgrade from releases older than release 4.1.0,

To check if these options are set use,

```console
gluster volume info
```

and ensure that the above options are not part of the `Options Reconfigured:`

@@ -26,11 +27,11 @@ section in the output of all volumes in the cluster.

If these are set, then unset them using the following commands,

```{ .console .no-copy }
# gluster volume reset <volname> <option>
```

### Make sure you are not using any of the following deprecated features:

- Block device (bd) xlator
- Decompounder feature

@@ -40,7 +41,6 @@ If these are set, then unset them using the following commands,

- Tiering support (tier xlator and changetimerecorder)
- Glupy

**NOTE:** Failure to do the above may result in failure during online upgrades,
and the reset of these options to their defaults needs to be done **prior** to
upgrading the cluster.